HadoopFileLinesReader

HadoopFileLinesReader is a Scala Iterator of Apache Hadoop’s org.apache.hadoop.io.Text.

HadoopFileLinesReader is created to access datasets in the following data sources:

  • SimpleTextSource

  • LibSVMFileFormat

  • TextInputCSVDataSource

  • TextInputJsonDataSource

  • TextFileFormat

HadoopFileLinesReader uses an internal iterator that handles access to the files using Hadoop’s FileSystem API.
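
To get a feel for the API, here is a minimal usage sketch (not part of the original text): it assumes the Spark 2.x internal classes HadoopFileLinesReader and PartitionedFile from org.apache.spark.sql.execution.datasources, and the Spark 2.x PartitionedFile(partitionValues, filePath, start, length) constructor.

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.execution.datasources.{HadoopFileLinesReader, PartitionedFile}

// Assumption: Spark 2.x-style PartitionedFile(partitionValues, filePath, start, length)
val file = PartitionedFile(InternalRow.empty, "hdfs:///data/input.txt", 0, 1024)
val conf = new Configuration()

// HadoopFileLinesReader is an Iterator[Text]; each element is one line of the file
val linesReader = new HadoopFileLinesReader(file, conf)
try {
  linesReader.map(_.toString).foreach(println)  // convert Hadoop Text to String
} finally {
  linesReader.close()  // releases the underlying record reader
}
```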

Creating HadoopFileLinesReader Instance

HadoopFileLinesReader takes the following when created:

  • A PartitionedFile to read (with the file’s path, start offset and length)

  • A Hadoop Configuration

iterator Internal Property

When created, HadoopFileLinesReader creates an internal iterator that builds Hadoop’s org.apache.hadoop.mapreduce.lib.input.FileSplit from the file’s path (as Hadoop’s org.apache.hadoop.fs.Path), start offset and length.

iterator creates Hadoop’s TaskAttemptID, TaskAttemptContextImpl and LineRecordReader.

iterator initializes the LineRecordReader (with the FileSplit and the task attempt context) and passes it on to a RecordReaderIterator.
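
Putting the steps above together, the internal iterator can be sketched as follows (field and variable names are approximations of Spark’s internals, not a verbatim copy; file and conf are the values given when the reader is created):

```scala
import java.net.URI
import org.apache.hadoop.fs.Path
import org.apache.hadoop.mapreduce.{JobID, TaskAttemptID, TaskID, TaskType}
import org.apache.hadoop.mapreduce.lib.input.{FileSplit, LineRecordReader}
import org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
import org.apache.spark.sql.execution.datasources.RecordReaderIterator

// Sketch of the iterator property as it would appear inside HadoopFileLinesReader
private val iterator = {
  // FileSplit over the file's path, start offset and length
  val fileSplit = new FileSplit(
    new Path(new URI(file.filePath)), file.start, file.length, Array.empty[String])

  // Synthetic TaskAttemptID and TaskAttemptContextImpl so LineRecordReader can be initialized
  val attemptId = new TaskAttemptID(new TaskID(new JobID(), TaskType.MAP, 0), 0)
  val hadoopAttemptContext = new TaskAttemptContextImpl(conf, attemptId)

  // LineRecordReader reads the split line by line;
  // RecordReaderIterator exposes it as an Iterator[Text]
  val reader = new LineRecordReader()
  reader.initialize(fileSplit, hadoopAttemptContext)
  new RecordReaderIterator(reader)
}
```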

Note
iterator backs the Iterator-specific methods, i.e. hasNext, next and close; a sketch of the delegation follows.
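
The delegation can be pictured like this (the override bodies are an assumption consistent with the Note above, shown as a fragment of the class body):

```scala
// Iterator and Closeable methods delegate to the internal iterator
override def hasNext: Boolean = iterator.hasNext
override def next(): Text = iterator.next()
override def close(): Unit = iterator.close()
```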