DataStreamReader — Loading Data from Streaming Data Source

DataStreamReader is the interface to describe how data is loaded to a streaming Dataset from a streaming source.

Table 1. DataStreamReader’s Methods

csv

Sets csv as the format of the data source

format

Specifies the format of the data source

The format is used internally as the name (alias) of the streaming source to use to load the data

json

Sets json as the format of the data source

load

Creates a streaming DataFrame (that is internally a logical plan with a StreamingRelationV2 or StreamingRelation leaf logical operator). The path can be given explicitly or specified as an option.

option

Sets a loading option

options

Specifies the configuration options of a data source

Note
Use the option method if you prefer specifying the options one by one or there is only one in use.

orc

Sets orc as the format of the data source

parquet

Sets parquet as the format of the data source

schema

Specifies the user-defined schema of the streaming data source (as a StructType or a DDL-formatted table schema, e.g. a INT, b STRING)

text

Sets text as the format of the data source

textFile

Loads a streaming Dataset[String] (not a DataFrame)

Figure 1. DataStreamReader and The Others

DataStreamReader is used by a Spark developer to describe how Spark Structured Streaming loads datasets from a streaming source (which in the end creates a logical plan for a streaming query).

Note
DataStreamReader is the Spark developer-friendly API to create a StreamingRelation logical operator (that represents a streaming source in a logical plan).

You can access a DataStreamReader using the SparkSession.readStream method.
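A minimal sketch of getting one (the application name and master are illustrative):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.DataStreamReader

val spark: SparkSession = SparkSession.builder
  .appName("demo")       // illustrative settings
  .master("local[*]")
  .getOrCreate()

// readStream returns a fresh DataStreamReader to describe the streaming source
val reader: DataStreamReader = spark.readStream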

DataStreamReader supports many source formats natively and offers the interface to define custom formats.

Note
DataStreamReader assumes the parquet file format by default, which you can change using the spark.sql.sources.default property.
Note
The hive source format is not supported.

After you have described the streaming pipeline to read datasets from an external streaming data source, you eventually trigger the loading using the format-agnostic load or a format-specific operator (e.g. json, csv).
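For example (a sketch; the rate source and its rowsPerSecond option are built into Spark, while the /tmp/events directory is hypothetical):

// spark: SparkSession (e.g. predefined in spark-shell)

// Describe the source first, then trigger loading with the format-agnostic load
val rates = spark.readStream
  .format("rate")
  .option("rowsPerSecond", 1)
  .load

// Or use a format-specific operator; json(path) is shorthand for
// format("json").load(path). File sources need a user-defined schema.
val events = spark.readStream
  .schema("a INT, b STRING")
  .json("/tmp/events")   // hypothetical input directory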

Table 2. DataStreamReader’s Internal Properties (in alphabetical order)

extraOptions

Initial value: (empty)

Collection of key-value configuration options

source

Initial value: spark.sql.sources.default property

Source format of the datasets in a streaming data source

userSpecifiedSchema

Initial value: (empty)

Optional user-defined schema

Specifying Loading Options — option Method

The option family of methods specifies additional options to a streaming data source.

For user convenience, values of String, Boolean, Long, and Double types are supported; internally, they are all converted to the String type.

Internally, option sets extraOptions internal property.

Note
You can also set options in bulk using the options method. You have to do the type conversion yourself, though.
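A sketch of both styles (header and maxFilesPerTrigger are regular file-source options; the csv format is arbitrary here):

// spark: SparkSession (e.g. predefined in spark-shell)

// One by one -- Boolean and Long values are accepted
// and converted to String internally
val oneByOne = spark.readStream
  .format("csv")
  .option("header", true)
  .option("maxFilesPerTrigger", 1L)

// In bulk -- with options you pass Strings yourself
val inBulk = spark.readStream
  .format("csv")
  .options(Map("header" -> "true", "maxFilesPerTrigger" -> "1"))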

Creating Streaming Dataset (to Represent Loading Data From Streaming Source) — load Method

load(path: String) specifies the path option before passing the call to the parameterless load().

load…FIXME
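Under that description, the two reads below should be equivalent (the /tmp/in directory is hypothetical):

// spark: SparkSession (e.g. predefined in spark-shell)

// Explicit path argument
val viaArgument = spark.readStream
  .format("text")
  .load("/tmp/in")   // hypothetical directory

// Same thing: path given as an option, then the parameterless load()
val viaOption = spark.readStream
  .format("text")
  .option("path", "/tmp/in")
  .load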

Built-in Formats

DataStreamReader can load streaming datasets from data sources of the following formats:

  • json

  • csv

  • orc

  • parquet

  • text

  • textFile (returns Dataset[String], not a DataFrame)

These methods simply pass the call on to format followed by load(path).
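For example (the paths are hypothetical; the DDL schema reuses the example from Table 1):

// spark: SparkSession (e.g. predefined in spark-shell)

// Equivalent ways to load a streaming JSON dataset
val viaShorthand = spark.readStream.schema("a INT, b STRING").json("/tmp/json")
val viaFormat    = spark.readStream.schema("a INT, b STRING").format("json").load("/tmp/json")

// textFile is the odd one out: it gives a Dataset[String], not a DataFrame
val lines: org.apache.spark.sql.Dataset[String] =
  spark.readStream.textFile("/tmp/text")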
