
Dataset — Structured Query with Data Encoder

Dataset is a strongly-typed data structure in Spark SQL that represents a structured query.

Note
A structured query can be written using SQL or Dataset API.

The following figure shows the relationship between different entities of Spark SQL that all together give the Dataset data structure.

Figure 1. Dataset’s Internals

It is therefore fair to say that Dataset consists of the following three elements:

  1. QueryExecution (with the parsed unanalyzed LogicalPlan of a structured query)

  2. Encoder (of the type of the records for fast serialization and deserialization to and from InternalRow)

  3. SparkSession

When created, Dataset takes such a 3-element tuple with a SparkSession, a QueryExecution and an Encoder.
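For illustration, a minimal sketch (assuming a local SparkSession) of how these elements surface in the public API:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("dataset-demo").getOrCreate()
import spark.implicits._

// Creating a Dataset brings the three elements together: the SparkSession,
// a QueryExecution for the structured query, and an implicit Encoder[Int]
val ds = Seq(1, 2, 3).toDS()

ds.sparkSession    // the SparkSession the Dataset is bound to
ds.queryExecution  // the QueryExecution behind the structured query (a developer API)
```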

Dataset is created when:

Datasets are lazy: structured query operators and expressions are only evaluated when an action is invoked.

The Dataset API offers declarative and type-safe operators that make for an improved data-processing experience (compared to DataFrames, which were a set of index- or column name-based Rows).

Note

Dataset was first introduced in Apache Spark 1.6.0 as an experimental feature and has since become a fully supported API.

As of Spark 2.0.0, DataFrame – the flagship data abstraction of previous versions of Spark SQL – is currently a mere type alias for Dataset[Row]:
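```scala
// the alias as defined in Spark's org.apache.spark.sql package object
type DataFrame = Dataset[Row]
```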

Dataset offers the convenience of RDDs with the performance optimizations of DataFrames and the strong static type-safety of Scala. The last feature, bringing strong type-safety to DataFrames, is what makes Dataset so appealing. All the features together give you a more functional programming interface to work with structured data.

Only Datasets give you syntax and analysis checks at compile time (which is not possible with DataFrames, regular SQL queries or even RDDs).

Using Dataset objects turns DataFrames of Row instances into DataFrames of case classes with proper names and types (following their equivalents in the case classes). Instead of using indices to access the fields of a DataFrame and casting them to a type, all this is handled automatically by Datasets and checked by the Scala compiler.
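A minimal sketch of that difference (the Person case class and the sample rows are made up for illustration):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("typed-demo").getOrCreate()
import spark.implicits._

// Hypothetical case class used only for illustration
case class Person(name: String, age: Int)

val df = Seq(("Ann", 30), ("Bob", 25)).toDF("name", "age")  // DataFrame, i.e. Dataset[Row]
val people = df.as[Person]                                  // Dataset[Person]

// Field access is checked by the Scala compiler; no index-based Row access or casting
people.filter(_.age > 26).map(_.name.toUpperCase).show()
```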

If, however, a LogicalPlan is used to create a Dataset, the logical plan is first executed (using the current SessionState in the SparkSession), which yields the QueryExecution plan.

A Dataset is Queryable and Serializable, i.e. can be saved to a persistent storage.

Note
SparkSession and QueryExecution are transient attributes of a Dataset and therefore do not participate in Dataset serialization. The only firmly-tied feature of a Dataset is the Encoder.

You can request the “untyped” view of a Dataset or access the RDD that is generated after executing the query. It is supposed to give you a more pleasant experience while transitioning from the legacy RDD-based or DataFrame-based APIs you may have used in the earlier versions of Spark SQL or encourage migrating from Spark Core’s RDD API to Spark SQL’s Dataset API.

The default storage level for Datasets is MEMORY_AND_DISK because recomputing the in-memory columnar representation of the underlying table is expensive. You can however persist a Dataset.
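For illustration, persisting a Dataset with the default storage level made explicit (assumes an active SparkSession named spark):

```scala
import org.apache.spark.storage.StorageLevel

// Assumes an active SparkSession named spark
val ds = spark.range(100)

// MEMORY_AND_DISK is the default level, so this is equivalent to ds.persist() or ds.cache()
ds.persist(StorageLevel.MEMORY_AND_DISK)
ds.count()      // the first action materializes the cached data
ds.unpersist()
```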

Note
Spark 2.0 introduced a new query model called Structured Streaming for continuous incremental execution of structured queries. That made it possible to treat Datasets as both static, bounded data sets and streaming, unbounded ones, with a single unified API for the different execution models.

A Dataset is local if it was created from local collections using SparkSession.emptyDataset or SparkSession.createDataset methods and their derivatives like toDF. If so, the queries on the Dataset can be optimized and run locally, i.e. without using Spark executors.
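A quick sketch (assumes an active SparkSession named spark):

```scala
// Assumes an active SparkSession named spark
import spark.implicits._

val local = spark.createDataset(Seq(1, 2, 3))  // backed by a LocalRelation
local.isLocal                                  // true: collect/take need no executors

val empty = spark.emptyDataset[String]
empty.isLocal                                  // true
```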

Note
Dataset makes sure that the underlying QueryExecution is analyzed and checked.
Table 1. Dataset’s Properties

boundEnc

ExpressionEncoder

Used when…​FIXME

deserializer

Deserializer expression to convert internal rows to objects of type T

Created lazily by requesting the ExpressionEncoder to resolveAndBind

Used when:

exprEnc

Implicit ExpressionEncoder

Used when…​FIXME

logicalPlan

Analyzed logical plan with all logical commands executed and turned into a LocalRelation.

When initialized, logicalPlan requests the QueryExecution for analyzed logical plan. If the plan is a logical command or a union thereof, logicalPlan executes the QueryExecution (using executeCollect).

planWithBarrier

rdd

(lazily-created) RDD of JVM objects of type T (as converted from rows in Dataset in the internal binary row format).

Note
rdd adds an extra execution step that converts rows from their internal binary row format to JVM objects, which impacts JVM memory since the objects then live on the JVM heap (while before they were stored outside of it). You should not use rdd directly.

Internally, rdd first creates a new logical plan that deserializes the Dataset’s logical plan.

rdd then requests SessionState to execute the logical plan to get the corresponding RDD of binary rows.

Note
rdd uses SparkSession to access SessionState.

rdd then requests the Dataset’s ExpressionEncoder for the data type of the rows (using deserializer expression) and maps over them (per partition) to create records of the expected type T.

Note
rdd is at the “boundary” between the internal binary row format and the JVM type of the dataset. Avoid the extra deserialization step to lower JVM memory requirements of your Spark application.
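A short sketch of crossing that boundary (assumes an active SparkSession named spark); note the warning above about the extra deserialization cost:

```scala
// Assumes an active SparkSession named spark
import spark.implicits._
import org.apache.spark.rdd.RDD

val ds = spark.createDataset(Seq(1, 2, 3))

// Crossing the boundary: rows are deserialized from the internal binary row
// format into JVM objects of type Int, which adds to JVM heap usage
val asRdd: RDD[Int] = ds.rdd
asRdd.map(_ * 2).collect()
```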

sqlContext

Lazily-created SQLContext

Used when…​FIXME

Getting Input Files of Relations (in Structured Query) — inputFiles Method

inputFiles requests QueryExecution for optimized logical plan and collects the following logical operators:

inputFiles then requests the logical operators for their underlying files:
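For illustration, inputFiles can be inspected on a file-based relation like this (the path is hypothetical; assumes an active SparkSession named spark):

```scala
// Assumes an active SparkSession named spark; the path is hypothetical
val users = spark.read.parquet("/tmp/users.parquet")
users.inputFiles.foreach(println)   // the files backing the underlying relation
```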

resolve Internal Method

Caution
FIXME

Creating Dataset Instance

Dataset takes the following when created:

Note
You can also create a Dataset using a LogicalPlan that is immediately executed using SessionState.

Internally, Dataset requests QueryExecution to analyze itself.

Dataset initializes the internal registries and counters.

Is Dataset Local? — isLocal Method

isLocal flag is enabled (i.e. true) when operators like collect or take can be run locally, i.e. without using executors.

Internally, isLocal checks whether the logical query plan of a Dataset is a LocalRelation.

Is Dataset Streaming? — isStreaming method

isStreaming is enabled (i.e. true) when the logical plan is streaming.

Internally, isStreaming takes the Dataset’s logical plan and checks whether the plan is streaming or not.
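A quick sketch (assumes an active SparkSession named spark; the built-in rate source is used only to obtain a streaming Dataset):

```scala
// Assumes an active SparkSession named spark
val rates = spark.readStream.format("rate").load()
rates.isStreaming   // true

val static = spark.range(10)
static.isStreaming  // false
```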

Queryable

Caution
FIXME

withNewRDDExecutionId Internal Method

withNewRDDExecutionId executes the input body action under new execution id.

Caution
FIXME What’s the difference between withNewRDDExecutionId and withNewExecutionId?
Note
withNewRDDExecutionId is used when Dataset.foreach and Dataset.foreachPartition actions are used.

Creating DataFrame (For Logical Query Plan and SparkSession) — ofRows Internal Factory Method

Note
ofRows is part of the Dataset Scala object that is marked as private[sql] and so can only be accessed from code in the org.apache.spark.sql package.

ofRows returns a DataFrame (which is the type alias for Dataset[Row]). ofRows uses a RowEncoder based on the schema of the (analyzed) input logicalPlan.

Internally, ofRows prepares the input logicalPlan for execution and creates a Dataset[Row] with the current SparkSession, the QueryExecution and RowEncoder.
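A sketch of ofRows that paraphrases the description above (it mirrors the Spark 2.x source and lives in the private[sql] Dataset object; exact details may differ between versions):

```scala
import org.apache.spark.sql.{DataFrame, Dataset, Row, SparkSession}
import org.apache.spark.sql.catalyst.encoders.RowEncoder
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan

def ofRows(sparkSession: SparkSession, logicalPlan: LogicalPlan): DataFrame = {
  // prepare the logical plan for execution using the current SessionState
  val qe = sparkSession.sessionState.executePlan(logicalPlan)
  qe.assertAnalyzed()
  // Dataset[Row] with the SparkSession, the QueryExecution and a RowEncoder
  new Dataset[Row](sparkSession, qe, RowEncoder(qe.analyzed.schema))
}
```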

Note

ofRows is used when:

Tracking Multi-Job Structured Query Execution (PySpark) — withNewExecutionId Internal Method

withNewExecutionId executes the input body action under new execution id.

Note
withNewExecutionId sets a unique execution id so that all Spark jobs belong to the Dataset action execution.
Note

withNewExecutionId is used exclusively when Dataset is executing Python-based actions (i.e. collectToPython, collectAsArrowToPython and toPythonIterator) that are not of much interest in this gitbook.

Feel free to contact me at jacek@japila.pl if you think I should re-consider my decision.

Executing Action Under New Execution ID — withAction Internal Method

withAction requests QueryExecution for the optimized physical query plan and resets the metrics of every physical operator (in the physical plan).

withAction requests SQLExecution to execute the input action with the executable physical plan (tracked under a new execution id).

In the end, withAction notifies the ExecutionListenerManager that the action (identified by name) has finished, successfully or with an exception.
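Roughly, withAction has the following shape (a simplified sketch of the Spark 2.x code based on the description above, written as if inside the Dataset class; not the exact implementation):

```scala
import org.apache.spark.sql.execution.{QueryExecution, SQLExecution, SparkPlan}

private def withAction[U](name: String, qe: QueryExecution)(action: SparkPlan => U): U = {
  try {
    // reset the metrics of every physical operator in the executable plan
    qe.executedPlan.foreach(_.resetMetrics())
    val start = System.nanoTime()
    // run the action with the physical plan, tracked under a new execution id
    val result = SQLExecution.withNewExecutionId(sparkSession, qe) {
      action(qe.executedPlan)
    }
    val end = System.nanoTime()
    sparkSession.listenerManager.onSuccess(name, qe, end - start)
    result
  } catch {
    case e: Exception =>
      sparkSession.listenerManager.onFailure(name, qe, e)
      throw e
  }
}
```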

Note
withAction uses SparkSession to access ExecutionListenerManager.
Note

withAction is used when Dataset is requested for the following:

Creating Dataset Instance (For LogicalPlan and SparkSession) — apply Internal Factory Method

Note
apply is part of the Dataset Scala object that is marked as private[sql] and so can only be accessed from code in the org.apache.spark.sql package.

apply…​FIXME

Note

apply is used when:

Collecting All Rows From Spark Plan — collectFromPlan Internal Method

collectFromPlan…​FIXME

Note
collectFromPlan is used for Dataset.head, Dataset.collect and Dataset.collectAsList operators.

selectUntyped Internal Method

selectUntyped…​FIXME

Note
selectUntyped is used exclusively when Dataset.select typed transformation is used.

Helper Method for Typed Transformations — withTypedPlan Internal Method

withTypedPlan…​FIXME

Note
withTypedPlan is annotated with Scala’s @inline annotation that requests the Scala compiler to try especially hard to inline it.
Note
withTypedPlan is used in the Dataset typed transformations, i.e. withWatermark, joinWith, hint, as, filter, limit, sample, dropDuplicates, filter, map, repartition, repartitionByRange, coalesce and sort with sortWithinPartitions (through the sortInternal internal method).

Helper Method for Set-Based Typed Transformations — withSetOperator Internal Method

withSetOperator…​FIXME

Note
withSetOperator is annotated with Scala’s @inline annotation that requests the Scala compiler to try especially hard to inline it.
Note
withSetOperator is used in the Dataset typed transformations, i.e. union, unionByName, intersect and except.

sortInternal Internal Method

sortInternal creates a Dataset with Sort unary logical operator (and the logicalPlan as the child logical plan).

Internally, sortInternal first builds ordering expressions for the given sortExprs columns, i.e. it takes the sortExprs columns and makes sure that they are SortOrder expressions already (leaving them untouched) or wraps them into SortOrder expressions with the Ascending sort direction.

In the end, sortInternal creates a Dataset with Sort unary logical operator (with the ordering expressions, the given global flag, and the logicalPlan as the child logical plan).
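For illustration, the two entry points into sortInternal (assumes an active SparkSession named spark):

```scala
// Assumes an active SparkSession named spark
import spark.implicits._
import org.apache.spark.sql.functions.col

val people = Seq(("Ann", 30), ("Bob", 25)).toDF("name", "age")

// sort -> sortInternal(global = true); columns without an explicit direction
// are wrapped into SortOrder expressions with Ascending
people.sort(col("age").desc, col("name")).show()

// sortWithinPartitions -> sortInternal(global = false)
people.sortWithinPartitions("age").show()
```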

Note
sortInternal is used for the sort and sortWithinPartitions typed transformations in the Dataset API (with the only change of the global flag being enabled and disabled, respectively).

Helper Method for Untyped Transformations and Basic Actions — withPlan Internal Method

withPlan simply uses ofRows internal factory method to create a DataFrame for the input LogicalPlan and the current SparkSession.

Note
withPlan is annotated with Scala’s @inline annotation that requests the Scala compiler to try especially hard to inline it.

Further Reading and Watching
