Schema — Structure of Data

A schema is the description of the structure of your data (which, together with the data itself, makes up a Dataset in Spark SQL). It can be implicit (and inferred at runtime) or explicit (and known at compile time).

A schema is described using StructType, which is a collection of StructField objects (each in turn a tuple of a name, a type and a nullability flag).

StructType and StructField belong to the org.apache.spark.sql.types package.
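For illustration, here is a minimal sketch of building such a schema by hand; the field names id and name are made up for the example:

```scala
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

// A schema with a non-nullable id field and a nullable name field
val schema = StructType(Array(
  StructField("id", IntegerType, nullable = false),
  StructField("name", StringType, nullable = true)))
```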

You can use the canonical string representation of SQL types to describe the types in a schema (that is inherently untyped at compile time) or use type-safe types from the org.apache.spark.sql.types package.
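A sketch showing both styles for the same two-field schema (the field names a and b are arbitrary):

```scala
import org.apache.spark.sql.types.{IntegerType, StringType, StructType}

// Using the canonical string representation of the types
val schemaUntyped = new StructType()
  .add("a", "int")
  .add("b", "string")

// Using the type-safe types from org.apache.spark.sql.types
val schemaTyped = new StructType()
  .add("a", IntegerType)
  .add("b", StringType)

// Both forms describe the same schema
assert(schemaUntyped == schemaTyped)
```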

Tip: Read up on CatalystSqlParser, which is responsible for parsing data types.
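A small sketch of what the parser returns; note that CatalystSqlParser is an internal Catalyst API, so this is for exploration only:

```scala
import org.apache.spark.sql.catalyst.parser.CatalystSqlParser
import org.apache.spark.sql.types.{ArrayType, IntegerType}

// The canonical string representations are parsed into Catalyst DataTypes
assert(CatalystSqlParser.parseDataType("int") == IntegerType)
assert(CatalystSqlParser.parseDataType("array<int>") == ArrayType(IntegerType))
```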

It is, however, recommended to use the DataTypes class and its static factory methods to create schema types.
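A minimal sketch using the DataTypes factory methods (the field names are again made up):

```scala
import org.apache.spark.sql.types.DataTypes

// createStructField(name, dataType, nullable) and createStructType build the schema
val fields = Array(
  DataTypes.createStructField("id", DataTypes.IntegerType, false),
  DataTypes.createStructField("name", DataTypes.StringType, true))

val schema = DataTypes.createStructType(fields)
```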

StructType offers printTreeString, which makes presenting the schema more user-friendly.
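For example (the schema here is just illustrative):

```scala
import org.apache.spark.sql.types.{LongType, StringType, StructType}

val schema = new StructType()
  .add("id", LongType, nullable = false)
  .add("name", StringType)

// printTreeString prints the schema as the familiar tree that Dataset.printSchema shows
schema.printTreeString()
// root
//  |-- id: long (nullable = false)
//  |-- name: string (nullable = true)
```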

As of Spark 2.0, you can describe the schema of your strongly-typed datasets using encoders.
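A sketch with a hypothetical Person case class:

```scala
import org.apache.spark.sql.Encoders

case class Person(id: Long, name: String)

// The encoder of a strongly-typed Dataset[Person] carries the schema
val schema = Encoders.product[Person].schema

schema.printTreeString()
// root
//  |-- id: long (nullable = false)
//  |-- name: string (nullable = true)
```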

Implicit Schema
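A minimal sketch of an implicitly inferred schema, assuming a local SparkSession and arbitrary column names label and sentence:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("implicit-schema").getOrCreate()
import spark.implicits._

// The schema is inferred implicitly from the Scala types of the tuples
val df = Seq((0, "hello world"), (1, "two spaces inside")).toDF("label", "sentence")

df.printSchema()
// root
//  |-- label: integer (nullable = false)
//  |-- sentence: string (nullable = true)
```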
