
groupByKey Operator — Streaming Aggregation (with Explicit State Logic)


groupByKey operator combines rows (of type T) into a KeyValueGroupedDataset, with the keys (of type K) generated by a key-generating function func and the values being the collections of one or more rows associated with each key.

groupByKey takes a func function that maps a row (of type T) to the group key (of type K) the row is associated with.

Note
The type of the input argument of func is the type of the rows in the Dataset (i.e. T for a Dataset[T]).

groupByKey might group together customer orders from the same postal code (wherein the “key” would be the postal code of each individual order, and the “value” would be the order itself).
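The postal-code idea can be sketched as follows. This is a minimal, hypothetical example: the Order case class, its field names, and the sample data are made up for illustration, and a SparkSession named spark is assumed to be in scope.

```scala
// Assumes an active SparkSession named `spark`.
import spark.implicits._

// Hypothetical order record for illustration only.
case class Order(orderId: Long, postalCode: String, amount: Double)

val orders: Dataset[Order] = Seq(
  Order(1, "10001", 25.0),
  Order(2, "10001", 40.0),
  Order(3, "94105", 15.0)
).toDS

// Group orders by postal code: K = String, T = Order.
val byPostalCode: KeyValueGroupedDataset[String, Order] =
  orders.groupByKey(order => order.postalCode)

// For example, count the orders per postal code.
val ordersPerPostalCode = byPostalCode.count()
```

Each key ("10001", "94105") is then associated with the group of Order rows that produced it.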

The following example code shows how to apply groupByKey operator to a structured stream of timestamped values of different devices.
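A sketch of such a stream is below. The built-in rate source stands in for real device readings here, and the Signal case class with its deviceId field is an assumption made for illustration; a SparkSession named spark is assumed to be in scope.

```scala
import java.sql.Timestamp

// Assumes an active SparkSession named `spark`.
import spark.implicits._

// Hypothetical record of a timestamped device reading.
case class Signal(timestamp: Timestamp, deviceId: Long)

// Use the rate source (columns: timestamp, value) to simulate devices.
val signals = spark
  .readStream
  .format("rate")
  .option("rowsPerSecond", 1)
  .load()
  .withColumnRenamed("value", "deviceId")
  .as[Signal]

// groupByKey on a streaming Dataset: one group per device.
val byDevice: KeyValueGroupedDataset[Long, Signal] =
  signals.groupByKey(_.deviceId)

// e.g. a running count per device (needs Complete or Update output mode).
val signalsPerDevice = byDevice.count()
```

From the KeyValueGroupedDataset you can continue with aggregations such as count, agg, or stateful operators like mapGroupsWithState.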

Internally, groupByKey creates a KeyValueGroupedDataset with the following:

  • Encoders for K keys and T rows

  • QueryExecution for AppendColumns unary logical operator with the func function and the analyzed logical plan of the Dataset (groupBy is executed on)

  • Grouping attributes
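The AppendColumns logical operator mentioned above can be observed by printing the extended plan of a Dataset built on top of groupByKey. A minimal sketch, assuming a SparkSession named spark in scope:

```scala
// Assumes an active SparkSession named `spark`.
import spark.implicits._

val ds = Seq("a", "bb", "ccc").toDS

// groupByKey with the string length as the group key.
val counted = ds.groupByKey(_.length).count()

// The analyzed logical plan should show an AppendColumns node
// carrying the key-generating function.
counted.explain(extended = true)
```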

Credits

  • The example with customer orders and postal codes is borrowed from Apache Beam’s Using GroupByKey Programming Guide.
