
Shuffling

RDD shuffling

Tip
Read the official documentation about the topic Shuffle operations. It is still better than this page.

Shuffling is a process of redistributing data across partitions (aka repartitioning) that may or may not involve moving data across JVM processes, or even over the wire (between executors on separate machines).

Shuffling is the process of data transfer between stages.

Tip
Avoid shuffling at all cost. Think about ways to leverage existing partitions. Leverage partial aggregation to reduce data transfer.

By default, shuffling doesn’t change the number of partitions, only their contents.

  • Avoid groupByKey and use reduceByKey or combineByKey instead.

    • groupByKey shuffles all the data, which is slow.

    • reduceByKey shuffles only the results of the sub-aggregations computed in each partition of the data (see the sketch below).
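
A minimal sketch of the difference (run in spark-shell, where sc is predefined; the sample data and variable names are made up for illustration):

// Sample data: 2 partitions, repeated keys.
val pairs = sc.parallelize(Seq(("a", 1), ("b", 1), ("a", 1), ("b", 1)), 2)

// groupByKey ships every (key, value) pair across the shuffle
// before summing on the reducer side.
val grouped = pairs.groupByKey().mapValues(_.sum)

// reduceByKey first combines values inside each partition (map-side combine),
// so only one partial sum per key and partition crosses the wire.
val reduced = pairs.reduceByKey(_ + _)

// Both yield the same result...
assert(grouped.collect.toMap == reduced.collect.toMap)
// ...and by default the shuffle preserves the number of partitions.
assert(reduced.partitions.length == pairs.partitions.length)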

Example – join

PairRDD offers the join transformation, which (quoting the official documentation):

When called on datasets of type (K, V) and (K, W), returns a dataset of (K, (V, W)) pairs with all pairs of elements for each key.

Let’s have a look at an example and see how it works under the covers:
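The following spark-shell session is a reconstruction of such an example (RDD ids, <console> line numbers and partition counts will differ in your environment):

scala> val kvR = sc.parallelize((0 to 5).map(n => (n, 5)))
kvR: org.apache.spark.rdd.RDD[(Int, Int)] = ParallelCollectionRDD[0] at parallelize at <console>:24

scala> val kwR = sc.parallelize((0 to 5).map(n => (n, 10)))
kwR: org.apache.spark.rdd.RDD[(Int, Int)] = ParallelCollectionRDD[1] at parallelize at <console>:24

scala> val joined = kvR join kwR
joined: org.apache.spark.rdd.RDD[(Int, (Int, Int))] = MapPartitionsRDD[4] at join at <console>:28

scala> joined.toDebugString
res0: String =
(8) MapPartitionsRDD[4] at join at <console>:28 []
 |  MapPartitionsRDD[3] at join at <console>:28 []
 |  CoGroupedRDD[2] at join at <console>:28 []
 +-(8) ParallelCollectionRDD[0] at parallelize at <console>:24 []
 +-(8) ParallelCollectionRDD[1] at parallelize at <console>:24 []

scala> joined.count
res1: Long = 6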

It doesn’t look good when there is an “angle” between “nodes” in the lineage graph (the +- marker in the toDebugString output above). One appears right before the join operation, so a shuffle is expected.

Here is how the job executing joined.count looks in the Web UI.

Figure 1. Executing joined.count

The Web UI screenshot shows 3 stages: the two parallelize stages end with Shuffle Write, and the count stage begins with Shuffle Read. That means shuffling has indeed happened.

Caution
FIXME Just learnt about sc.range(0, 5) as a shorter version of sc.parallelize(0 to 5)
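
For reference, sc.range treats its upper bound as exclusive and yields an RDD[Long], so the exact equivalent of sc.parallelize(0 to 5) is sc.range(0, 6):

scala> sc.parallelize(0 to 5).collect
res0: Array[Int] = Array(0, 1, 2, 3, 4, 5)

scala> sc.range(0, 6).collect
res1: Array[Long] = Array(0, 1, 2, 3, 4, 5)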

The join operation is one of the cogroup operations that use defaultPartitioner, i.e. it walks through the RDD lineage graph (sorted by the number of partitions, decreasing) and picks the partitioner with a positive number of output partitions. Otherwise, it checks the spark.default.parallelism property and, if defined, picks HashPartitioner with the default parallelism of the SchedulerBackend.
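
A simplified sketch of that selection logic (illustrative only, not the actual Spark source; the name defaultPartitionerSketch is made up):

import org.apache.spark.{HashPartitioner, Partitioner}
import org.apache.spark.rdd.RDD

def defaultPartitionerSketch(rdd: RDD[_], others: RDD[_]*): Partitioner = {
  // Walk the RDDs sorted by number of partitions, decreasing.
  val byNumPartitions = (Seq(rdd) ++ others).sortBy(-_.partitions.length)
  // Pick the first already-defined partitioner with a positive number of partitions...
  byNumPartitions.flatMap(_.partitioner).find(_.numPartitions > 0).getOrElse {
    // ...otherwise fall back to a HashPartitioner: sized by spark.default.parallelism
    // if set, or by the largest upstream partition count.
    if (rdd.context.getConf.contains("spark.default.parallelism"))
      new HashPartitioner(rdd.context.defaultParallelism)
    else
      new HashPartitioner(byNumPartitions.head.partitions.length)
  }
}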

join is almost CoGroupedRDD.mapValues.
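
A sketch of that idea (the helper joinSketch is hypothetical; Spark’s real implementation lives in PairRDDFunctions.join and flattens the value collections produced by a CoGroupedRDD):

import scala.reflect.ClassTag
import org.apache.spark.rdd.RDD

// join expressed via cogroup: group both sides by key, then emit
// the cartesian product of the matching values for each key.
def joinSketch[K: ClassTag, V: ClassTag, W: ClassTag](
    left: RDD[(K, V)], right: RDD[(K, W)]): RDD[(K, (V, W))] =
  left.cogroup(right).flatMapValues { case (vs, ws) =>
    for (v <- vs.iterator; w <- ws) yield (v, w)
  }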

Caution
FIXME the default parallelism of scheduler backend