
ExecutorSource

ExecutorSource is the metrics source of an Executor. It uses the executor's threadPool (a ThreadPoolExecutor) to compute the thread-pool gauges, and Hadoop's FileSystem statistics for the filesystem gauges.

Note
Every executor has its own separate ExecutorSource that is registered when CoarseGrainedExecutorBackend receives a RegisteredExecutor.

The name of an ExecutorSource is executor.
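Like any Spark metrics source, these gauges can be exposed through a sink configured in a metrics properties file. As a sketch (assuming the file is passed to Spark via spark.metrics.conf), a ConsoleSink would periodically print the executor.threadpool.* and executor.filesystem.* gauges for each executor:

```properties
# metrics.properties (hypothetical example; point spark.metrics.conf at this file)
# Report metrics from all instances, including each executor's "executor" source,
# to the console every 10 seconds.
*.sink.console.class=org.apache.spark.metrics.sink.ConsoleSink
*.sink.console.period=10
*.sink.console.unit=seconds
```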

Figure 1. ExecutorSource in JConsole (using Spark Standalone)
Table 1. ExecutorSource Gauges

threadpool.activeTasks
  Approximate number of threads that are actively executing tasks. Uses ThreadPoolExecutor.getActiveCount().

threadpool.completeTasks
  Approximate total number of tasks that have completed execution. Uses ThreadPoolExecutor.getCompletedTaskCount().

threadpool.currentPool_size
  Current number of threads in the pool. Uses ThreadPoolExecutor.getPoolSize().

threadpool.maxPool_size
  Maximum allowed number of threads in the pool. Uses ThreadPoolExecutor.getMaximumPoolSize().

filesystem.hdfs.read_bytes
  Uses Hadoop's FileSystem.getAllStatistics() and getBytesRead().

filesystem.hdfs.write_bytes
  Uses Hadoop's FileSystem.getAllStatistics() and getBytesWritten().

filesystem.hdfs.read_ops
  Uses Hadoop's FileSystem.getAllStatistics() and getReadOps().

filesystem.hdfs.largeRead_ops
  Uses Hadoop's FileSystem.getAllStatistics() and getLargeReadOps().

filesystem.hdfs.write_ops
  Uses Hadoop's FileSystem.getAllStatistics() and getWriteOps().

filesystem.file.read_bytes, filesystem.file.write_bytes, filesystem.file.read_ops, filesystem.file.largeRead_ops, filesystem.file.write_ops
  The same as their filesystem.hdfs.* counterparts, but for the file scheme.
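The four threadpool.* gauges are thin wrappers over standard ThreadPoolExecutor accessors. The stand-alone sketch below (not Spark's actual code; Spark registers these calls as Dropwizard gauges) shows what each underlying method reports after a trivial task has run:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ThreadPoolGauges {
    public static void main(String[] args) throws InterruptedException {
        // A plain ThreadPoolExecutor standing in for the executor's task thread pool:
        // core size 2, maximum size 4, idle threads kept alive for 60 seconds.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            2, 4, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

        pool.submit(() -> {});   // run one trivial task
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);

        // The values backing the threadpool.* gauges:
        System.out.println("activeTasks      = " + pool.getActiveCount());
        System.out.println("completeTasks    = " + pool.getCompletedTaskCount());
        System.out.println("currentPool_size = " + pool.getPoolSize());
        System.out.println("maxPool_size     = " + pool.getMaximumPoolSize());
    }
}
```

After shutdown completes, activeTasks drops back to 0, completeTasks reflects the one finished task, and maxPool_size stays at the configured maximum of 4.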
