DiskBlockManager
DiskBlockManager creates and maintains the mapping between logical blocks and their physical locations on disk.
By default, one block is mapped to one file with a name given by its BlockId. It is however possible to have a block map to only a segment of a file.
Block files are hashed among the local directories.
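As an illustration, the following sketch shows how such a hash-based mapping can work. It is modeled on the lookup that getFile performs, but the helper name hashToFile and the simplified non-negative hash are this sketch's own:

```scala
import java.io.File

// Illustrative only: map a block's file name to a deterministic
// (local directory, subdirectory) pair, the way block files are
// hashed among the local directories.
def hashToFile(
    filename: String,
    localDirs: Array[File],
    subDirsPerLocalDir: Int): File = {
  val hash = filename.hashCode & Int.MaxValue                    // non-negative hash
  val dirId = hash % localDirs.length                            // which local directory
  val subDirId = (hash / localDirs.length) % subDirsPerLocalDir  // which subdirectory
  new File(new File(localDirs(dirId), "%02x".format(subDirId)), filename)
}
```

Because the hash is a pure function of the file name, any component that knows the configured directories can recompute a block's location without extra metadata.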
Note: DiskBlockManager is used exclusively by DiskStore and created when BlockManager is created (and passed to DiskStore).
| Name | Description |
|---|---|
| localDirs | Local directories for storing block data. |
| subDirsPerLocalDir | The value of the spark.diskStore.subDirectories Spark configuration property. |
Tip: Enable INFO or DEBUG logging levels for org.apache.spark.storage.DiskBlockManager logger to see what happens inside. Add the following line to conf/log4j.properties: `log4j.logger.org.apache.spark.storage.DiskBlockManager=DEBUG`. Refer to Logging.
Finding File — getFile Method
Caution: FIXME
createTempShuffleBlock Method
```scala
createTempShuffleBlock(): (TempShuffleBlockId, File)
```
createTempShuffleBlock creates a temporary block with a new TempShuffleBlockId and returns the id together with the block's file.
Caution: FIXME
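Pending the FIXME above, a sketch of the likely shape: keep generating random ids until one maps to a file that does not exist yet. The standalone TempShuffleBlockId case class and the tempDir parameter are stand-ins for Spark's own types and hashed-directory lookup:

```scala
import java.io.File
import java.util.UUID

// Stand-in for Spark's TempShuffleBlockId, whose name is "temp_shuffle_" + UUID.
case class TempShuffleBlockId(id: UUID) {
  def name: String = "temp_shuffle_" + id
}

// Sketch: retry random ids until the mapped file does not exist yet.
def createTempShuffleBlock(tempDir: File): (TempShuffleBlockId, File) = {
  var blockId = TempShuffleBlockId(UUID.randomUUID())
  while (new File(tempDir, blockId.name).exists()) {
    blockId = TempShuffleBlockId(UUID.randomUUID())
  }
  (blockId, new File(tempDir, blockId.name))
}
```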
getAllFiles Method
```scala
getAllFiles(): Seq[File]
```
getAllFiles…FIXME
Note: getAllFiles is used exclusively when DiskBlockManager is requested to getAllBlocks.
Creating DiskBlockManager Instance
```scala
DiskBlockManager(conf: SparkConf, deleteFilesOnStop: Boolean)
```
When created, DiskBlockManager uses spark.diskStore.subDirectories to set subDirsPerLocalDir.
DiskBlockManager creates one or many local directories to store block data (as localDirs). If no local directory can be created, you should see the following ERROR message in the logs and DiskBlockManager exits with error code 53.
```
ERROR DiskBlockManager: Failed to create any local dir.
```
DiskBlockManager then initializes the internal subDirs registry: for every local directory, an array of subDirsPerLocalDir slots for block-file subdirectories. The arrays also serve as the locks guarding access to those subdirectories.
In the end, DiskBlockManager registers a shutdown hook to clean up the local directories for blocks.
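Put together, the construction steps boil down to roughly the following sketch (a fragment assumed inside DiskBlockManager, with logging and error handling reduced to the essentials; 64 is the default of spark.diskStore.subDirectories):

```scala
// Fragment assumed inside DiskBlockManager (logError comes from Spark's Logging trait).
private val subDirsPerLocalDir = conf.getInt("spark.diskStore.subDirectories", 64)

// One blockmgr-[random UUID] directory per configured parent directory.
private val localDirs: Array[File] = createLocalDirs(conf)
if (localDirs.isEmpty) {
  logError("Failed to create any local dir.")
  System.exit(53)  // error code 53: disk store failed to create a directory
}

// Per local directory: an array of subdirectory slots that also serves as a lock.
private val subDirs = Array.fill(localDirs.length)(new Array[File](subDirsPerLocalDir))

private val shutdownHook = addShutdownHook()
```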
Registering Shutdown Hook — addShutdownHook Internal Method
```scala
addShutdownHook(): AnyRef
```
addShutdownHook registers a shutdown hook to execute doStop at shutdown.
When executed, you should see the following DEBUG message in the logs:
```
DEBUG DiskBlockManager: Adding shutdown hook
```
The registered hook, when triggered at shutdown, prints the following INFO message and then executes doStop.
```
INFO DiskBlockManager: Shutdown hook called
```
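A sketch of the registration, assuming Spark's internal ShutdownHookManager utility (the exact priority is an assumption; higher-priority hooks run earlier, so this hook fires before Spark's temp-directory cleanup):

```scala
import org.apache.spark.util.ShutdownHookManager

// Fragment assumed inside DiskBlockManager: register a hook that logs
// and delegates to doStop at JVM shutdown.
private def addShutdownHook(): AnyRef = {
  logDebug("Adding shutdown hook")
  ShutdownHookManager.addShutdownHook(ShutdownHookManager.TEMP_DIR_SHUTDOWN_PRIORITY + 1) { () =>
    logInfo("Shutdown hook called")
    doStop()
  }
}
```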
Stopping DiskBlockManager (Removing Local Directories for Blocks) — doStop Internal Method
```scala
doStop(): Unit
```
doStop deletes the local directories recursively (only when the constructor’s deleteFilesOnStop is enabled and the parent directories are not registered to be removed at shutdown).
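A sketch of that logic; Utils.deleteRecursively and ShutdownHookManager.hasRootAsShutdownDeleteDir are the helpers Spark uses for the deletion and the "already registered for removal" check, though treat the exact shape as an approximation:

```scala
import org.apache.spark.util.{ShutdownHookManager, Utils}

// Fragment assumed inside DiskBlockManager.
private def doStop(): Unit = {
  if (deleteFilesOnStop) {
    localDirs.foreach { localDir =>
      if (localDir.isDirectory && localDir.exists()) {
        try {
          // Skip dirs whose parent a shutdown hook will delete anyway.
          if (!ShutdownHookManager.hasRootAsShutdownDeleteDir(localDir)) {
            Utils.deleteRecursively(localDir)
          }
        } catch {
          case e: Exception =>
            logError(s"Exception while deleting local spark dir: $localDir", e)
        }
      }
    }
  }
}
```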
Getting Local Directories for Spark to Write Files — Utils.getConfiguredLocalDirs Internal Method
```scala
getConfiguredLocalDirs(conf: SparkConf): Array[String]
```
getConfiguredLocalDirs returns the local directories where Spark can write files.
Internally, getConfiguredLocalDirs uses conf SparkConf to know if External Shuffle Service is enabled (using spark.shuffle.service.enabled).
getConfiguredLocalDirs checks if Spark runs on YARN and if so, returns LOCAL_DIRS-controlled local directories.
In non-YARN mode (or for the driver in yarn-client deploy mode), getConfiguredLocalDirs checks the following environment variables (in this order) and returns the value of the first one that is set:

- SPARK_EXECUTOR_DIRS environment variable
- SPARK_LOCAL_DIRS environment variable
- MESOS_DIRECTORY environment variable (only when External Shuffle Service is not used)
When none of the above environment variables is set, getConfiguredLocalDirs uses the spark.local.dir Spark property or, if that is not set either, falls back on the java.io.tmpdir System property.
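The precedence rules can be summarized in the following sketch (simplified: the YARN branch delegates to getYarnLocalDirs, and in Spark's actual code SPARK_EXECUTOR_DIRS is split on File.pathSeparator while the others are comma-separated):

```scala
import java.io.File
import org.apache.spark.SparkConf

// Sketch of the precedence order of local-directory sources.
def getConfiguredLocalDirs(conf: SparkConf): Array[String] = {
  val shuffleServiceEnabled = conf.getBoolean("spark.shuffle.service.enabled", false)
  if (isRunningInYarnContainer(conf)) {
    getYarnLocalDirs(conf).split(",")                       // LOCAL_DIRS set by YARN
  } else if (sys.env.contains("SPARK_EXECUTOR_DIRS")) {
    sys.env("SPARK_EXECUTOR_DIRS").split(File.pathSeparator)
  } else if (sys.env.contains("SPARK_LOCAL_DIRS")) {
    sys.env("SPARK_LOCAL_DIRS").split(",")
  } else if (sys.env.contains("MESOS_DIRECTORY") && !shuffleServiceEnabled) {
    Array(sys.env("MESOS_DIRECTORY"))
  } else {
    conf.get("spark.local.dir", System.getProperty("java.io.tmpdir")).split(",")
  }
}
```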
Getting Writable Directories in YARN — getYarnLocalDirs Internal Method
```scala
getYarnLocalDirs(conf: SparkConf): String
```
getYarnLocalDirs uses conf SparkConf to read LOCAL_DIRS environment variable with comma-separated local directories (that have already been created and secured so that only the user has access to them).
getYarnLocalDirs throws an Exception with the message Yarn Local dirs can’t be empty if LOCAL_DIRS environment variable was not set.
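A minimal sketch of that behavior:

```scala
import org.apache.spark.SparkConf

// Sketch: YARN exports LOCAL_DIRS as a comma-separated list of secured dirs.
def getYarnLocalDirs(conf: SparkConf): String = {
  val localDirs = Option(conf.getenv("LOCAL_DIRS")).getOrElse("")
  if (localDirs.isEmpty) {
    throw new Exception("Yarn Local dirs can't be empty")
  }
  localDirs
}
```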
Checking If Spark Runs on YARN — isRunningInYarnContainer Internal Method
```scala
isRunningInYarnContainer(conf: SparkConf): Boolean
```
isRunningInYarnContainer uses conf SparkConf to read Hadoop YARN’s CONTAINER_ID environment variable to find out if Spark runs in a YARN container.
Note: The CONTAINER_ID environment variable is exported by YARN NodeManager.
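A minimal sketch of the check:

```scala
import org.apache.spark.SparkConf

// Sketch: a non-null CONTAINER_ID means the JVM runs inside a YARN container.
def isRunningInYarnContainer(conf: SparkConf): Boolean =
  conf.getenv("CONTAINER_ID") != null
```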
Getting All Blocks Stored On Disk — getAllBlocks Method
```scala
getAllBlocks(): Seq[BlockId]
```
getAllBlocks gets all the blocks stored on disk.
Internally, getAllBlocks takes the block files and returns their names (as BlockId).
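A minimal sketch of that mapping (BlockId's companion object can parse a block file name, e.g. "rdd_1_2", back into the corresponding BlockId):

```scala
// Fragment assumed inside DiskBlockManager.
def getAllBlocks(): Seq[BlockId] =
  getAllFiles().map(f => BlockId(f.getName))
```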
Note: getAllBlocks is used exclusively when BlockManager is requested to find IDs of existing blocks for a given filter.
Creating Local Directories for Storing Block Data — createLocalDirs Internal Method
```scala
createLocalDirs(conf: SparkConf): Array[File]
```
createLocalDirs creates blockmgr-[random UUID] directory under local directories to store block data.
Internally, createLocalDirs reads local writable directories and creates a subdirectory blockmgr-[random UUID] under every configured parent directory.
If successful, you should see the following INFO message in the logs:
```
INFO DiskBlockManager: Created local directory at [localDir]
```
When failed to create a local directory, you should see the following ERROR message in the logs:
```
ERROR DiskBlockManager: Failed to create local dir in [rootDir]. Ignoring this directory.
```
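Putting the section together, a sketch of the method (Utils.createDirectory is the helper Spark uses to create a uniquely-named directory under a given root; treat the overall shape as an approximation):

```scala
import java.io.{File, IOException}
import org.apache.spark.SparkConf
import org.apache.spark.util.Utils

// Fragment assumed inside DiskBlockManager: one blockmgr-[random UUID]
// subdirectory per configured parent dir; failures are logged and skipped.
private def createLocalDirs(conf: SparkConf): Array[File] = {
  Utils.getConfiguredLocalDirs(conf).flatMap { rootDir =>
    try {
      val localDir = Utils.createDirectory(rootDir, "blockmgr")
      logInfo(s"Created local directory at $localDir")
      Some(localDir)
    } catch {
      case e: IOException =>
        logError(s"Failed to create local dir in $rootDir. Ignoring this directory.", e)
        None
    }
  }
}
```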
Note: createLocalDirs is used exclusively when localDirs is initialized.
stop Internal Method
```scala
stop(): Unit
```
stop…FIXME
Note: stop is used exclusively when BlockManager is requested to stop.
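Pending the FIXME above, a plausible sketch: deregister the shutdown hook so the cleanup does not run twice, then delegate to doStop:

```scala
import org.apache.spark.util.ShutdownHookManager

// Fragment assumed inside DiskBlockManager.
private[spark] def stop(): Unit = {
  try {
    ShutdownHookManager.removeShutdownHook(shutdownHook)  // avoid running doStop twice
  } finally {
    doStop()
  }
}
```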
File Locks for Local Block Store Directories — subDirs Internal Property
```scala
subDirs: Array[Array[File]]
```
subDirs is a two-dimensional array of block-file subdirectories: one row per local block store directory where DiskBlockManager stores block data, each row with subDirsPerLocalDir slots. Beside caching the subdirectories, each row serves as the lock guarding access to its own entries.
Note: subDirs(n) gives access to the subdirectories of the n-th local directory.

Note: subDirs is used when DiskBlockManager is requested to getFile or getAllFiles.
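A sketch of the lazily-synchronized access pattern (the helper name subDirFor is hypothetical; the body mirrors the lookup that getFile performs):

```scala
import java.io.{File, IOException}

// Fragment assumed inside DiskBlockManager: look up (or create) subdirectory
// subDirId under local dir dirId, synchronizing on the row used as a lock.
def subDirFor(dirId: Int, subDirId: Int): File = subDirs(dirId).synchronized {
  val old = subDirs(dirId)(subDirId)
  if (old != null) old
  else {
    val newDir = new File(localDirs(dirId), "%02x".format(subDirId))
    if (!newDir.exists() && !newDir.mkdir()) {
      throw new IOException(s"Failed to create local dir in $newDir.")
    }
    subDirs(dirId)(subDirId) = newDir
    newDir
  }
}
```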
Settings
| Spark Property | Default Value | Description |
|---|---|---|
| spark.diskStore.subDirectories | 64 | The number of …FIXME |