Hive Metastore
Spark SQL uses a Hive metastore to manage the metadata of persistent relational entities (e.g. databases, tables, columns, partitions) in a relational database (for fast access).
A Hive metastore warehouse (aka spark-warehouse) is the directory where Spark SQL persists tables, whereas a Hive metastore (aka metastore_db) is a relational database that manages the metadata of the persistent relational entities, e.g. databases, tables, columns, partitions.
By default, Spark SQL uses the embedded deployment mode of a Hive metastore with an Apache Derby database.
Important: The default embedded deployment mode is not recommended for production use due to the limitation of only one active SparkSession at a time. Read Cloudera's Configuring the Hive Metastore for CDH document that explains the available deployment modes of a Hive metastore.
When SparkSession is created with Hive support, the external catalog (aka metastore) is HiveExternalCatalog. HiveExternalCatalog uses the spark.sql.warehouse.dir directory for the location of the databases and the javax.jdo.option properties for the connection to the Hive metastore database.
Note: The metadata of relational entities is persisted in a metastore database over JDBC, using DataNucleus AccessPlatform and the javax.jdo.option properties. Read Hive Metastore Administration to learn how to manage a Hive Metastore.
| Name | Description |
|---|---|
| javax.jdo.option.ConnectionURL | The JDBC connection URL of a Hive metastore database to use |
| javax.jdo.option.ConnectionDriverName | The JDBC driver of a Hive metastore database to use |
| javax.jdo.option.ConnectionUserName | The user name to use to connect to a Hive metastore database |
| javax.jdo.option.ConnectionPassword | The password to use to connect to a Hive metastore database |
You can configure the javax.jdo.option properties in hive-site.xml or using options with the spark.hadoop prefix.
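For example, here is a minimal sketch (with a made-up MySQL-backed metastore database and placeholder credentials) of passing the javax.jdo.option properties with the spark.hadoop prefix when building a SparkSession:

```scala
import org.apache.spark.sql.SparkSession

// The JDBC URL, driver and credentials below are placeholders -- adjust them
// to your own Hive metastore database.
val spark = SparkSession.builder()
  .enableHiveSupport()
  .config("spark.hadoop.javax.jdo.option.ConnectionURL", "jdbc:mysql://localhost:3306/hive_metastore")
  .config("spark.hadoop.javax.jdo.option.ConnectionDriverName", "com.mysql.jdbc.Driver")
  .config("spark.hadoop.javax.jdo.option.ConnectionUserName", "hive")
  .config("spark.hadoop.javax.jdo.option.ConnectionPassword", "hive")
  .getOrCreate()
```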
You can access the current connection properties for a Hive metastore in a Spark SQL application using Spark internal classes.
```
scala> :type spark
org.apache.spark.sql.SparkSession

scala> spark.sharedState.externalCatalog
res1: org.apache.spark.sql.catalyst.catalog.ExternalCatalog = org.apache.spark.sql.hive.HiveExternalCatalog@79dd79eb

// Use `:paste -raw` to paste the following code
// This is to pass the private[spark] "gate"
// BEGIN
package org.apache.spark

import org.apache.spark.sql.SparkSession

object jacek {
  def open(spark: SparkSession) = {
    import org.apache.spark.sql.hive.HiveExternalCatalog
    spark.sharedState.externalCatalog.asInstanceOf[HiveExternalCatalog].client
  }
}
// END

import org.apache.spark.jacek
val hiveClient = jacek.open(spark)

scala> hiveClient.getConf("javax.jdo.option.ConnectionURL", "")
res2: String = jdbc:derby:;databaseName=metastore_db;create=true
```
The benefits of using an external Hive metastore:
- Allow multiple Spark applications (sessions) to access it concurrently
- Allow a single Spark application to use table statistics without running "ANALYZE TABLE" every execution (see the sketch below)
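A minimal sketch of the second benefit (the table name is hypothetical): statistics computed once are persisted in the metastore and can be reused by later Spark applications without re-running ANALYZE TABLE:

```scala
// demo_table is a hypothetical persistent (Hive) table.
spark.sql("ANALYZE TABLE demo_table COMPUTE STATISTICS")

// Any later Spark application sharing the external metastore sees the
// computed statistics (e.g. in the Statistics row of DESCRIBE EXTENDED).
spark.sql("DESCRIBE EXTENDED demo_table").show(50, truncate = false)
```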
Note: As of Spark 2.2 (see SPARK-18112 Spark2.x does not support read data from Hive 2.x metastore), Spark SQL supports reading data from a Hive 2.1.1 metastore.
Caution: FIXME Describe hive-site.xml vs the config method vs --conf with the spark.hadoop prefix.
Spark SQL uses Hive-specific configuration properties that further fine-tune the Hive integration, e.g. spark.sql.hive.metastore.version or spark.sql.hive.metastore.jars.
spark.sql.warehouse.dir Configuration Property
spark.sql.warehouse.dir is a static configuration property that sets Hive's hive.metastore.warehouse.dir property, i.e. the location of the Hive local/embedded metastore database (using Derby).
Tip: Refer to SharedState to learn about (the low-level details of) Spark SQL support for Apache Hive. See also the official Hive Metastore Administration document.
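A minimal sketch (the warehouse location is made up) of setting the property. Being a static configuration property, it has to be set before the SparkSession is created and is read-only afterwards:

```scala
import org.apache.spark.sql.SparkSession

// /tmp/my-warehouse is a hypothetical location.
val spark = SparkSession.builder()
  .config("spark.sql.warehouse.dir", "/tmp/my-warehouse")
  .getOrCreate()

// Reading the effective value is fine; setting it at runtime would fail
// since the property is static.
spark.conf.get("spark.sql.warehouse.dir")
```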
Hive Metastore Deployment Modes
Configuring External Hive Metastore in Spark SQL
In order to use an external Hive metastore, you should do the following:

- Enable Hive support in SparkSession (that makes sure that the Hive classes are on CLASSPATH and sets the spark.sql.catalogImplementation internal configuration property to hive); see the sketch after this list
- spark.sql.warehouse.dir required?
- Define hive.metastore.warehouse.dir in the hive-site.xml configuration resource
- Check out warehousePath
- Execute ./bin/run-example sql.hive.SparkHiveExample to verify Hive configuration
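As a sketch of the first step, Hive support is enabled on the SparkSession builder (assuming the Hive classes are on the CLASSPATH):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .enableHiveSupport()  // sets spark.sql.catalogImplementation to hive
  .getOrCreate()

// Should print "hive" (it is "in-memory" without Hive support)
println(spark.conf.get("spark.sql.catalogImplementation"))
```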
When not configured by hive-site.xml, SparkSession automatically creates metastore_db in the current directory and creates a directory configured by spark.sql.warehouse.dir, which defaults to spark-warehouse in the directory where the Spark application is started.
Note: You may need to grant write privilege to the user who starts the Spark application.
Hadoop Configuration Properties for Hive
| Name | Description |
|---|---|
| hive.metastore.uris | The Thrift URI of a remote Hive metastore, i.e. one that is in a separate JVM process or on a remote node |
|  | Set to |
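For instance, using the hiveClient obtained in the earlier example, you can check whether a remote metastore is configured (an empty value means the embedded, local metastore is used):

```scala
// Uses the hiveClient obtained earlier via HiveExternalCatalog; a thrift://
// URI indicates a remote Hive metastore, an empty string the embedded one.
hiveClient.getConf("hive.metastore.uris", "")
```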
You may also want to set the following Hive configuration property to work around exceptions that occur with an empty metastore database as of Hive 2.1:

- datanucleus.schema.autoCreateAll set to true
spark.hadoop Configuration Properties
Caution: FIXME Describe the purpose of spark.hadoop.* properties.
You can specify any of the Hadoop configuration properties, e.g. hive.metastore.warehouse.dir, with the spark.hadoop prefix.
```
$ spark-shell --conf spark.hadoop.hive.metastore.warehouse.dir=/tmp/hive-warehouse
...
scala> spark.sharedState
18/01/08 10:46:19 INFO SharedState: spark.sql.warehouse.dir is not set, but hive.metastore.warehouse.dir is set. Setting spark.sql.warehouse.dir to the value of hive.metastore.warehouse.dir ('/tmp/hive-warehouse').
18/01/08 10:46:19 INFO SharedState: Warehouse path is '/tmp/hive-warehouse'.
res1: org.apache.spark.sql.internal.SharedState = org.apache.spark.sql.internal.SharedState@5a69b3cf
```
hive-site.xml Configuration Resource
hive-site.xml configures Hive clients (e.g. Spark SQL) with the Hive Metastore configuration.

hive-site.xml is loaded when SharedState is created (which is…FIXME).
Configuration of Hive is done by placing your hive-site.xml, core-site.xml (for security configuration), and hdfs-site.xml (for HDFS configuration) files in conf/ (which is automatically added to the CLASSPATH of a Spark application).
Tip: You can use --driver-class-path or spark.driver.extraClassPath to point to the directory with configuration resources, e.g. hive-site.xml.
```xml
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/tmp/hive-warehouse</value>
    <description>Hive Metastore location</description>
  </property>
</configuration>
```
Tip: Read the Resources section in Hadoop's Configuration javadoc to learn more about configuration resources.
Starting Hive
The following steps are for Hive and Hadoop 2.7.5.
```
$ ./bin/hdfs version
Hadoop 2.7.5
Subversion https://shv@git-wip-us.apache.org/repos/asf/hadoop.git -r 18065c2b6806ed4aa6a3187d77cbe21bb3dba075
Compiled by kshvachk on 2017-12-16T01:06Z
Compiled with protoc 2.5.0
From source with checksum 9f118f95f47043332d51891e37f736e9
This command was run using /Users/jacek/dev/apps/hadoop-2.7.5/share/hadoop/common/hadoop-common-2.7.5.jar
```
Tip: Read the section Pseudo-Distributed Operation about how to run Hadoop HDFS "on a single-node in a pseudo-distributed mode where each Hadoop daemon runs in a separate Java process."
- Edit etc/hadoop/core-site.xml to add the following:

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```

- Run ./bin/hdfs namenode -format right after you've installed Hadoop and before starting any HDFS services (NameNode in particular):

```
$ ./bin/hdfs namenode -format
18/01/09 15:48:28 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = japila.local/192.168.1.2
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.5
...
18/01/09 15:48:28 INFO namenode.NameNode: createNameNode [-format]
...
Formatting using clusterid: CID-bfdc81da-6941-4a93-8371-2c254d503a97
...
18/01/09 15:48:29 INFO common.Storage: Storage directory /tmp/hadoop-jacek/dfs/name has been successfully formatted.
18/01/09 15:48:29 INFO namenode.FSImageFormatProtobuf: Saving image file /tmp/hadoop-jacek/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
18/01/09 15:48:29 INFO namenode.FSImageFormatProtobuf: Image file /tmp/hadoop-jacek/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 322 bytes saved in 0 seconds.
18/01/09 15:48:29 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
18/01/09 15:48:29 INFO util.ExitUtil: Exiting with status 0
```

Note: Use ./bin/hdfs namenode to start a NameNode that will tell you that the local filesystem is not ready.

```
$ ./bin/hdfs namenode
18/01/09 15:43:11 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = japila.local/192.168.1.2
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.7.5
...
18/01/09 15:43:11 INFO namenode.NameNode: fs.defaultFS is hdfs://localhost:9000
18/01/09 15:43:11 INFO namenode.NameNode: Clients are to use localhost:9000 to access this namenode/service.
...
18/01/09 15:43:12 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
...
18/01/09 15:43:13 WARN common.Storage: Storage directory /private/tmp/hadoop-jacek/dfs/name does not exist
18/01/09 15:43:13 WARN namenode.FSNamesystem: Encountered exception loading fsimage
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /private/tmp/hadoop-jacek/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:382)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:233)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:984)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:686)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:586)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:646)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:820)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:804)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1516)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1582)
...
18/01/09 15:43:13 ERROR namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /private/tmp/hadoop-jacek/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:382)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:233)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:984)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:686)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:586)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:646)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:820)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:804)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1516)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1582)
```

- Start Hadoop HDFS using ./sbin/start-dfs.sh (and tail -f logs/hadoop-*-datanode-*.log):

```
$ ./sbin/start-dfs.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /Users/jacek/dev/apps/hadoop-2.7.5/logs/hadoop-jacek-namenode-japila.local.out
localhost: starting datanode, logging to /Users/jacek/dev/apps/hadoop-2.7.5/logs/hadoop-jacek-datanode-japila.local.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /Users/jacek/dev/apps/hadoop-2.7.5/logs/hadoop-jacek-secondarynamenode-japila.local.out
```

- Use jps -lm to list Hadoop's JVM processes:

```
$ jps -lm
26576 org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode
26468 org.apache.hadoop.hdfs.server.datanode.DataNode
26381 org.apache.hadoop.hdfs.server.namenode.NameNode
```

- Create hive-site.xml in $SPARK_HOME/conf with the following:

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>hdfs://localhost:9000/jacek/hive_warehouse</value>
    <description>Warehouse Location</description>
  </property>
</configuration>
```
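With the hive-site.xml above in place, a quick smoke test (the table name is made up) from spark-shell can confirm that tables land in the HDFS-backed warehouse:

```scala
// Requires a SparkSession with Hive support (e.g. spark-shell).
spark.range(5).write.saveAsTable("demo_numbers")

// The Location row should point under hdfs://localhost:9000/jacek/hive_warehouse
spark.sql("DESCRIBE EXTENDED demo_numbers").show(50, truncate = false)
```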