Hive Metastore

Spark SQL uses a Hive metastore to manage the metadata of persistent relational entities (e.g. databases, tables, columns, partitions) in a relational database (for fast access).

A Hive metastore warehouse (aka spark-warehouse) is the directory where Spark SQL persists tables, whereas a Hive metastore (aka metastore_db) is a relational database that manages the metadata of the persistent relational entities, e.g. databases, tables, columns and partitions.

By default, Spark SQL uses the embedded deployment mode of a Hive metastore with an Apache Derby database.

Important

The default embedded deployment mode is not recommended for production use due to the limitation of only one active SparkSession at a time.

Read Cloudera’s Configuring the Hive Metastore for CDH document that explains the available deployment modes of a Hive metastore.

When SparkSession is created with Hive support, the external catalog (aka metastore) is HiveExternalCatalog. HiveExternalCatalog uses the spark.sql.warehouse.dir directory for the location of the databases and the javax.jdo.option properties for the connection to the Hive metastore database.

Note

The metadata of relational entities is persisted in a metastore database over JDBC and DataNucleus AccessPlatform that uses javax.jdo.option properties.

Read Hive Metastore Administration to learn how to manage a Hive Metastore.

Table 1. Hive Metastore Database Connection Properties

  • javax.jdo.option.ConnectionURL: The JDBC connection URL of a Hive metastore database to use

  • javax.jdo.option.ConnectionDriverName: The JDBC driver of a Hive metastore database to use

  • javax.jdo.option.ConnectionUserName: The user name to use to connect to a Hive metastore database

  • javax.jdo.option.ConnectionPassword: The password to use to connect to a Hive metastore database

You can configure the javax.jdo.option properties in hive-site.xml or using options with the spark.hadoop prefix.

You can access the current connection properties for a Hive metastore in a Spark SQL application using the Spark internal classes.
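For example, a minimal sketch (assuming a Spark build with Hive support; the Derby URL and driver below are just the stock Hive defaults used for illustration) that sets the connection properties with the spark.hadoop prefix and reads them back from the Hadoop configuration:

  import org.apache.spark.sql.SparkSession

  // Set the metastore connection properties with the spark.hadoop prefix;
  // Spark strips the prefix and copies them into the Hadoop configuration.
  val spark = SparkSession.builder()
    .appName("metastore-connection-props")
    .master("local[*]")
    .config("spark.hadoop.javax.jdo.option.ConnectionURL",
      "jdbc:derby:;databaseName=metastore_db;create=true")
    .config("spark.hadoop.javax.jdo.option.ConnectionDriverName",
      "org.apache.derby.jdbc.EmbeddedDriver")
    .enableHiveSupport()
    .getOrCreate()

  // Read the properties back (prefix already stripped).
  val hadoopConf = spark.sparkContext.hadoopConfiguration
  println(hadoopConf.get("javax.jdo.option.ConnectionURL"))
  println(hadoopConf.get("javax.jdo.option.ConnectionDriverName"))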

The benefits of using an external Hive metastore:

  1. Allow multiple Spark applications (sessions) to access it concurrently

  2. Allow a single Spark application to use table statistics without running “ANALYZE TABLE” on every execution

Note
As of Spark 2.2 (see SPARK-18112: Spark2.x does not support read data from Hive 2.x metastore), Spark SQL supports reading data from a Hive 2.1.1 metastore.
Caution
FIXME Describe hive-site.xml vs config method vs --conf with spark.hadoop prefix.

Spark SQL uses the Hive-specific configuration properties that further fine-tune the Hive integration, e.g. spark.sql.hive.metastore.version or spark.sql.hive.metastore.jars.
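As a hedged sketch (the version and the maven option are illustrative; use a version your Spark build actually supports), these properties are static and have to be set before the first SparkSession is created:

  import org.apache.spark.sql.SparkSession

  // Pin the Hive metastore client version and let Spark resolve the matching
  // Hive jars; "builtin" or an explicit classpath are the other options.
  val spark = SparkSession.builder()
    .appName("hive-metastore-version")
    .config("spark.sql.hive.metastore.version", "2.1.1")
    .config("spark.sql.hive.metastore.jars", "maven")
    .enableHiveSupport()
    .getOrCreate()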

spark.sql.warehouse.dir Configuration Property

spark.sql.warehouse.dir is a static configuration property that sets Hive’s hive.metastore.warehouse.dir property, i.e. the location of the Hive local/embedded metastore database (using Derby).
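A minimal sketch (the path is an example only); being a static property, spark.sql.warehouse.dir has to be set before the first SparkSession is created:

  import org.apache.spark.sql.SparkSession

  // Set the warehouse location up front and read it back afterwards.
  val spark = SparkSession.builder()
    .appName("warehouse-dir")
    .config("spark.sql.warehouse.dir", "/tmp/spark-warehouse")
    .enableHiveSupport()
    .getOrCreate()

  println(spark.conf.get("spark.sql.warehouse.dir"))  // /tmp/spark-warehouse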

Tip

Refer to SharedState to learn about (the low-level details of) Spark SQL support for Apache Hive.

See also the official Hive Metastore Administration document.

Hive Metastore Deployment Modes

Configuring External Hive Metastore in Spark SQL

In order to use an external Hive metastore you should do the following:

  1. Enable Hive support in SparkSession (that makes sure the Hive classes are on the CLASSPATH and sets the spark.sql.catalogImplementation internal configuration property to hive); see the sketch after this list

  2. spark.sql.warehouse.dir required?

  3. Define hive.metastore.warehouse.dir in hive-site.xml configuration resource

  4. Check out warehousePath

  5. Execute ./bin/run-example sql.hive.SparkHiveExample to verify Hive configuration
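A sketch of step 1 (assuming the spark-hive classes are on the CLASSPATH; the checks at the end are only a sanity test):

  import org.apache.spark.sql.SparkSession

  // enableHiveSupport() sets spark.sql.catalogImplementation to hive,
  // so the external catalog becomes HiveExternalCatalog.
  val spark = SparkSession.builder()
    .appName("external-hive-metastore")
    .enableHiveSupport()
    .getOrCreate()

  // Should print "hive" and the configured warehouse location.
  println(spark.conf.get("spark.sql.catalogImplementation"))
  println(spark.conf.get("spark.sql.warehouse.dir"))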

When not configured by hive-site.xml, SparkSession automatically creates metastore_db in the current directory and creates the directory configured by spark.sql.warehouse.dir, which defaults to spark-warehouse in the current directory where the Spark application is started.

Note

The hive.metastore.warehouse.dir property in hive-site.xml has been deprecated since Spark 2.0.0. Use spark.sql.warehouse.dir to specify the default location of the databases in a Hive warehouse.

You may need to grant write privilege to the user who starts the Spark application.

Hadoop Configuration Properties for Hive

Table 2. Hadoop Configuration Properties for Hive

  • hive.metastore.uris: The Thrift URI of a remote Hive metastore, i.e. one that is in a separate JVM process or on a remote node

  • hive.metastore.warehouse.dir: SharedState uses hive.metastore.warehouse.dir to set spark.sql.warehouse.dir if the latter is undefined.

Caution
FIXME How is hive.metastore.warehouse.dir related to spark.sql.warehouse.dir? SharedState.warehousePath? Review https://github.com/apache/spark/pull/16996/files

  • hive.metastore.schema.verification: Set to false (as it seems to cause exceptions with an empty metastore database as of Hive 2.1)

You may also want to set the following Hive configuration property to work around exceptions seen with an empty metastore database as of Hive 2.1; a configuration sketch follows the list.

  • datanucleus.schema.autoCreateAll set to true
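Putting the above together, a hedged sketch of connecting to a remote Hive metastore (the Thrift URI is a placeholder) while applying the two workaround properties through the spark.hadoop prefix:

  import org.apache.spark.sql.SparkSession

  // Connect to a remote metastore over Thrift and relax the schema checks
  // that an empty metastore database tends to trip over.
  val spark = SparkSession.builder()
    .appName("remote-hive-metastore")
    .config("spark.hadoop.hive.metastore.uris", "thrift://localhost:9083")
    .config("spark.hadoop.hive.metastore.schema.verification", "false")
    .config("spark.hadoop.datanucleus.schema.autoCreateAll", "true")
    .enableHiveSupport()
    .getOrCreate()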

spark.hadoop Configuration Properties

Caution
FIXME Describe the purpose of spark.hadoop.* properties

You can specify any of the Hadoop configuration properties, e.g. hive.metastore.warehouse.dir, with the spark.hadoop prefix.
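For instance, a short sketch (the warehouse path is an example only) that sets a Hadoop property through SparkConf and confirms that the spark.hadoop prefix has been stripped:

  import org.apache.spark.SparkConf
  import org.apache.spark.sql.SparkSession

  // Any Hadoop/Hive property can travel as a regular Spark property
  // once prefixed with spark.hadoop.
  val conf = new SparkConf()
    .set("spark.hadoop.hive.metastore.warehouse.dir", "/tmp/hive-warehouse")

  val spark = SparkSession.builder().config(conf).enableHiveSupport().getOrCreate()
  println(spark.sparkContext.hadoopConfiguration.get("hive.metastore.warehouse.dir"))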

hive-site.xml Configuration Resource

hive-site.xml configures Hive clients (e.g. Spark SQL) with the Hive Metastore configuration.

hive-site.xml is loaded when SharedState is created (which is… FIXME).

Configuration of Hive is done by placing your hive-site.xml, core-site.xml (for security configuration), and hdfs-site.xml (for HDFS configuration) files in conf/ (which is automatically added to the CLASSPATH of a Spark application).

Tip
You can use --driver-class-path or spark.driver.extraClassPath to point to the directory with configuration resources, e.g. hive-site.xml.

Tip
Read Resources section in Hadoop’s Configuration javadoc to learn more about configuration resources.
Tip

Use SparkContext.hadoopConfiguration to know which configuration resources have already been registered.

Enable org.apache.spark.sql.internal.SharedState logger to INFO logging level to know where hive-site.xml comes from.
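For example, in spark-shell (where spark is predefined) printing the Hadoop Configuration lists the registered configuration resources; the exact list depends on your setup:

  // Configuration.toString enumerates the loaded resources,
  // e.g. core-default.xml, core-site.xml, hive-site.xml, ...
  println(spark.sparkContext.hadoopConfiguration)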

Starting Hive

The following steps are for Hive and Hadoop 2.7.5.

Tip
Read the section Pseudo-Distributed Operation about how to run Hadoop HDFS “on a single-node in a pseudo-distributed mode where each Hadoop daemon runs in a separate Java process.”
Tip

Use the hadoop.tmp.dir configuration property as the base for temporary directories.

Use ./bin/hdfs getconf -confKey hadoop.tmp.dir to check its current value.

  1. Edit etc/hadoop/core-site.xml to set fs.defaultFS for the single-node setup (as described in the Pseudo-Distributed Operation guide referenced above)

  2. Run ./bin/hdfs namenode -format right after you’ve installed Hadoop and before starting any HDFS services (the NameNode in particular)

    Note

    Use ./bin/hdfs namenode to start a NameNode that will tell you that the local filesystem is not ready.

  3. Start Hadoop HDFS using ./sbin/start-dfs.sh (and tail -f logs/hadoop-*-datanode-*.log)

  4. Use jps -lm to list Hadoop’s JVM processes.

  5. Create hive-site.xml in $SPARK_HOME/conf with your Hive metastore configuration (e.g. the javax.jdo.option connection properties, or hive.metastore.uris for a remote metastore); a verification sketch follows the list
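Once HDFS and the metastore configuration are in place, a quick end-to-end check (a sketch; the table name is arbitrary) is to create a Hive table and confirm it shows up in the catalog:

  import org.apache.spark.sql.SparkSession

  // With Hive support enabled, tables created here are registered
  // in the Hive metastore and survive application restarts.
  val spark = SparkSession.builder()
    .appName("hive-metastore-check")
    .enableHiveSupport()
    .getOrCreate()

  spark.sql("CREATE TABLE IF NOT EXISTS demo_src (id INT, name STRING) USING hive")
  spark.catalog.listTables().show()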
