StaticSQLConf — Cross-Session, Immutable and Static SQL Configuration
StaticSQLConf holds cross-session, immutable and static SQL configuration properties.
Name | Default Value | Description
---|---|---
`spark.sql.catalogImplementation` | `in-memory` | Selects the active catalog implementation from the available ExternalCatalogs: `in-memory` or `hive`
`spark.sql.debug` | `false` | (internal) Only used for internal debugging. Not all functions are supported when enabled.
`spark.sql.extensions` | (empty) | Name of the SQL extension configuration class that is used to configure a SparkSession's extensions
`spark.sql.filesourceTableRelationCacheSize` | `1000` | (internal) The maximum size of the cache that maps qualified table names to table relation plans. Must not be negative.
`spark.sql.globalTempDatabase` | `global_temp` | Used exclusively to create a GlobalTempViewManager
`spark.sql.hive.thriftServer.singleSession` | `false` | When set to `true`, Hive Thrift server runs in single-session mode, i.e. all the JDBC/ODBC connections share the temporary views, function registries, SQL configuration and the current database.
`spark.sql.queryExecutionListeners` | (empty) | List of class names that implement QueryExecutionListener and will be automatically registered to new SparkSessions. The classes should have either a no-arg constructor, or a constructor that expects a SparkConf argument.
`spark.sql.sources.schemaStringLengthThreshold` | `4000` | (internal) The maximum length allowed in a single cell when storing additional schema information in Hive's metastore
`spark.sql.warehouse.dir` | `spark-warehouse` | The directory of a Hive warehouse (using Derby) with managed databases and tables (aka Spark warehouse)
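As an illustration of `spark.sql.queryExecutionListeners`, the following is a minimal sketch of a QueryExecutionListener with the required no-arg constructor (the class name `LoggingListener` is an assumption made for this example):

```scala
import org.apache.spark.sql.execution.QueryExecution
import org.apache.spark.sql.util.QueryExecutionListener

// Minimal listener with a no-arg constructor, as required by
// spark.sql.queryExecutionListeners (the class name is illustrative only).
class LoggingListener extends QueryExecutionListener {
  override def onSuccess(funcName: String, qe: QueryExecution, durationNs: Long): Unit =
    println(s"$funcName succeeded in ${durationNs / 1e6} ms")

  override def onFailure(funcName: String, qe: QueryExecution, exception: Exception): Unit =
    println(s"$funcName failed: ${exception.getMessage}")
}
```

Registering the listener is then a matter of adding its fully-qualified class name to `spark.sql.queryExecutionListeners` before the first SparkSession is created, e.g. `--conf spark.sql.queryExecutionListeners=LoggingListener`.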
The properties in StaticSQLConf can only be queried and can never be changed once the first SparkSession is created.
```
import org.apache.spark.sql.internal.StaticSQLConf

scala> val metastoreName = spark.conf.get(StaticSQLConf.CATALOG_IMPLEMENTATION.key)
metastoreName: String = hive

scala> spark.conf.set(StaticSQLConf.CATALOG_IMPLEMENTATION.key, "hive")
org.apache.spark.sql.AnalysisException: Cannot modify the value of a static config: spark.sql.catalogImplementation;
  at org.apache.spark.sql.RuntimeConfig.requireNonStaticConf(RuntimeConfig.scala:144)
  at org.apache.spark.sql.RuntimeConfig.set(RuntimeConfig.scala:41)
  ... 50 elided
```
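Since static properties are fixed once the first SparkSession exists, they have to be set while building it (or via `spark-submit`'s `--conf`). A minimal sketch, assuming a local master and an illustrative value for `spark.sql.globalTempDatabase`:

```scala
import org.apache.spark.sql.SparkSession

// Static SQL properties can only be set before the first SparkSession
// is created, e.g. through the builder.
val spark = SparkSession.builder()
  .master("local[*]")
  .appName("StaticSQLConf demo")
  .config("spark.sql.globalTempDatabase", "my_global_temp") // illustrative value
  .getOrCreate()

// Once created, the value can be queried (but never changed).
println(spark.conf.get("spark.sql.globalTempDatabase"))
```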