
[SPARK-47197] Failed to connect to HiveMetastore when using Iceberg with HiveCatalog on spark-sql or spark-shell


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Not A Problem
    • Affects Version/s: 3.2.3, 3.5.1
    • Fix Version/s: None
    • Component/s: Spark Shell, SQL

    Description

      I can't connect to a kerberized Hive Metastore when using Iceberg with HiveCatalog on spark-sql or spark-shell.

      I think the issue is that there is no way to obtain a HIVE_DELEGATION_TOKEN when using spark-sql or spark-shell: the check below only requests a token for proxy users or for cluster-mode deployments without a keytab, and an interactive shell always runs in client mode.

      (https://github.com/apache/spark/blob/v3.5.1/sql/hive/src/main/scala/org/apache/spark/sql/hive/security/HiveDelegationTokenProvider.scala#L78-L83)

       

          // From HiveDelegationTokenProvider.delegationTokensRequired: a token is
          // requested only when none is cached, security is enabled, a metastore
          // URI is configured, and the user is a proxy user or the application
          // runs in cluster mode without a keytab. Plain spark-sql / spark-shell
          // sessions (client mode, no keytab) never satisfy the last clause.
          val currentToken = UserGroupInformation.getCurrentUser().getCredentials().getToken(tokenAlias)
          currentToken == null && UserGroupInformation.isSecurityEnabled &&
            hiveConf(hadoopConf).getTrimmed("hive.metastore.uris", "").nonEmpty &&
            (SparkHadoopUtil.get.isProxyUser(UserGroupInformation.getCurrentUser()) ||
              (!Utils.isClientMode(sparkConf) && !sparkConf.contains(KEYTAB)))
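
      To confirm this on a live session, one can list the delegation tokens held by the current user. The spark-shell snippet below is a diagnostic sketch (assuming a kerberized cluster); on an affected session, no token of kind HIVE_DELEGATION_TOKEN is printed:

          // Diagnostic sketch: enumerate the tokens in the current UGI's
          // credentials. In the failing scenario the Hive token is absent,
          // matching the condition above.
          import scala.collection.JavaConverters._
          import org.apache.hadoop.security.UserGroupInformation

          UserGroupInformation.getCurrentUser
            .getCredentials
            .getAllTokens
            .asScala
            .foreach(t => println(s"kind=${t.getKind}, service=${t.getService}"))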

      There should be a way to force retrieval of a HIVE_DELEGATION_TOKEN even when using spark-sql or spark-shell.

      A possible approach would be to retrieve the token whenever the configuration below is explicitly set:

      spark.security.credentials.hive.enabled   true 
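
      Below is a minimal sketch of what that could look like in HiveDelegationTokenProvider.delegationTokensRequired. Note the assumption: today spark.security.credentials.hive.enabled only serves to disable a provider, so treating an explicitly set "true" as "force token retrieval" is this ticket's proposal, not existing Spark behavior.

          // Sketch only -- hypothetical variant of delegationTokensRequired().
          // The "forced" branch implements this ticket's proposal and does not
          // exist in Spark as of 3.5.1.
          import org.apache.hadoop.conf.Configuration
          import org.apache.hadoop.security.UserGroupInformation
          import org.apache.spark.SparkConf
          import org.apache.spark.deploy.SparkHadoopUtil
          import org.apache.spark.internal.config.KEYTAB
          import org.apache.spark.util.Utils

          override def delegationTokensRequired(
              sparkConf: SparkConf,
              hadoopConf: Configuration): Boolean = {
            // Hypothetical opt-in: the user explicitly enabled the Hive provider.
            val forced =
              sparkConf.getOption("spark.security.credentials.hive.enabled").contains("true")
            val currentToken =
              UserGroupInformation.getCurrentUser().getCredentials().getToken(tokenAlias)
            currentToken == null && UserGroupInformation.isSecurityEnabled &&
              hiveConf(hadoopConf).getTrimmed("hive.metastore.uris", "").nonEmpty &&
              (forced ||
                SparkHadoopUtil.get.isProxyUser(UserGroupInformation.getCurrentUser()) ||
                (!Utils.isClientMode(sparkConf) && !sparkConf.contains(KEYTAB)))
          }

      With such a change, an interactive session could opt in at launch, e.g. spark-sql --conf spark.security.credentials.hive.enabled=true. The full stack trace of the failure follows: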

       


      spark-sql> select * from temp.test_hive_catalog;
      ...
      ...
      24/02/28 07:42:04 WARN TaskSetManager: Lost task 0.1 in stage 0.0 (TID 1) (machine1.example.com executor 2): org.apache.iceberg.hive.RuntimeMetaException: Failed to connect to Hive Metastore
              at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:84)
              at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:34)
              at org.apache.iceberg.ClientPoolImpl.get(ClientPoolImpl.java:125)
              at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:56)
              at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:51)
              at org.apache.iceberg.hive.CachedClientPool.run(CachedClientPool.java:122)
              at org.apache.iceberg.hive.HiveTableOperations.doRefresh(HiveTableOperations.java:158)
              at org.apache.iceberg.BaseMetastoreTableOperations.refresh(BaseMetastoreTableOperations.java:97)
              at org.apache.iceberg.BaseMetastoreTableOperations.current(BaseMetastoreTableOperations.java:80)
              at org.apache.iceberg.BaseMetastoreCatalog.loadTable(BaseMetastoreCatalog.java:47)
              at org.apache.iceberg.mr.Catalogs.loadTable(Catalogs.java:124)
              at org.apache.iceberg.mr.Catalogs.loadTable(Catalogs.java:111)
              at org.apache.iceberg.mr.hive.HiveIcebergStorageHandler.overlayTableProperties(HiveIcebergStorageHandler.java:276)
              at org.apache.iceberg.mr.hive.HiveIcebergStorageHandler.configureInputJobProperties(HiveIcebergStorageHandler.java:86)
              at org.apache.spark.sql.hive.HiveTableUtil$.configureJobPropertiesForStorageHandler(TableReader.scala:426)
              at org.apache.spark.sql.hive.HadoopTableReader$.initializeLocalJobConfFunc(TableReader.scala:456)
              at org.apache.spark.sql.hive.HadoopTableReader.$anonfun$createOldHadoopRDD$1(TableReader.scala:342)
              at org.apache.spark.sql.hive.HadoopTableReader.$anonfun$createOldHadoopRDD$1$adapted(TableReader.scala:342)
              at org.apache.spark.rdd.HadoopRDD.$anonfun$getJobConf$8(HadoopRDD.scala:181)
              at org.apache.spark.rdd.HadoopRDD.$anonfun$getJobConf$8$adapted(HadoopRDD.scala:181)
              at scala.Option.foreach(Option.scala:407)
              at org.apache.spark.rdd.HadoopRDD.$anonfun$getJobConf$6(HadoopRDD.scala:181)
              at scala.Option.getOrElse(Option.scala:189)
              at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:178)
              at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:247)
              at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:243)
              at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:96)
              at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
              at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
              at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
              at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
              at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
              at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
              at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
              at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
              at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
              at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
              at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
              at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
              at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
              at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
              at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
              at org.apache.spark.scheduler.Task.run(Task.scala:131)
              at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
              at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1492)
              at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
              at java.lang.Thread.run(Thread.java:750)
      Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
              at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1742)
              at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:83)
              at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:133)
              at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
              at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:97)
              at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
              at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
              at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
              at java.lang.reflect.Method.invoke(Method.java:498)
              at org.apache.iceberg.common.DynMethods$UnboundMethod.invokeChecked(DynMethods.java:60)
              at org.apache.iceberg.common.DynMethods$UnboundMethod.invoke(DynMethods.java:72)
              at org.apache.iceberg.common.DynMethods$StaticMethod.invoke(DynMethods.java:185)
              at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:63)
              ... 48 more
      Caused by: java.lang.reflect.InvocationTargetException
              at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
              at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
              at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
              at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
              at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1740)
              ... 60 more
      Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: GSS initiate failed
              at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
              at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:314)
              at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:38)
              at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
              at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
              at java.security.AccessController.doPrivileged(Native Method)
              at javax.security.auth.Subject.doAs(Subject.java:422)
              at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
              at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
              at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:478)
              at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:245)
              at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
              at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
              at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
              at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
              at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1740)
              at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:83)
              at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:133)
              at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
              at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:97)
              at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
              at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
              at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
              at java.lang.reflect.Method.invoke(Method.java:498)
              at org.apache.iceberg.common.DynMethods$UnboundMethod.invokeChecked(DynMethods.java:60)
              at org.apache.iceberg.common.DynMethods$UnboundMethod.invoke(DynMethods.java:72)
              at org.apache.iceberg.common.DynMethods$StaticMethod.invoke(DynMethods.java:185)
              at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:63)
              at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:34)
              at org.apache.iceberg.ClientPoolImpl.get(ClientPoolImpl.java:125)
              at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:56)
              at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:51)
              at org.apache.iceberg.hive.CachedClientPool.run(CachedClientPool.java:122)
              at org.apache.iceberg.hive.HiveTableOperations.doRefresh(HiveTableOperations.java:158)
              at org.apache.iceberg.BaseMetastoreTableOperations.refresh(BaseMetastoreTableOperations.java:97)
              at org.apache.iceberg.BaseMetastoreTableOperations.current(BaseMetastoreTableOperations.java:80)
              at org.apache.iceberg.BaseMetastoreCatalog.loadTable(BaseMetastoreCatalog.java:47)
              at org.apache.iceberg.mr.Catalogs.loadTable(Catalogs.java:124)
              at org.apache.iceberg.mr.Catalogs.loadTable(Catalogs.java:111)
              at org.apache.iceberg.mr.hive.HiveIcebergStorageHandler.overlayTableProperties(HiveIcebergStorageHandler.java:276)
              at org.apache.iceberg.mr.hive.HiveIcebergStorageHandler.configureInputJobProperties(HiveIcebergStorageHandler.java:86)
              at org.apache.spark.sql.hive.HiveTableUtil$.configureJobPropertiesForStorageHandler(TableReader.scala:426)
              at org.apache.spark.sql.hive.HadoopTableReader$.initializeLocalJobConfFunc(TableReader.scala:456)
              at org.apache.spark.sql.hive.HadoopTableReader.$anonfun$createOldHadoopRDD$1(TableReader.scala:342)
              at org.apache.spark.sql.hive.HadoopTableReader.$anonfun$createOldHadoopRDD$1$adapted(TableReader.scala:342)
              at org.apache.spark.rdd.HadoopRDD.$anonfun$getJobConf$8(HadoopRDD.scala:181)
              at org.apache.spark.rdd.HadoopRDD.$anonfun$getJobConf$8$adapted(HadoopRDD.scala:181)
              at scala.Option.foreach(Option.scala:407)
              at org.apache.spark.rdd.HadoopRDD.$anonfun$getJobConf$6(HadoopRDD.scala:181)
              at scala.Option.getOrElse(Option.scala:189)
              at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:178)
              at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:247)
              at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:243)
              at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:96)
              at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
              at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
              at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
              at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
              at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
              at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
              at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
              at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
              at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
              at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
              at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
              at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
              at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
              at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
              at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
              at org.apache.spark.scheduler.Task.run(Task.scala:131)
              at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
              at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1492)
              at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
              at java.lang.Thread.run(Thread.java:750)
      )
              at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:527)
              at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:245)
              ... 65 more
      ...
      ...
      ...

       

       


            People

              Assignee: Unassigned
              Reporter: YUBI LEE (eub)
