Hive > HIVE-16923 Hive-on-Spark DPP Improvements > HIVE-17219

NPE in SparkPartitionPruningSinkOperator#closeOp for query with partitioned join in subquery


Details

    • Type: Sub-task
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 3.0.0
    • Fix Version/s: None
    • Component/s: Spark
    • Labels: None

    Description

      The following query throws an NPE in SparkPartitionPruningSinkOperator#closeOp:

      select * from partitioned_table1 where partitioned_table1.part_col in (select partitioned_table2.col from partitioned_table2 join partitioned_table3 on partitioned_table3.col = partitioned_table2.part_col)

      The full stack trace is:

      Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 22.0 failed 4 times, most recent failure: Lost task 1.3 in stage 22.0 (TID 37, 10.16.1.179): java.lang.IllegalStateException: Hit error while closing operators - failing tree: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.NullPointerException
              at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.close(SparkMapRecordHandler.java:194)
              at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.closeRecordProcessor(HiveMapFunctionResultList.java:58)
              at org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList.hasNext(HiveBaseFunctionResultList.java:96)
              at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42)
              at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:147)
              at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
              at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
              at org.apache.spark.scheduler.Task.run(Task.scala:85)
              at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
              at java.lang.Thread.run(Thread.java:745)
      Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.NullPointerException
              at org.apache.hadoop.hive.ql.parse.spark.SparkPartitionPruningSinkOperator.closeOp(SparkPartitionPruningSinkOperator.java:95)
              at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:709)
              at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:723)
              at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:723)
              at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:723)
              at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:723)
              at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:723)
              at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:723)
              at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.close(SparkMapRecordHandler.java:171)
              ... 11 more
      Caused by: java.lang.NullPointerException
              at org.apache.hadoop.hive.ql.parse.spark.SparkPartitionPruningSinkOperator.flushToFile(SparkPartitionPruningSinkOperator.java:151)
              at org.apache.hadoop.hive.ql.parse.spark.SparkPartitionPruningSinkOperator.closeOp(SparkPartitionPruningSinkOperator.java:93)
              ... 19 more
      

      The full setup is:

      set hive.spark.dynamic.partition.pruning=true;
      
      create table partitioned_table1 (col int) partitioned by (part_col int);
      create table partitioned_table2 (col int) partitioned by (part_col int);
      create table partitioned_table3 (col int) partitioned by (part_col int);
      create table regular_table (col1 int, col2 int);
      
      insert into table regular_table values (0, 0), (1, 1), (2, 2);
      
      alter table partitioned_table1 add partition (part_col = 1);
      alter table partitioned_table1 add partition (part_col = 2);
      alter table partitioned_table1 add partition (part_col = 3);
      
      insert into table partitioned_table1 partition (part_col = 1) values (1), (2), (3);
      insert into table partitioned_table1 partition (part_col = 2) values (1), (2), (3);
      insert into table partitioned_table1 partition (part_col = 3) values (1), (2), (3);
      
      alter table partitioned_table2 add partition (part_col = 1);
      alter table partitioned_table2 add partition (part_col = 2);
      alter table partitioned_table2 add partition (part_col = 3);
      
      insert into table partitioned_table2 partition (part_col = 1) values (1), (2), (3);
      insert into table partitioned_table2 partition (part_col = 2) values (1), (2), (3);
      insert into table partitioned_table2 partition (part_col = 3) values (1), (2), (3);
      
      alter table partitioned_table3 add partition (part_col = 1);
      alter table partitioned_table3 add partition (part_col = 2);
      alter table partitioned_table3 add partition (part_col = 3);
      
      insert into table partitioned_table3 partition (part_col = 1) values (1), (2), (3);
      insert into table partitioned_table3 partition (part_col = 2) values (1), (2), (3);
      insert into table partitioned_table3 partition (part_col = 3) values (1), (2), (3);
      
      select * from partitioned_table1 where partitioned_table1.part_col in (select partitioned_table2.col from partitioned_table2 join partitioned_table3 on partitioned_table3.col = partitioned_table2.part_col);
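
      To confirm that DPP actually kicks in for this query (a sanity check suggested here, not part of the original report), run EXPLAIN and look for a Spark Partition Pruning Sink Operator feeding the scan of partitioned_table1; if it is absent, dynamic partition pruning was not triggered and the NPE will not reproduce:

      explain select * from partitioned_table1 where partitioned_table1.part_col in (select partitioned_table2.col from partitioned_table2 join partitioned_table3 on partitioned_table3.col = partitioned_table2.part_col);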
      

      The NPE does not seem to occur when map joins are enabled; it only shows up when the join runs as a common (shuffle) join.
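
      Given that, explicitly disabling map-join conversion should make the repro deterministic. A minimal sketch, assuming the standard hive.auto.convert.join switch controls the conversion (this setting is an addition of this note, not part of the original repro):

      -- force the common (shuffle) join path so DPP goes through
      -- SparkPartitionPruningSinkOperator rather than a map join
      set hive.auto.convert.join=false;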


          People

            Assignee: Sahil Takiar (stakiar)
            Reporter: Sahil Takiar (stakiar)
