Hive / HIVE-23909

ClassCastException: org.apache.hadoop.hive.ql.exec.vector.LongColumnVector cannot be cast to org.apache.hadoop.hive.ql.exec.vector.DecimalColumnVector


Details

    • Type: Bug
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: 3.1.2
    • Fix Version/s: None
    • Component/s: Vectorization
    • Labels: None

    Description

      Query "select ... order by nvl(<decimal column>, 0)" fails with "ClassCastException: org.apache.hadoop.hive.ql.exec.vector.LongColumnVector cannot be cast to org.apache.hadoop.hive.ql.exec.vector.DecimalColumnVector".


      The query works fine:

      • if I replace the constant 0 with 0.0, or
      • if I change the column type to int, or
      • if I set hive.vectorized.execution.enabled to false.

      The query fails in CDH 7.0.3 (Hive 3.1.2), but works fine in HDP 3.0.1 (Hive 3.1.0).


      Reproducer
      create external table foo (a decimal(10,5) ) location 'file:/tmp/foo';
      INSERT INTO TABLE foo values (1), (NULL), (2);
      
      set hive.vectorized.execution.enabled = true;
      select * from foo order by nvl(a,0); 
      
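      For reference, the workarounds above applied to this reproducer; the explicit-cast variant is my own addition and follows from the same reasoning (make the constant decimal-typed), but I have not re-run it:

```sql
-- Fails with the ClassCastException when vectorization is on:
select * from foo order by nvl(a, 0);

-- Works: the decimal literal keeps the NVL output decimal-typed
select * from foo order by nvl(a, 0.0);

-- Presumably equivalent (untested here): cast the constant explicitly
select * from foo order by nvl(a, cast(0 as decimal(10,5)));

-- Works: fall back to row-mode execution
set hive.vectorized.execution.enabled = false;
select * from foo order by nvl(a, 0);
```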


      Error message in CLI
      20/07/23 05:21:54 [HiveServer2-Background-Pool: Thread-80]: ERROR status.SparkJobMonitor: Job failed with java.lang.ClassCastException: org.apache.hadoop.hive.ql.exec.vector.LongColumnVector cannot be cast to org.apache.hadoop.hive.ql.exec.vector.DecimalColumnVector
      java.util.concurrent.ExecutionException: Exception thrown by job
              at org.apache.spark.JavaFutureActionWrapper.getImpl(FutureAction.scala:337)
              at org.apache.spark.JavaFutureActionWrapper.get(FutureAction.scala:342)
              at org.apache.hive.spark.client.RemoteDriver$JobWrapper.call(RemoteDriver.java:382)
              at org.apache.hive.spark.client.RemoteDriver$JobWrapper.call(RemoteDriver.java:343)
              at java.util.concurrent.FutureTask.run(FutureTask.java:266)
              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
              at java.lang.Thread.run(Thread.java:748)
      Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 8.0 failed 4 times, most recent failure: Lost task 0.3 in stage 8.0 (TID 15, den01eda.us.oracle.com, executor 2): java.lang.IllegalStateException: Hit error while closing operators - failing tree: org.apache.hadoop.hive.ql.metadata.HiveException: Error evaluating a
              at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.close(SparkMapRecordHandler.java:203)
              at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.closeRecordProcessor(HiveMapFunctionResultList.java:58)
              at org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList.hasNext(HiveBaseFunctionResultList.java:96)
              at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42)
              at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
              at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
              at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
              at org.apache.spark.scheduler.Task.run(Task.scala:123)
              at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
              at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1315)
              at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
              at java.lang.Thread.run(Thread.java:748)
      Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error evaluating a
              at org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:149)
              at org.apache.hadoop.hive.ql.exec.Operator.vectorForward(Operator.java:969)
              at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:126)
              at org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.closeOp(VectorMapOperator.java:987)
              at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:732)
              at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.close(SparkMapRecordHandler.java:180)
              ... 13 more
      Caused by: java.lang.ClassCastException: org.apache.hadoop.hive.ql.exec.vector.LongColumnVector cannot be cast to org.apache.hadoop.hive.ql.exec.vector.DecimalColumnVector
              at org.apache.hadoop.hive.ql.exec.vector.DecimalColumnVector.setElement(DecimalColumnVector.java:130)
              at org.apache.hadoop.hive.ql.exec.vector.expressions.VectorCoalesce.evaluate(VectorCoalesce.java:124)
              at org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:146)
              ... 18 more
      
      Driver stacktrace:
              at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1889)
              at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1877)
              at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1876)
              at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
              at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
              at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1876)
              at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
              at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
              at scala.Option.foreach(Option.scala:257)
              at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
              at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2110)
              at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2059)
              at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2048)
              at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
      Caused by: java.lang.IllegalStateException: Hit error while closing operators - failing tree: org.apache.hadoop.hive.ql.metadata.HiveException: Error evaluating a
              at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.close(SparkMapRecordHandler.java:203)
              at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.closeRecordProcessor(HiveMapFunctionResultList.java:58)
              at org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList.hasNext(HiveBaseFunctionResultList.java:96)
              at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42)
              at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
              at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
              at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
              at org.apache.spark.scheduler.Task.run(Task.scala:123)
              at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
              at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1315)
              at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
              at java.lang.Thread.run(Thread.java:748)
      Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error evaluating a
              at org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:149)
              at org.apache.hadoop.hive.ql.exec.Operator.vectorForward(Operator.java:969)
              at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:126)
              at org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.closeOp(VectorMapOperator.java:987)
              at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:732)
              at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.close(SparkMapRecordHandler.java:180)
              ... 13 more
      Caused by: java.lang.ClassCastException: org.apache.hadoop.hive.ql.exec.vector.LongColumnVector cannot be cast to org.apache.hadoop.hive.ql.exec.vector.DecimalColumnVector
              at org.apache.hadoop.hive.ql.exec.vector.DecimalColumnVector.setElement(DecimalColumnVector.java:130)
              at org.apache.hadoop.hive.ql.exec.vector.expressions.VectorCoalesce.evaluate(VectorCoalesce.java:124)
              at org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:146)
              ... 18 more
      
      FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed during runtime. Please check stacktrace for the root cause.
      20/07/23 05:21:54 [HiveServer2-Background-Pool: Thread-80]: ERROR ql.Driver: FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed during runtime. Please check stacktrace for the root cause.
      20/07/23 05:21:54 [HiveServer2-Background-Pool: Thread-80]: ERROR operation.Operation: Error running hive query: 
      org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed during runtime. Please check stacktrace for the root cause.
              at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:352) ~[hive-service-3.1.2000.7.0.3.0-79.jar:3.1.2000.7.0.3.0-79]
              at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:231) ~[hive-service-3.1.2000.7.0.3.0-79.jar:3.1.2000.7.0.3.0-79]
              at org.apache.hive.service.cli.operation.SQLOperation.access$600(SQLOperation.java:87) ~[hive-service-3.1.2000.7.0.3.0-79.jar:3.1.2000.7.0.3.0-79]
              at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:326) [hive-service-3.1.2000.7.0.3.0-79.jar:3.1.2000.7.0.3.0-79]
              at java.security.AccessController.doPrivileged(Native Method) [?:1.8.0_181]
              at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_181]
              at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876) [hadoop-common-3.1.1.7.0.3.0-79.jar:?]
              at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:344) [hive-service-3.1.2000.7.0.3.0-79.jar:3.1.2000.7.0.3.0-79]
              at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_181]
              at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_181]
              at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_181]
              at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_181]
              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
              at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
      Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Spark job failed during runtime. Please check stacktrace for the root cause.
              at org.apache.hadoop.hive.ql.exec.spark.SparkTask.getSparkJobInfo(SparkTask.java:498) ~[hive-exec-3.1.2000.7.0.3.0-79.jar:3.1.2000.7.0.3.0-79]
              at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:156) ~[hive-exec-3.1.2000.7.0.3.0-79.jar:3.1.2000.7.0.3.0-79]
              at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212) ~[hive-exec-3.1.2000.7.0.3.0-79.jar:3.1.2000.7.0.3.0-79]
              at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103) ~[hive-exec-3.1.2000.7.0.3.0-79.jar:3.1.2000.7.0.3.0-79]
              at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2302) ~[hive-exec-3.1.2000.7.0.3.0-79.jar:3.1.2000.7.0.3.0-79]
              at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1954) ~[hive-exec-3.1.2000.7.0.3.0-79.jar:3.1.2000.7.0.3.0-79]
              at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1627) ~[hive-exec-3.1.2000.7.0.3.0-79.jar:3.1.2000.7.0.3.0-79]
              at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1387) ~[hive-exec-3.1.2000.7.0.3.0-79.jar:3.1.2000.7.0.3.0-79]
              at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1381) ~[hive-exec-3.1.2000.7.0.3.0-79.jar:3.1.2000.7.0.3.0-79]
              at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157) ~[hive-exec-3.1.2000.7.0.3.0-79.jar:3.1.2000.7.0.3.0-79]
              at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:229) ~[hive-service-3.1.2000.7.0.3.0-79.jar:3.1.2000.7.0.3.0-79]
              ... 13 more
      Caused by: org.apache.spark.SparkException: java.util.concurrent.ExecutionException: Exception thrown by job
              at org.apache.spark.JavaFutureActionWrapper.getImpl(FutureAction.scala:337)
              at org.apache.spark.JavaFutureActionWrapper.get(FutureAction.scala:342)
              at org.apache.hive.spark.client.RemoteDriver$JobWrapper.call(RemoteDriver.java:382)
              at org.apache.hive.spark.client.RemoteDriver$JobWrapper.call(RemoteDriver.java:343)
              at java.util.concurrent.FutureTask.run(FutureTask.java:266)
              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
              at java.lang.Thread.run(Thread.java:748)
      Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 8.0 failed 4 times, most recent failure: Lost task 0.3 in stage 8.0 (TID 15, den01eda.us.oracle.com, executor 2): java.lang.IllegalStateException: Hit error while closing operators - failing tree: org.apache.hadoop.hive.ql.metadata.HiveException: Error evaluating a
              at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.close(SparkMapRecordHandler.java:203)
              at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.closeRecordProcessor(HiveMapFunctionResultList.java:58)
              at org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList.hasNext(HiveBaseFunctionResultList.java:96)
              at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42)
              at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
              at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
              at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
              at org.apache.spark.scheduler.Task.run(Task.scala:123)
              at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
              at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1315)
              at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
              at java.lang.Thread.run(Thread.java:748)
      Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error evaluating a
              at org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:149)
              at org.apache.hadoop.hive.ql.exec.Operator.vectorForward(Operator.java:969)
              at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:126)
              at org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.closeOp(VectorMapOperator.java:987)
              at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:732)
              at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.close(SparkMapRecordHandler.java:180)
              ... 13 more
      Caused by: java.lang.ClassCastException: org.apache.hadoop.hive.ql.exec.vector.LongColumnVector cannot be cast to org.apache.hadoop.hive.ql.exec.vector.DecimalColumnVector
              at org.apache.hadoop.hive.ql.exec.vector.DecimalColumnVector.setElement(DecimalColumnVector.java:130)
              at org.apache.hadoop.hive.ql.exec.vector.expressions.VectorCoalesce.evaluate(VectorCoalesce.java:124)
              at org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:146)
              ... 18 more
      
      Driver stacktrace:
              at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1889)
              at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1877)
              at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1876)
              at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
              at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
              at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1876)
              at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
              at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
              at scala.Option.foreach(Option.scala:257)
              at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
              at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2110)
              at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2059)
              at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2048)
              at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
      Caused by: java.lang.IllegalStateException: Hit error while closing operators - failing tree: org.apache.hadoop.hive.ql.metadata.HiveException: Error evaluating a
              at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.close(SparkMapRecordHandler.java:203)
              at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.closeRecordProcessor(HiveMapFunctionResultList.java:58)
              at org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList.hasNext(HiveBaseFunctionResultList.java:96)
              at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42)
              at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
              at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
              at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
              at org.apache.spark.scheduler.Task.run(Task.scala:123)
              at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
              at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1315)
              at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
              at java.lang.Thread.run(Thread.java:748)
      Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error evaluating a
              at org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:149)
              at org.apache.hadoop.hive.ql.exec.Operator.vectorForward(Operator.java:969)
              at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:126)
              at org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.closeOp(VectorMapOperator.java:987)
              at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:732)
              at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.close(SparkMapRecordHandler.java:180)
              ... 13 more
      Caused by: java.lang.ClassCastException: org.apache.hadoop.hive.ql.exec.vector.LongColumnVector cannot be cast to org.apache.hadoop.hive.ql.exec.vector.DecimalColumnVector
              at org.apache.hadoop.hive.ql.exec.vector.DecimalColumnVector.setElement(DecimalColumnVector.java:130)
              at org.apache.hadoop.hive.ql.exec.vector.expressions.VectorCoalesce.evaluate(VectorCoalesce.java:124)
              at org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:146)
              ... 18 more
      
              at org.apache.hive.spark.client.SparkClientImpl$ClientProtocol.handle(SparkClientImpl.java:595) ~[hive-exec-3.1.2000.7.0.3.0-79.jar:3.1.2000.7.0.3.0-79]
              at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_181]
              at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_181]
              at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181]
              at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181]
              at org.apache.hive.spark.client.rpc.RpcDispatcher.handleCall(RpcDispatcher.java:121) ~[hive-exec-3.1.2000.7.0.3.0-79.jar:3.1.2000.7.0.3.0-79]
              at org.apache.hive.spark.client.rpc.RpcDispatcher.channelRead0(RpcDispatcher.java:80) ~[hive-exec-3.1.2000.7.0.3.0-79.jar:3.1.2000.7.0.3.0-79]
              at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
              at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
              at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
              at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
              at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
              at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
              at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
              at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
              at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
              at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
              at io.netty.handler.codec.ByteToMessageCodec.channelRead(ByteToMessageCodec.java:103) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
              at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
              at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
              at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
              at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
              at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
              at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
              at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
              at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
              at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
              at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
              at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
              at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
              at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) ~[netty-all-4.1.17.Final.jar:4.1.17.Final]
              ... 1 more
      ERROR : FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed during runtime. Please check stacktrace for the root cause.
      Error: Error while processing statement: FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed during runtime. Please check stacktrace for the root cause. (state=42000,code=3)
      

          People

            Assignee: Unassigned
            Reporter: Gabriel C Balan (gabriel.balan)
            Votes: 0
            Watchers: 2