TOREE-424

ClassCastException on Dataset with case class


Details

    • Type: Bug
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: 0.2.0
    • Fix Version/s: None
    • Component/s: Kernel
    • Labels: None
    • Environment: ppc64le

    Description

      When we tried to use Jupyter Notebook with the Apache Toree kernel, we could not get a Dataset working, specifically one built from a case class: it throws a ClassCastException as follows:

      {{Name: org.apache.spark.SparkException
      Message: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost): java.lang.ClassCastException: $line45.$read$$iw$$iw$DataPoint cannot be cast to $line45.$read$$iw$$iw$DataPoint
      at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
      at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
      at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
      at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:246)
      at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:240)
      at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
      at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
      at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
      at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
      at org.apache.spark.scheduler.Task.run(Task.scala:86)
      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      at java.lang.Thread.run(Thread.java:745)}}

      The commands we issued are as follows:

      {{import org.apache.spark.sql.SparkSession

      // The Toree kernel already provides a SparkSession; getOrCreate() returns it
      val spark = SparkSession.builder.getOrCreate()
      import spark.implicits._

      // Case class defined in the notebook (REPL) session
      case class DataPoint(element: Long)

      // Collecting the Dataset of case-class instances triggers the ClassCastException
      val ds = spark.range(0, 10, 1, 1).map(x => DataPoint(x))
      ds.collect().foreach(println)}}
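
      For comparison, the following is a minimal sketch of the same range-map-collect pipeline that avoids the notebook-defined case class; since the generated code then only handles built-in Long values rather than the REPL-wrapped DataPoint class, it should not hit this cast (the variable name ds2 is only illustrative):

      {{// Hypothetical comparison: same range and collect, but map to a plain Long
      // instead of an instance of a case class defined in the notebook session.
      val ds2 = spark.range(0, 10, 1, 1).map(x => x * 2)
      ds2.collect().foreach(println)}}

      If this variant runs cleanly in the same session, the failure is isolated to the encoder generated for the REPL-defined case class, which matches the $line45.$read$$iw$$iw$DataPoint cast in the stack trace above.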

      We were using the latest version of Toree, which has support for Spark 2.0:

      pip install https://dist.apache.org/repos/dist/dev/incubator/toree/0.2.0/snapshots/dev1/toree-pip/toree-0.2.0.dev1.tar.gz

      Attachments

        1. Screen Shot 2017-07-14 at 11.39.22 AM.png (255 kB, uploaded by Josiah Samuel Sathiadass)


          People

            Assignee: Unassigned
            Reporter: Josiah Samuel Sathiadass (josam)
            Votes: 0
            Watchers: 1
