Details
Type: Bug
Status: Open
Priority: Minor
Resolution: Unresolved
Affects Version/s: 0.2.0
Fix Version/s: None
Component/s: None
Environment: ppc64le
Description
When we tried to use Jupyter Notebook with the Apache Toree kernel, we could not get Dataset operations working, specifically with a case class defined in the notebook; the job fails with the following ClassCastException:
{{Name: org.apache.spark.SparkException
Message: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost): java.lang.ClassCastException: $line45.$read$$iw$$iw$DataPoint cannot be cast to $line45.$read$$iw$$iw$DataPoint
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:246)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:240)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)}}
Note that the class on both sides of the failing cast is the same REPL-generated name ($line45.$read$$iw$$iw$DataPoint), which suggests the class was loaded by two different class loaders. The commands we issued are as follows:
{{import org.apache.spark.sql.SparkSession
// In Toree the session is predefined as "spark"; getOrCreate() returns it.
val spark = SparkSession.builder.getOrCreate()
import spark.implicits._
// Case class defined in a notebook cell, i.e. in the REPL.
case class DataPoint(element: Long)
val ds = spark.range(0, 10, 1, 1).map(x => DataPoint(x))
ds.collect().foreach(println)}}
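For comparison, here is a sketch of the same pipeline without a notebook-defined case class. We would expect it to avoid the cast, since only built-in types are involved; this is an assumption on our part, not something verified in this report.
{{import org.apache.spark.sql.SparkSession
val spark = SparkSession.builder.getOrCreate()
import spark.implicits._
// Map to a plain Long instead of a REPL-defined case class,
// so no notebook-generated class has to reach the executors.
val ds = spark.range(0, 10, 1, 1).map(x => x * 2L)
ds.collect().foreach(println)}}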
We were using the latest version of Toree, which has support for Spark 2.0.
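A workaround sometimes suggested for this class of REPL issue is to compile the case class into a jar and load it with Toree's %AddJar magic, so the class does not originate from a notebook cell. A hedged sketch follows; the package name com.example, the jar name, and its path are hypothetical, and whether this avoids the failure here is unverified.
{{// DataPoint.scala, compiled separately into datapoint.jar
package com.example
case class DataPoint(element: Long)}}
Then, in a notebook cell:
{{%AddJar file:/path/to/datapoint.jar
import com.example.DataPoint
import spark.implicits._
val ds = spark.range(0, 10, 1, 1).map(x => DataPoint(x))
ds.collect().foreach(println)}}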