Description
Issue
val lines = ssc.socketTextStream("localhost", 9999)
val words = lines.flatMap(_.split(" "))
val mappedStream = words.transform { rdd =>
  val c = rdd.count()   // eager action: runs while the batch's jobs are generated
  rdd.map(x => s"$c $x")
}
mappedStream.foreachRDD(rdd => rdd.foreach(x => println(x)))
For every batch, two Spark jobs are created: one for the count() and one for the output operation. Only the second one is associated with the streaming output operation and shown on the batch page of the streaming UI.
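
For reference, a self-contained version of the snippet above; the app name, master URL, and one-second batch interval are assumptions added for illustration, not part of this report:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object TwoJobsPerBatch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("TwoJobsPerBatch").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(1))

    val lines = ssc.socketTextStream("localhost", 9999)
    val words = lines.flatMap(_.split(" "))
    val mappedStream = words.transform { rdd =>
      val c = rdd.count()   // eager action: runs at job generation time
      rdd.map(x => s"$c $x")
    }
    mappedStream.foreachRDD(rdd => rdd.foreach(x => println(x)))

    ssc.start()
    ssc.awaitTermination()
  }
}

With netcat feeding the socket (nc -lk 9999), the jobs tab of the Spark UI shows two jobs per batch, but only the foreachRDD job appears on the batch page.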
Investigation
The first action, rdd.count(), is invoked by JobGenerator.generateJobs when it materializes the batch's RDDs. At that point the batch time and output op id are not available as local properties on the SparkContext, because JobScheduler sets them only later, when it actually runs the generated job.
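
A minimal sketch of this ordering, using simplified stand-ins for the job generation and scheduling steps; the names, property keys, and values here are illustrative and do not come from the actual Spark code:

import scala.collection.mutable

object OrderingModel {
  // Stand-in for SparkContext local properties (thread-local in real Spark).
  val localProps = mutable.Map.empty[String, String]

  def runAction(name: String): Unit =
    println(s"$name sees batchTime=${localProps.get("batchTime")}, " +
      s"outputOpId=${localProps.get("outputOpId")}")

  def main(args: Array[String]): Unit = {
    // JobGenerator.generateJobs: getOrCompute forces transform's closure,
    // so the eager count() runs here, before any properties are set.
    runAction("count() inside transform")    // prints batchTime=None

    // JobScheduler: sets the properties, then runs the job that contains
    // the output operation.
    localProps("batchTime") = "1000"
    localProps("outputOpId") = "0"
    runAction("foreach in output op")        // prints batchTime=Some(1000)
  }
}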
Proposal
Delegate dstream.getOrCompute to JobScheduler so that all RDD actions, including those triggered while materializing the DStream, run on the SparkContext with the correct local properties (batch time and output op id) already set.
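
A minimal sketch of what this delegation could look like, in terms of the same illustrative stand-ins as above; this is an assumption about the shape of the fix, not the actual patch:

object ProposalModel {
  val localProps = scala.collection.mutable.Map.empty[String, String]

  def runAction(name: String): Unit =
    println(s"$name sees batchTime=${localProps.get("batchTime")}")

  def main(args: Array[String]): Unit = {
    // JobGenerator side: capture the RDD materialization lazily instead of
    // forcing it during generateJobs.
    val job: () => Unit = () => {
      runAction("count() inside transform")  // deferred until job() is called
      runAction("foreach in output op")
    }

    // JobScheduler side: set the local properties first, then run the job,
    // so every RDD action, including the eager count(), sees them.
    localProps("batchTime") = "1000"
    localProps("outputOpId") = "0"
    job()
  }
}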