Zeppelin / ZEPPELIN-3433

Incorrect status shown for %spark2 interpreter


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Blocker
    • Resolution: Invalid
    • Affects Version/s: 0.8.0
    • Fix Version/s: 0.8.0
    • Component/s: zeppelin-server
    • Labels: None

    Description

      The %spark2 interpreter shows the status 'Finished' even when the paragraph has actually failed.

      Steps to reproduce:
      1) Create a %spark2.conf paragraph pointing spark.jars at a jar file that does not exist.

      %spark2.conf
      spark.jars /tmp/zep_test/spark2-examples-assembl-2.3.0.3.0.0.0-1260.jar
      spark.app.name test_new_1
      spark.executor.instances 2
      

      2) Now create the Spark context by running any Spark statement.

      %spark
      sc.version
      

      3) Though the Spark context is not created and the paragraph did not run successfully, the status is shown as 'Finished' without any error message.

      The exceptions below appear in the Zeppelin logs, but are not shown in the UI.

      INFO [2018-04-27 14:58:55,014] ({pool-3-thread-2} Logging.scala[logInfo]:54) - Bound SparkUI to 0.0.0.0, and started at http://ctr-e138-1518143905142-267605-01-000002.hwx.site:4040
      ERROR [2018-04-27 14:58:55,030] ({pool-3-thread-2} Logging.scala[logError]:91) - Failed to add /tmp/zep_test/spark2-examples-assembl-2.3.0.3.0.0.0-1260.jar to Spark environment
      java.io.FileNotFoundException: Jar /tmp/zep_test/spark2-examples-assembl-2.3.0.3.0.0.0-1260.jar not found
      	at org.apache.spark.SparkContext.addJarFile$1(SparkContext.scala:1814)
      	at org.apache.spark.SparkContext.addJar(SparkContext.scala:1842)
      	at org.apache.spark.SparkContext$$anonfun$12.apply(SparkContext.scala:457)
      	at org.apache.spark.SparkContext$$anonfun$12.apply(SparkContext.scala:457)
      	at scala.collection.immutable.List.foreach(List.scala:381)
      	at org.apache.spark.SparkContext.<init>(SparkContext.scala:457)
      	at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2493)
      	at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:930)
      	at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:921)
      	at scala.Option.getOrElse(Option.scala:121)
      	at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921)
      	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
      	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      	at java.lang.reflect.Method.invoke(Method.java:498)
      	at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:44)
      	at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:39)
      	at org.apache.zeppelin.spark.OldSparkInterpreter.createSparkSession(OldSparkInterpreter.java:345)
      	at org.apache.zeppelin.spark.OldSparkInterpreter.getSparkSession(OldSparkInterpreter.java:219)
      	at org.apache.zeppelin.spark.OldSparkInterpreter.open(OldSparkInterpreter.java:738)
      	at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:61)
      	at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
      	at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:611)
      	at org.apache.zeppelin.scheduler.Job.run(Job.java:186)
      	at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:140)
      	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
      	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
      	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      	at java.lang.Thread.run(Thread.java:748)
       WARN [2018-04-27 14:58:55,074] ({pool-3-thread-2} Logging.scala[logWarning]:66) - Fair Scheduler configuration file not found so jobs will be scheduled in FIFO order. To use fair scheduling, configure pools in fairscheduler.xml or set spark.scheduler.allocation.file to a file that contains the configuration.
      

      The correct status (ERROR) is shown for the %livy2 interpreter in this scenario; the issue occurs only with the %spark2 interpreter.
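      From the stack trace above, SparkContext.addJar logs the FileNotFoundException but does not propagate it, so interpreter startup continues and the paragraph is marked 'Finished'. The sketch below, in plain Java with hypothetical names (not Zeppelin's actual API), illustrates the kind of jar-path pre-check that would let the interpreter surface an ERROR status for a bad spark.jars entry instead of only a log line:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: validate the comma-separated paths in spark.jars
// before they reach SparkContext.addJar, so a missing file can be turned
// into an ERROR paragraph status instead of a silently logged exception.
// Class and method names are illustrative, not Zeppelin's real code.
public class JarPathCheck {

    // Returns the subset of jar paths that do not exist on the local filesystem.
    static List<String> missingJars(String sparkJars) {
        List<String> missing = new ArrayList<>();
        for (String path : sparkJars.split(",")) {
            String p = path.trim();
            if (!p.isEmpty() && !new File(p).exists()) {
                missing.add(p);
            }
        }
        return missing;
    }

    public static void main(String[] args) {
        // The path from the reproduction steps above.
        String conf = "/tmp/zep_test/spark2-examples-assembl-2.3.0.3.0.0.0-1260.jar";
        List<String> missing = missingJars(conf);
        if (!missing.isEmpty()) {
            // In the interpreter this check would map to an ERROR result
            // for the paragraph, matching what %livy2 already reports.
            System.out.println("Missing jars: " + missing);
        }
    }
}
```

      A check of this shape would make %spark2 behave like %livy2 in this scenario, failing the paragraph up front rather than after SparkContext creation has already swallowed the exception.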


            People

              Assignee: Unassigned
              Reporter: Supreeth Sharma (ssharma@hortonworks.com)
              Votes: 0
              Watchers: 2
