SPARK-28484

spark-submit uses wrong SPARK_HOME with deploy-mode "cluster"


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Incomplete
    • Affects Version/s: 2.4.3
    • Fix Version/s: None
    • Component/s: Deploy

    Description

      When submitting an application jar to a remote Spark cluster with spark-submit and deploy-mode = "cluster", the driver command that is issued on one of the workers appears to be configured with the SPARK_HOME of the local machine from which spark-submit was called, not with the SPARK_HOME of the worker where the driver actually runs.

       

      I.e., if I have Spark installed locally under e.g. /opt/apache-spark and Hadoop under /usr/lib/hadoop-3.2.0, but the cluster administrator has installed Spark under /usr/local/spark on the workers, the command issued on the worker still looks something like this:

       

      "/usr/lib/jvm/java/bin/java" "-cp" "/opt/apache-spark/conf:/etc/hadoop:/usr/lib/hadoop-3.2.0/share/hadoop/common/lib/:/usr/lib/hadoop-3.2.0/share/hadoop/common/:/usr/lib/hadoop-3.2.0/share/hadoop/hdfs:/usr/lib/hadoop-3.2.0/share/hadoop/hdfs/lib/:/usr/lib/hadoop-3.2.0/share/hadoop/hdfs/:/usr/lib/hadoop-3.2.0/share/hadoop/mapreduce/lib/:/usr/lib/hadoop-3.2.0/share/hadoop/mapreduce/:/usr/lib/hadoop-3.2.0/share/hadoop/yarn:/usr/lib/hadoop-3.2.0/share/hadoop/yarn/lib/:/usr/lib/hadoop-3.2.0/share/hadoop/yarn/" "-Xmx1024M" "-Dspark.jars=file:///some/application.jar" "-Dspark.driver.supervise=false" "-Dspark.submit.deployMode=cluster" "-Dspark.master=spark://<SPARK_MASTER>:7077" "-Dspark.app.name=<APPNAME>" "-Dspark.rpc.askTimeout=10s" "org.apache.spark.deploy.worker.DriverWrapper" "spark://Worker@<WORKER_HOST>:65000" "/some/application.jar" "some.class.Name"

       

      Is this expected behavior, and/or can I somehow control it?
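
      A possible mitigation (untested here; the paths assume the worker-side layout described above) is to point the driver explicitly at the worker's jars via spark.driver.extraClassPath, which is prepended to the driver's classpath:

      spark-submit \
        --deploy-mode cluster \
        --master spark://spark.example.com:7077 \
        --conf "spark.driver.extraClassPath=/usr/local/spark/jars/*:/etc/hadoop" \
        --class com.example.SparkApp \
        hdfs:/some/application.jar

      Alternatively, installing Spark under the same path on the submitting machine and on every worker sidesteps the mismatch entirely.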

       

      Steps to reproduce:

       

      1. Install Spark locally, with a SPARK_HOME that differs from the one used on the cluster (see the check sketched after these steps)

      2. Run: spark-submit --deploy-mode "cluster" --master "spark://spark.example.com:7077" --class "com.example.SparkApp" "hdfs:/some/application.jar"

      3. Observe that the application fails because some Spark and/or Hadoop classes cannot be found

       

      This applies to Spark Standalone; I haven't tried it with YARN.
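
      To confirm the mismatch, comparing SPARK_HOME on the two machines before submitting may help (the values shown are the layout assumed in the description):

      # on the submitting machine
      echo $SPARK_HOME    # e.g. /opt/apache-spark

      # on a worker
      echo $SPARK_HOME    # e.g. /usr/local/spark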


          People

            Assignee: Unassigned
            Reporter: Karl-Johan Wettin (kalle)
            Votes: 0
            Watchers: 2
