Spark / SPARK-38960

Spark should fail fast if the initial memory (set by "spark.executor.extraJavaOptions") is too large for the executor to start


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Won't Fix
    • Affects Version/s: 3.4.0
    • Fix Version/s: None
    • Component/s: Spark Core, Spark Submit, YARN
    • Labels: None

    Description

      If you set the initial heap size (via "spark.executor.extraJavaOptions=-Xms{XXX}G") larger than the maximum heap size (set by "spark.executor.memory"), e.g.:

           spark.executor.memory=1G

           spark.executor.extraJavaOptions=-Xms2G

       
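      For illustration, the same mismatch can be reproduced when building the configuration programmatically. This is only a sketch (the application name is made up); the two configuration keys are the ones from this report:

           import org.apache.spark.SparkConf

           // Maximum executor heap is 1G, but the extra JVM options ask for a 2G initial heap,
           // so every executor JVM dies during startup.
           val conf = new SparkConf()
             .setAppName("xms-repro")
             .set("spark.executor.memory", "1G")
             .set("spark.executor.extraJavaOptions", "-Xms2G")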

      From the driver process you just see executor failures with no warning, since the more meaningful errors are buried in the executor logs.

      E.g., on YARN, you see:

      Error occurred during initialization of VM
      Initial heap size set to a larger value than the maximum heap size

      Instead, we should fail fast with a clear error message in the driver logs.
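      A minimal sketch of the kind of fail-fast check this asks for, assuming it would run on the driver/submit side before any executors are requested. The object and method names (InitialHeapCheck, validateInitialHeap, toBytes) are hypothetical, not existing Spark APIs; only the two configuration keys come from the report:

           import org.apache.spark.SparkConf

           object InitialHeapCheck {
             // Convert JVM size strings such as "2G", "512m" or "1048576" to bytes (illustrative parsing only).
             private def toBytes(size: String): Long = {
               val s = size.trim.toLowerCase
               val (num, mult) = s.last match {
                 case 'k' => (s.dropRight(1), 1L << 10)
                 case 'm' => (s.dropRight(1), 1L << 20)
                 case 'g' => (s.dropRight(1), 1L << 30)
                 case _   => (s, 1L)
               }
               num.toLong * mult
             }

             // Throw on the driver if -Xms in spark.executor.extraJavaOptions exceeds spark.executor.memory.
             def validateInitialHeap(conf: SparkConf): Unit = {
               val maxHeap = toBytes(conf.get("spark.executor.memory", "1g"))
               val initialHeap = conf.get("spark.executor.extraJavaOptions", "")
                 .split("\\s+")
                 .collectFirst { case opt if opt.startsWith("-Xms") && opt.length > "-Xms".length =>
                   toBytes(opt.stripPrefix("-Xms"))
                 }
               initialHeap.filter(_ > maxHeap).foreach { xms =>
                 throw new IllegalArgumentException(
                   s"Initial executor heap (-Xms, $xms bytes) is larger than spark.executor.memory " +
                   s"($maxHeap bytes); executor JVMs would fail to start.")
               }
             }
           }

      A check along these lines could be invoked from spark-submit or from executor launch-time validation, so the misconfiguration surfaces once in the driver log instead of in every failed container.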



          People

            Assignee: Unassigned
            Reporter: BingKun Pan (panbingkun)
            Votes: 0
            Watchers: 2

            Dates

              Created:
              Updated:
              Resolved:
