SPARK-20905

When running Spark with yarn-client, a large executor-cores value leads to bad performance.


Details

    • Type: Question
    • Status: Resolved
    • Priority: Major
    • Resolution: Invalid
    • Affects Version/s: 2.0.0
    • Fix Version/s: None
    • Component/s: Examples
    • Labels: None

Description

Hi, all:
When I run a training job in Spark with yarn-client and set executor-cores=20 (less than the 24 vcores per node) and executor-num=4 (my cluster has 4 slaves), one node's computing time is always longer than the others'.
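For concreteness, a minimal sketch of that setup as SparkSession builder settings; the application name is a placeholder, and spark.executor.instances / spark.executor.cores are the standard config keys behind the --num-executors / --executor-cores flags:

    import org.apache.spark.sql.SparkSession

    // Sketch of the reported setup: 4 wide executors with 20 cores each.
    // "TrainingJob" is a placeholder, not the actual application name.
    val spark = SparkSession.builder()
      .appName("TrainingJob")
      .master("yarn")                               // yarn-client mode in Spark 2.x
      .config("spark.submit.deployMode", "client")
      .config("spark.executor.instances", "4")      // executor-num=4
      .config("spark.executor.cores", "20")         // executor-cores=20
      .getOrCreate()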

I checked some blogs, and they say executor-cores should be set below 5 when there are many concurrent threads. I tried executor-cores=4 and executor-num=20, and that worked.
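The same sketch with the configuration that worked, again under the placeholder application name; the total core count (80) is unchanged, only the split between executor count and cores per executor differs:

    import org.apache.spark.sql.SparkSession

    // Sketch of the configuration that worked: 20 narrow executors, 4 cores each.
    // 20 x 4 = 80 total cores, the same as the 4 x 20 setup above.
    val spark = SparkSession.builder()
      .appName("TrainingJob")
      .master("yarn")
      .config("spark.submit.deployMode", "client")
      .config("spark.executor.instances", "20")     // executor-num=20
      .config("spark.executor.cores", "4")          // executor-cores=4
      .getOrCreate()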

But I don't know why. Can you explain? Thank you very much.


People

    Assignee: Unassigned
    Reporter: Cherry Zhang (Cherry2017)
    Votes: 0
    Watchers: 1
