Hadoop YARN / YARN-5004

FS: queue can use more than the max resources set


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: 2.8.0
    • Fix Version/s: None
    • Component/s: fairscheduler, yarn
    • Labels: None

    Description

      We found a case where a queue was using 301 vcores while its maximum was set to 300; the same was true for its memory usage. The documentation (see the Hadoop 2.7.1 FairScheduler documentation on apache.org) states:
      "A queue will never be assigned a container that would put its aggregate usage over this limit."
      Either the documentation or the behaviour is clearly incorrect.
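For context, a queue's maximum is set via maxResources in the FairScheduler allocation file (fair-scheduler.xml). A minimal sketch of the kind of configuration under which the overshoot was observed; the queue name is hypothetical:

```xml
<?xml version="1.0"?>
<allocations>
  <!-- Hypothetical queue illustrating the limit discussed above:
       300 vcores max, yet usage of 301 vcores was observed. -->
  <queue name="analytics">
    <maxResources>300000 mb, 300 vcores</maxResources>
  </queue>
</allocations>
```

The scheduler is expected to refuse any container assignment that would push the queue's aggregate usage past these values, which is the guarantee the description quotes from the documentation.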

      People

        Assignee: Yufei Gu
        Reporter: Yufei Gu
        Votes: 0
        Watchers: 3

              Dates

                Created:
                Updated:
                Resolved: