Spark > SPARK-20624 (SPIP: Add better handling for node shutdown) > SPARK-35533

Do not drop cached RDD blocks to accommodate blocks from decommissioned block manager if enough memory is not available


Details

    • Type: Sub-task
    • Status: In Progress
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 3.1.1
    • Fix Version/s: None
    • Component/s: Spark Core
    • Labels: None

    Description

      In the current block manager decommissioning flow, blocks already cached in memory on a peer are dropped if there is not enough memory to accommodate blocks migrated from the decommissioned block manager.

       

      Why should blocks from a decommissioned block manager take higher priority than blocks that are already cached?

      We should place blocks from a decommissioned block manager on a peer block manager only when enough memory is available, rather than evicting already cached blocks; a sketch of this check follows.
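
      A minimal sketch of the proposed check, in Scala. The names below (PeerBlockManagerSketch, freeStorageMemory, storeDecommissionedBlock, MigratedBlock) are illustrative assumptions, not Spark's actual BlockManager/MemoryStore API: the peer accepts a migrated block only when its free storage memory can hold it, and otherwise rejects it instead of dropping cached blocks.

      {code:scala}
      // Illustrative sketch only; names are hypothetical, not Spark internals.
      case class MigratedBlock(blockId: String, sizeInBytes: Long)

      class PeerBlockManagerSketch(private var freeStorageMemory: Long) {

        /**
         * Try to store a block migrated from a decommissioned block manager.
         * Returns true if stored, false if rejected for lack of free memory.
         */
        def storeDecommissionedBlock(block: MigratedBlock): Boolean = {
          if (block.sizeInBytes <= freeStorageMemory) {
            // Enough free storage memory: accept the block without touching
            // blocks that are already cached on this peer.
            freeStorageMemory -= block.sizeInBytes
            true
          } else {
            // Not enough room: reject the block so the decommissioner can try
            // another peer (or fall back to disk), instead of evicting
            // already cached blocks to make space.
            false
          }
        }
      }
      {code}

      For example, a peer with 512 MB of free storage memory would accept a 100 MB migrated block but reject a 600 MB one, leaving its cached blocks untouched in both cases.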


          People

            Assignee: Unassigned
            Reporter: abhishek kumar tiwari (abhishek_tiwari)
            Votes: 0
            Watchers: 2
