Spark / SPARK-33005 Kubernetes GA Preparation / SPARK-33711

Race condition in Spark k8s Pod lifecycle manager that leads to shutdowns


Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Versions: 2.3.4, 2.4.7, 3.0.0, 3.1.0, 3.2.0
    • Fix Versions: 3.0.2, 3.1.1, 3.2.0
    • Component: Kubernetes
    • Labels: None

    Description

      Watching pods (ExecutorPodsWatchSnapshotSource) reports single-pod changes, which can wrongly lead the executor pod lifecycle manager to detect missing pods (pods known by the scheduler backend but absent from the pod snapshots).

      A key indicator of this issue is the following log message:

      "The executor with ID [some_id] was not found in the cluster but we didn't get a reason why. Marking the executor as failed. The executor may have been deleted but the driver missed the deletion event."

      So one of the problems is that missing-pod detection runs even when only a single pod has changed, without a full, consistent snapshot of all the pods (such full snapshots come from ExecutorPodsPollingSnapshotSource). The other is a possible race between the executor pod lifecycle manager and the scheduler backend.
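
      The core of the first problem can be sketched in a few lines. This is a simplified illustration, not the actual Spark code: the class and method names below (MissingPodDetectionSketch, detectMissing) are hypothetical, and it models a snapshot as a plain set of executor IDs. It shows why comparing the scheduler backend's known executors against a partial, single-pod snapshot flags healthy executors as missing, while comparing against a full polled snapshot does not.

      ```java
      import java.util.Arrays;
      import java.util.Collections;
      import java.util.HashSet;
      import java.util.Set;

      public class MissingPodDetectionSketch {

          // Naive missing-pod detection: anything the scheduler backend knows
          // about but that is absent from the snapshot is reported missing.
          static Set<String> detectMissing(Set<String> knownByBackend,
                                           Set<String> snapshot) {
              Set<String> missing = new HashSet<>(knownByBackend);
              missing.removeAll(snapshot);
              return missing;
          }

          public static void main(String[] args) {
              Set<String> known =
                  new HashSet<>(Arrays.asList("exec-1", "exec-2", "exec-3"));

              // Watch-based update: triggered by a change to one pod, so the
              // view contains only that pod -- not a consistent cluster state.
              Set<String> partial =
                  new HashSet<>(Collections.singleton("exec-2"));
              // exec-1 and exec-3 are wrongly reported as missing.
              System.out.println("partial snapshot -> " +
                  detectMissing(known, partial));

              // Polling-based update: a full listing of all executor pods.
              Set<String> full =
                  new HashSet<>(Arrays.asList("exec-1", "exec-2", "exec-3"));
              // Nothing is missing when the snapshot is consistent.
              System.out.println("full snapshot -> " +
                  detectMissing(known, full));
          }
      }
      ```

      This is why restricting missing-pod detection to full snapshots (and coordinating with the scheduler backend to avoid the race) removes the spurious "executor not found in the cluster" failures.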


          People

            Assignee: attilapiros Attila Zsolt Piros
            Reporter: attilapiros Attila Zsolt Piros