Spark / SPARK-37488

With enough resources, the task may still be permanently pending


Details

    • Type: Bug
    • Status: In Progress
    • Priority: Major
    • Resolution: Unresolved
    • Affects Versions: 3.0.3, 3.1.2, 3.2.0
    • Fix Versions: None
    • Components: Scheduler, Spark Core
    • Labels: None
    • Environment: Spark 3.1.2, Default Configuration

    Description

      // The online job imports Hive partition data into TiDB; the logic can be simplified as follows:
          import org.apache.spark.sql.Dataset;
          import org.apache.spark.sql.Row;
          import org.apache.spark.sql.SparkSession;
          import org.apache.spark.storage.StorageLevel;

          SparkSession testApp = SparkSession.builder()
              .master("local[*]")
              .appName("test app")
              .enableHiveSupport()
              .getOrCreate();

          // Read one Hive partition, cache it, then trigger an action.
          Dataset<Row> dataset = testApp.sql("select * from default.test where dt = '20211129'");
          dataset.persist(StorageLevel.MEMORY_AND_DISK());
          dataset.count();
      

      I have observed that tasks remain permanently pending, and the problem can be reproduced on every rerun.

      Since it is only reproducible in the online environment, I used Arthas at runtime to inspect the arguments and return values of the relevant methods in TaskSetManager.
      https://gist.github.com/guiyanakuang/431584f191645513552a937d16ae8fbd
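      For anyone who wants to repeat this inspection, an Arthas watch along these lines (an illustrative invocation, not the exact command behind the gist) prints the arguments and return value each time getAllowedLocalityLevel is called:

          watch org.apache.spark.scheduler.TaskSetManager getAllowedLocalityLevel '{params, returnObj}' -x 2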

      Because persist is called, pendingTasks.forHost holds a collection of pending tasks at the NODE_LOCAL level, keyed by the host where the cached partition blocks live. But the only resource Spark gets is the driver, so those tasks can never be scheduled at that level. getAllowedLocalityLevel returns the wrong locality level, so the tasks are never allowed to run at TaskLocality.ANY.

      The tasks stay pending permanently because each scheduling round is very short, so the locality-wait timeout never gets a chance to raise the allowed locality level.
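      The mechanism can be illustrated with a minimal, self-contained sketch (this is not Spark's TaskSetManager code; the host names, field names, and the simplified escalation rule are assumptions made for illustration). Pending tasks are keyed by the host that holds the cached blocks, the only offer comes from the driver host, and the allowed locality only falls back to ANY after a locality wait that the short scheduling window never lets expire:

          import java.util.List;
          import java.util.Map;

          // Toy model of delay scheduling; illustration only, not Spark's actual logic.
          public class LocalityWaitSketch {

              enum Locality { NODE_LOCAL, ANY }

              // Pending tasks keyed by preferred host, analogous to pendingTasks.forHost.
              static final Map<String, List<Integer>> pendingForHost =
                  Map.of("hive-datanode-1", List.of(0, 1, 2));   // hypothetical host name

              // Host of the only executor actually offered (the driver in local mode).
              static final String offeredHost = "driver-host";    // hypothetical host name

              static final long localityWaitMs = 3000;  // default spark.locality.wait is 3s
              static final long levelStartMs = 0;       // when the current level began

              // Allowed locality only escalates to ANY after the wait expires.
              static Locality allowedLocality(long nowMs) {
                  return (nowMs - levelStartMs < localityWaitMs) ? Locality.NODE_LOCAL : Locality.ANY;
              }

              static boolean canLaunch(long nowMs) {
                  if (allowedLocality(nowMs) == Locality.NODE_LOCAL) {
                      // Only tasks whose preferred host matches the offered host may run.
                      return pendingForHost.containsKey(offeredHost);
                  }
                  // At ANY, every pending task is eligible.
                  return !pendingForHost.isEmpty();
              }

              public static void main(String[] args) {
                  // All scheduling attempts happen well inside the locality wait,
                  // so the level never reaches ANY and nothing is ever launched.
                  for (long nowMs = 0; nowMs <= 2000; nowMs += 500) {
                      System.out.printf("t=%dms allowed=%s launched=%b%n",
                          nowMs, allowedLocality(nowMs), canLaunch(nowMs));
                  }
              }
          }

      The sketch only models the symptom described above: as long as the allowed level stays at NODE_LOCAL and no offered executor matches the preferred host, the tasks remain pending.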


          People

            Assignee: Unassigned
            Reporter: Yiqun Zhang (Guiyankuang)
            Votes: 0
            Watchers: 2
