  Flink / FLINK-34557

When a Flink job ends in application mode, the znodes and HDFS files may not be deleted


Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 1.17.0, 1.16.2
    • Fix Version/s: None
    • Component/s: None

    Description

      We run Flink 1.16.2 and submit all jobs to YARN in application mode. In several situations the ZooKeeper znodes and some files on HDFS are not deleted after the job stops, even though they should be; the leftovers can cause resource-occupancy problems. Below are the situations I have encountered:

      1. After the job is submitted to the cluster, a conflicting or missing jar causes it to fail; YARN restarts it several times and the job finally ends in a failed state. The znodes persist, and files for the corresponding application id are left under the '/.flink' and '/flink/recovery' directories on HDFS;
      2. When the job is killed with the yarn kill command, it ends immediately with final state KILLED, and the same leftovers as in case 1 remain;
      3. When the Flink job loses its connection to ZooKeeper (the reason for the disconnection is out of scope here): each time the JobManager container is disconnected from ZooKeeper, the job fails and is relaunched by YARN; after the last disconnection the job finally ends, and the same leftovers as above remain (a sketch of how we check for these leftovers follows this list);
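
      For reference, below is a minimal check we use to confirm the residue. This is a hedged sketch, not part of Flink: the ZooKeeper quorum address, the HA root path /flink, and the HDFS directories follow our own configuration and the paths mentioned above, and may differ in other setups.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.zookeeper.ZooKeeper;

public class LeftoverCheck {
    public static void main(String[] args) throws Exception {
        String appId = args[0]; // e.g. application_1709270000000_0042 (placeholder)

        // 1) Leftover znodes: the HA root keeps one child per cluster-id (= appId in application mode).
        ZooKeeper zk = new ZooKeeper("zk-host:2181", 30000, event -> { });
        for (String child : zk.getChildren("/flink", false)) {
            if (child.contains(appId)) {
                System.out.println("znode still present: /flink/" + child);
            }
        }
        zk.close();

        // 2) Leftover HDFS files: the YARN staging dir (.flink/<appId>) and the HA storage dir.
        FileSystem fs = FileSystem.get(new Configuration());
        Path[] dirs = {
            new Path(fs.getHomeDirectory(), ".flink/" + appId),
            new Path("/flink/recovery/" + appId)
        };
        for (Path dir : dirs) {
            if (fs.exists(dir)) {
                for (FileStatus status : fs.listStatus(dir)) {
                    System.out.println("leftover file: " + status.getPath());
                }
            }
        }
    }
}
{code}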

      Addendum:
      After consulting the community and colleagues, we learned that the problem of znodes not being deleted had been raised before and was addressed by adding the closeAndCleanupAllData() method, which deletes all high-availability data when an HA cluster shuts down. In the situations described above, however, files and data are still left behind. In particular, for the yarn kill case, Flink already warns in the client log right after a successful submission that HDFS files will be left behind if the application is killed externally; I do not understand why the community kept this behavior instead of improving it. In any case, we believe znode residue should not exist: regardless of how the job ends, the znodes must be cleaned up once the job has stopped.
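
      To make the expected behavior concrete, here is a simplified sketch of the two shutdown paths. This is not Flink's actual shutdown code: the HaServices interface below is a hypothetical stand-in for Flink's HighAvailabilityServices, and only the closeAndCleanupAllData() name comes from the discussion above. It illustrates why an external kill leaves data behind: the cleanup branch is only reached when the cluster shuts itself down after the job reaches a globally terminal state.

{code:java}
/** Simplified stand-in for Flink's HighAvailabilityServices (sketch only, hypothetical names). */
interface HaServices extends AutoCloseable {
    /** Close connections but keep znodes and HDFS recovery files so a restarted JobManager can recover. */
    void close() throws Exception;
    /** Close connections and additionally delete all HA data (znodes, files under the HA storage dir). */
    void closeAndCleanupAllData() throws Exception;
}

final class ClusterShutdownSketch {
    /**
     * Called when the application-mode cluster shuts down gracefully.
     * A 'yarn application -kill', or repeated AM failures after which YARN gives up,
     * never reaches this method, so neither branch runs and the znodes / HDFS files remain.
     */
    static void shutDown(HaServices haServices, boolean jobReachedGloballyTerminalState) throws Exception {
        if (jobReachedGloballyTerminalState) {
            haServices.closeAndCleanupAllData(); // job finished/canceled/failed via Flink itself
        } else {
            haServices.close(); // keep HA data so a later JobManager attempt can recover the job
        }
    }
}
{code}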

      Attachments

        1. image-2024-03-01-15-38-48-396.png (34 kB, tanliang)
        2. image-2024-03-01-15-39-13-953.png (34 kB, tanliang)
        3. image-2024-03-01-15-39-39-524.png (15 kB, tanliang)


            People

              Assignee: Unassigned
              Reporter: tanliang (tltheshy)
              Votes: 0
              Watchers: 2

              Dates

                Created:
                Updated: