Details
- Type: Bug
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version: 1.16.0
- Fix Version: None
Description
If I set fs.default-scheme to a non-local scheme and run a job on YARN, I get this exception:
Caused by: java.io.FileNotFoundException: File does not exist: /tmp/application_1667097340114_0001-flink-conf.yaml9110057305807570477.tmp
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1128)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120)
    at org.apache.flink.yarn.YarnApplicationFileUploader.registerSingleLocalResource(YarnApplicationFileUploader.java:168)
    at org.apache.flink.yarn.YarnClusterDescriptor.startAppMaster(YarnClusterDescriptor.java:1047)
    at org.apache.flink.yarn.YarnClusterDescriptor.deployInternal(YarnClusterDescriptor.java:623)
    at org.apache.flink.yarn.YarnClusterDescriptor.deployJobCluster(YarnClusterDescriptor.java:490)
    ... 24 more
I think the cause may be the use of tmpConfigurationFile.getAbsolutePath(): it returns a scheme-less path, so the locally created temporary flink-conf.yaml ends up being resolved against the configured default scheme (e.g. HDFS) instead of the local file system.
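The suspected failure mode can be illustrated with a minimal sketch. This is not Flink's actual resolution code; resolveAgainstDefaultScheme is a hypothetical stand-in for how a configured fs.default-scheme is applied to a path string that carries no scheme of its own:

```java
import java.io.File;
import java.net.URI;

public class DefaultSchemeDemo {

    // Hypothetical stand-in: a scheme-less path string is resolved against
    // the configured default scheme; a path that already has a scheme is
    // left untouched.
    static URI resolveAgainstDefaultScheme(String path, String defaultScheme) {
        URI uri = URI.create(path);
        if (uri.getScheme() == null) {
            return URI.create(defaultScheme + "://" + path);
        }
        return uri;
    }

    public static void main(String[] args) {
        File tmpConf = new File("/tmp/application_xxx-flink-conf.yaml.tmp");

        // getAbsolutePath() yields a bare "/tmp/..." string, so with
        // fs.default-scheme set to hdfs the file is looked up on HDFS,
        // where it was never written -> FileNotFoundException.
        URI remote = resolveAgainstDefaultScheme(tmpConf.getAbsolutePath(), "hdfs");
        System.out.println(remote.getScheme()); // hdfs

        // toURI() keeps the explicit "file" scheme, so the default scheme
        // no longer redirects the lookup away from the local disk.
        URI local = resolveAgainstDefaultScheme(tmpConf.toURI().toString(), "hdfs");
        System.out.println(local.getScheme()); // file
    }
}
```

This suggests the fix is to register the temporary file with an explicit local-scheme URI rather than a bare absolute path.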
Attachments
Issue Links
- duplicates
  - FLINK-33424 Resolved an issue in YarnClusterDescriptor where temporary files created locally by flink-conf.yaml are treated as remote files (Closed)
- fixes
  - FLINK-33424 Resolved an issue in YarnClusterDescriptor where temporary files created locally by flink-conf.yaml are treated as remote files (Closed)
- relates to
  - FLINK-33472 Solve the problem that the temporary file of flink-conf.yaml in S3AFileSystem cannot be uploaded (Open)