Details
- Type: Bug
- Status: Open
- Priority: Critical
- Resolution: Unresolved
- Affects Version/s: 2.4.0
- Fix Version/s: None
- Component/s: None
- Environment: AWS Glue
Description
We are integrating Spark 2.4 with our AWS Glue ETL jobs.
Recently we noticed that many of our jobs fail with the error below:
{{Exception in User Class: java.lang.RuntimeException : Caught Hive MetaException attempting to get partition metadata by filter from Hive. You can set the Spark configuration setting spark.sql.hive.manageFilesourcePartitions to false to work around this problem, however this will result in degraded performance. Please report a bug: https://issues.apache.org/jira/browse/SPARK}}
This error first appeared on Aug 30th and occurs intermittently: it goes away for several hours and then comes back. While the error is occurring, most of our jobs fail and only a few succeed.
We tried setting spark.sql.hive.manageFilesourcePartitions to false, but that did not resolve the problem; a different issue appeared instead.
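For reference, this is roughly how we applied the suggested workaround (a sketch; the exact spark-submit invocation and script name are placeholders, and in a Glue job the conf may instead be passed via the job's --conf job parameter):

```shell
# Disable Hive filesource partition management, as suggested by the error
# message. Note: the Spark docs warn this degrades performance for tables
# with many partitions.
spark-submit \
  --conf spark.sql.hive.manageFilesourcePartitions=false \
  our_etl_job.py   # placeholder job script
```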
Can you look into this error and let me know if there is any workaround to mitigate the issue?
Let me know if you need anything from my end.