Description
Currently, AWS S3 throttling is initially handled inside the AWS SDK, and only reaches the S3A client code after the SDK has given up retrying.
This means we do not always directly observe when throttling is taking place.
Proposed:
- disable throttling retries in the AWS client library
- add a quantile metric for S3 throttle events, matching the one DynamoDB already has
- keep separate counters of S3 and DynamoDB throttle events, so that problems can be classified more easily
Because we are taking over the AWS retries, we will need to increase the initial retry delay and the number of retries we support before giving up.
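As a sketch, the change could be expressed through the existing S3A retry options (names as in core-default.xml; the values below are purely illustrative, not recommendations):

```xml
<!-- Sketch only: hand throttling retries over from the AWS SDK to S3A. -->
<property>
  <name>fs.s3a.attempts.maximum</name>
  <value>1</value>
  <description>Reduce retries inside the AWS SDK so throttling
    surfaces directly to the S3A client code.</description>
</property>
<property>
  <name>fs.s3a.retry.throttle.limit</name>
  <value>20</value>
  <description>Number of S3A-level retries on throttled requests
    before giving up (expanded, since the SDK no longer retries).</description>
</property>
<property>
  <name>fs.s3a.retry.throttle.interval</name>
  <value>500ms</value>
  <description>Initial delay between throttle retries; grows with backoff.</description>
</property>
```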
Also: should we log throttling events? It could be useful, but there is a risk of flooding the logs, especially if many threads in the same process are triggering the problem.
Proposed: log at debug.
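The retry-with-backoff-and-debug-logging behaviour proposed above can be sketched as follows. This is a hypothetical standalone illustration, not the S3A implementation: the class, constants, and counter are invented for the example, and the delay is shortened for readability.

```java
import java.util.concurrent.TimeUnit;
import java.util.function.IntPredicate;
import java.util.logging.Level;
import java.util.logging.Logger;

/** Hypothetical sketch: client-side throttle retries with debug-level logging. */
public class ThrottleRetrySketch {
    private static final Logger LOG = Logger.getLogger("s3a.throttle.sketch");

    // Illustrative values; real defaults would come from the
    // fs.s3a.retry.throttle.* settings and be substantially larger.
    static final int MAX_RETRIES = 10;
    static final long INITIAL_DELAY_MS = 10;

    /** Counter of S3 throttle events, kept separate from any DynamoDB counter. */
    static long throttleEvents = 0;

    /**
     * Run an operation, retrying with exponential backoff while it is throttled.
     * @param attemptSucceeds given the attempt number, true if the call succeeded
     * @return the number of attempts used
     */
    static int retryOnThrottle(IntPredicate attemptSucceeds)
            throws InterruptedException {
        long delay = INITIAL_DELAY_MS;
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            if (attemptSucceeds.test(attempt)) {
                return attempt;
            }
            throttleEvents++;
            // Log at debug (FINE) so many concurrently-throttled threads
            // do not flood the logs at info level.
            LOG.log(Level.FINE, "S3 throttle event #" + throttleEvents
                    + "; retrying in " + delay + " ms");
            TimeUnit.MILLISECONDS.sleep(delay);
            delay *= 2;  // exponential backoff
        }
        throw new IllegalStateException("Retries exhausted while throttled");
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate an operation throttled twice, succeeding on the third attempt.
        int attempts = retryOnThrottle(a -> a >= 3);
        System.out.println("succeeded after " + attempts + " attempts");
    }
}
```

A quantile metric for throttle events would be updated at the same point the counter is incremented, recording the retry delay or event rate per interval.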
Note: if S3 bucket logging is enabled, throttling events will be recorded as 503 responses in the server logs. If the Hadoop version includes the audit logging of HADOOP-17511, this can be used to identify the operations/jobs/users which are triggering the problems.
Issue Links
- is related to
  - HADOOP-13811 s3a: getFileStatus fails with com.amazonaws.AmazonClientException: Failed to sanitize XML document destined for handler class (Resolved)
  - HADOOP-17935 Spark job stuck in S3A StagingCommitter::setupJob (Resolved)