Details
Type: Bug
Status: Open
Priority: Major
Resolution: Unresolved
Affects Version/s: 1.9.2, 2.0.0
Fix Version/s: None
Description
TL;DR: increase the default back-off delay to 500ms and the retry limit to 6.
In Apache Brooklyn (which uses jclouds), we hit "Request limit exceeded" errors when provisioning VMs in aws-ec2 [1]. We were provisioning multiple machines concurrently: different threads were independently calling createNodesInGroup, as in the sketch below. The default exponential back-off and retry within jclouds wasn't enough.
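For illustration, a minimal sketch of that pattern (the class, group names and counts here are hypothetical, not Brooklyn's actual code):
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.jclouds.compute.ComputeService;

/** Sketch of the provisioning pattern that hit the limit (hypothetical names). */
public class ConcurrentProvisioningSketch {
   static void provisionConcurrently(ComputeService compute) {
      ExecutorService pool = Executors.newFixedThreadPool(20);
      for (int i = 0; i < 20; i++) {
         final String group = "brooklyn-demo-" + i; // hypothetical group name
         // Each task issues its own EC2 calls (create security group,
         // RunInstances, state polling, ...), so the per-account request
         // rate spikes when many tasks run at once.
         pool.submit(() -> compute.createNodesInGroup(group, 1));
      }
      pool.shutdown();
   }
}
{code}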
My understanding is that AWS will rate-limit based on the nature (as well as number) of API calls. For example, if creating/modifying security groups is a more expensive operation (from AWS's perspective) than a simple poll for a machine's state, then those requests would cause rate-limiting sooner.
Within jclouds, the defaults are retryCountLimit = 5 and delayStart = 50ms (see [2]).
This means the retries back off for (approximately) 50ms, 100ms, 200ms, 400ms and 500ms, i.e. only about 1.25 seconds in total before giving up.
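A minimal sketch that reproduces that schedule, assuming a doubling delay capped at 10x delayStart (the exact computation is in [2] and may differ in detail):
{code:java}
// Approximate reconstruction of the schedule above: double the delay on
// each failure, capped at 10x delayStart. See [2] for the real computation.
static long backoffMillis(int failureCount, long delayStartMillis) {
   long delay = delayStartMillis << (failureCount - 1); // 50, 100, 200, 400, 800...
   return Math.min(delay, delayStartMillis * 10);        // ...capped at 500
}
// backoffMillis(1..5, 50) -> 50, 100, 200, 400, 500 (about 1.25s in total)
{code}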
We overrode the defaults to be 500ms and 6 retries, and could then successfully provision 20 VMs concurrently. Six of the 20 calls to RunInstances were rate-limited; it took several retries before those requests were accepted, backing off for more than 4 seconds in some cases.
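For reference, such an override can be applied through the standard jclouds property constants when building the context (a sketch; credentials are placeholders):
{code:java}
import java.util.Properties;

import org.jclouds.Constants;
import org.jclouds.ContextBuilder;
import org.jclouds.compute.ComputeService;
import org.jclouds.compute.ComputeServiceContext;

public class RetryOverrideSketch {
   public static void main(String[] args) {
      Properties overrides = new Properties();
      overrides.setProperty(Constants.PROPERTY_MAX_RETRIES, "6");         // default 5
      overrides.setProperty(Constants.PROPERTY_RETRY_DELAY_START, "500"); // default 50 (ms)

      ComputeService compute = ContextBuilder.newBuilder("aws-ec2")
            .credentials("identity", "credential") // placeholders
            .overrides(overrides)
            .buildView(ComputeServiceContext.class)
            .getComputeService();
      // ... provision as before; retryable failures now back off more patiently.
   }
}
{code}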
At worst, the existing short back-off may make things worse (the overly aggressive retry might cause other concurrent calls to also be rate-limited).
At best, the short back-off simply isn't long enough, so that particular VM provisioning fails. For example, if AWS uses a leaky bucket algorithm [3] then hopefully some requests would keep getting through. But I believe AWS doesn't publicise such details of its algorithm/implementation.
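To make that reasoning concrete, a purely illustrative leaky-bucket check; the parameters and shape here are guesses, since AWS doesn't publish its implementation:
{code:java}
// Illustrative leaky bucket [3]. The point: the bucket drains at a constant
// rate, so sufficiently spaced retries eventually get through even while a
// concurrent burst keeps the bucket near capacity.
class LeakyBucket {
   private final double capacity;
   private final double drainPerMilli;
   private double level;
   private long lastMillis = System.currentTimeMillis();

   LeakyBucket(double capacity, double drainPerSecond) {
      this.capacity = capacity;
      this.drainPerMilli = drainPerSecond / 1000.0;
   }

   synchronized boolean tryAcquire() {
      long now = System.currentTimeMillis();
      level = Math.max(0.0, level - (now - lastMillis) * drainPerMilli);
      lastMillis = now;
      if (level + 1 > capacity) {
         return false; // caller sees "Request limit exceeded"
      }
      level += 1;
      return true;
   }
}
{code}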
[1] https://issues.apache.org/jira/browse/BROOKLYN-394
[2] https://github.com/jclouds/jclouds/blob/rel/jclouds-2.0.0/core/src/main/java/org/jclouds/http/handlers/BackoffLimitedRetryHandler.java#L81-L87
[3] http://en.wikipedia.org/wiki/Leaky_bucket