(copied from dev thread)
| +1 | hadoopcheck | 52m 1s | Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. |
Almost 1 hour to check against 10 versions. And it's only going to increase as more 2.6.x, 2.7.x, and 3.0.x releases come out.
The suggestion here is simple: check against only the latest maintenance release for each minor version, i.e. 2.6.5, 2.7.4, and 3.0.0-alpha4.
Advantage: saves ~40 min of pre-commit time.
- We only do compile checks, and maintenance releases are not supposed to make API-breaking changes, so checking against the latest maintenance release of each minor version should be enough.
- We rarely see a hadoopcheck -1, and most recent ones have been due to 3.0. Those would still be caught.
- Nightly builds can still check against all Hadoop versions (since nightlies are supposed to do holistic testing).
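The selection rule proposed above (keep only the latest maintenance release per minor line) can be sketched as a small helper. This is purely illustrative: the function name and version list are assumptions, not part of the actual precommit tooling, where the list would more likely just be hard-coded.

```python
# Illustrative sketch: reduce a full Hadoop version list to the latest
# maintenance release of each minor line (e.g. 2.6.x -> 2.6.5).
from collections import defaultdict

def latest_per_minor(versions):
    """Group versions by (major, minor) and keep the highest release in each group."""
    def key(v):
        # Split e.g. "3.0.0-alpha4" into numeric parts plus an optional tag.
        base, _, tag = v.partition("-")
        return [int(p) for p in base.split(".")], tag

    groups = defaultdict(list)
    for v in versions:
        parts, _ = key(v)
        groups[tuple(parts[:2])].append(v)  # bucket by (major, minor)
    return sorted(max(g, key=key) for g in groups.values())

ALL = ["2.6.1", "2.6.2", "2.6.3", "2.6.4", "2.6.5",
       "2.7.1", "2.7.2", "2.7.3", "2.7.4", "3.0.0-alpha4"]
print(latest_per_minor(ALL))  # ['2.6.5', '2.7.4', '3.0.0-alpha4']
```

With the same compile-check coverage per minor line, this cuts the matrix from 10 builds to 3.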
- Analyzing 201 precommit runs from build 10100 (11/29) to 10300 (12/8), a 10-day window:
138 had +1 hadoopcheck
15 had -1 hadoopcheck
(the others probably failed even before that stage: merge issues, etc.)
Spot-checking some failures (10241, 10246, 10225, 10269, 10151, 10156, 10184, 10250, 10298, 10227, 10294, 10223, 10251, 10119, 10230):
10241: All 2.6.x failed. Others didn't run.
10246: All 10 versions failed.
10184: All 2.6.x and 2.7.x failed. Others didn't run.
10223: All 10 versions failed.
10230: All 2.6.x failed. Others didn't run.
The common pattern: all maintenance releases within a minor line fail together.
(I don't know why 2.7.* results sometimes aren't reported when 2.6.* fails, but that's irrelevant to this discussion.)
What do you say: check only the latest maintenance releases in precommit, and let nightlies do holistic testing against all versions?