This is a follow-up to https://issues.apache.org/jira/browse/SPARK-23243
To completely fix that problem, Spark needs to be able to roll back a shuffle map stage and rerun all of its map tasks.
According to https://github.com/apache/spark/pull/9214 , Spark doesn't support this today: shuffle writing follows a "first write wins" policy, so rerun map tasks cannot replace shuffle files that were already written.
Since overwriting shuffle files is hard, we can instead extend the shuffle id to include a "shuffle generation number". A reduce task can then specify which generation of the shuffle output it wants to read, and output from a rolled-back generation is simply never served. https://github.com/apache/spark/pull/6648 seems to be a step in the right direction.
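The idea can be sketched with a toy map-output registry keyed by (shuffle id, generation). This is a hypothetical illustration only; `ShuffleKey`, `MapOutputRegistry`, `register`, and `fetch` are invented names and do not correspond to Spark's actual classes:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

public class ShuffleGenerationDemo {
    // Hypothetical key: bumping the generation on stage rollback makes the
    // old map output invisible instead of requiring it to be overwritten.
    record ShuffleKey(int shuffleId, int generation) {}

    static class MapOutputRegistry {
        private final Map<ShuffleKey, List<String>> outputs = new HashMap<>();

        // Map tasks register their output under the current generation.
        void register(ShuffleKey key, List<String> blocks) {
            outputs.put(key, blocks);
        }

        // A reduce task requests an exact generation; stale generations miss.
        Optional<List<String>> fetch(ShuffleKey key) {
            return Optional.ofNullable(outputs.get(key));
        }
    }

    public static void main(String[] args) {
        MapOutputRegistry registry = new MapOutputRegistry();
        // Generation 0: the original map output for shuffle 0.
        registry.register(new ShuffleKey(0, 0), List.of("block-gen0"));
        // The stage is rolled back: all map tasks rerun under generation 1.
        registry.register(new ShuffleKey(0, 1), List.of("block-gen1"));

        // Reducers pinned to generation 1 never see generation-0 files.
        System.out.println(registry.fetch(new ShuffleKey(0, 1)).orElseThrow());
    }
}
```

The key point is that a rollback never needs to delete or overwrite existing shuffle files; the scheduler just hands reduce tasks the new generation number.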