Hadoop Map/Reduce
MAPREDUCE-5308

Shuffling to memory can get out-of-sync when fetching multiple compressed map outputs


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.0.3-alpha, 0.23.8
    • Fix Version/s: 2.1.0-beta, 0.23.9
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      When a reducer fetches multiple compressed map outputs from a single host, the fetcher can get out of sync with the IFileInputStream, causing several of the map outputs to fail to fetch.

      This happens because a decompressor can return all of the decompressed bytes before it has actually consumed all of the bytes in the compressed stream (checksums or other trailing data that we ignore). In the unfortunate case where these extra bytes cross an io.file.buffer.size boundary, some of them are left over in the stream and the next map_output does not fetch correctly (usually failing with an invalid map_id).

      This is not typically fatal to the job, because the failure is charged to the map_output immediately following the "bad" one and the subsequent retry normally succeeds.
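
      To see why the mismatch matters, here is a minimal, self-contained sketch using plain java.util.zip (not the actual Fetcher/IFileInputStream code; the class name and the 4-byte trailer are invented for illustration). The decompressor reports that it is finished as soon as the deflate stream ends, while trailing bytes it never consumed are still sitting in the input buffer; a reader that advances the underlying stream by decompressed output rather than by compressed bytes consumed leaves those bytes behind, and they get misread as the start of the next segment.

{code:java}
import java.util.Arrays;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class ShuffleDesyncDemo {
    public static void main(String[] args) throws Exception {
        // Compress one "map output" payload as a plain deflate stream.
        byte[] payload = "map output bytes".getBytes("UTF-8");
        Deflater deflater = new Deflater();
        deflater.setInput(payload);
        deflater.finish();
        byte[] compressed = new byte[1024];
        int clen = deflater.deflate(compressed);
        deflater.end();

        // Simulated on-the-wire segment: compressed data followed by a
        // 4-byte trailer (standing in for a checksum or other trailing
        // data) that the decompressor itself never consumes.
        byte[] trailer = {(byte) 0xCA, (byte) 0xFE, (byte) 0xBA, (byte) 0xBE};
        byte[] segment = Arrays.copyOf(compressed, clen + trailer.length);
        System.arraycopy(trailer, 0, segment, clen, trailer.length);

        // The Inflater is "finished" as soon as the deflate stream ends,
        // even though the 4 trailer bytes remain unread in its input.
        Inflater inflater = new Inflater();
        inflater.setInput(segment);
        byte[] out = new byte[1024];
        int dlen = inflater.inflate(out);
        System.out.println("decompressed : " + new String(out, 0, dlen, "UTF-8"));
        System.out.println("finished     : " + inflater.finished());
        System.out.println("unread input : " + inflater.getRemaining() + " bytes");
        inflater.end();

        // If the stream is advanced only as far as the decompressor
        // consumed, these leftover bytes are read as the start of the
        // next segment's header, i.e. an invalid map_id.
        byte[] misread = Arrays.copyOfRange(segment, clen, segment.length);
        System.out.println("misread as next header: " + Arrays.toString(misread));
    }
}
{code}

      As the description notes, the leftover bytes only cause trouble when they cross an io.file.buffer.size boundary, which is why most fetches still succeed.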

    Attachments

        1. MAPREDUCE-5308-branch-0.23.txt (6 kB, Nathan Roberts)
        2. MAPREDUCE-5308.patch (6 kB, Nathan Roberts)


    People

        Assignee: Nathan Roberts (nroberts)
        Reporter: Nathan Roberts (nroberts)
        Votes: 0
        Watchers: 9
