Hadoop Common / HADOOP-18477 Über-jira: S3A Hadoop 3.3.9 features / HADOOP-15224

Build up MD5 checksum as blocks are built in S3ABlockOutputStream; validate upload


Details

    • Type: Sub-task
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: 3.0.0
    • Fix Version/s: None
    • Component/s: fs/s3
    • Labels: None

    Description

      rdblue reports that he sometimes sees corrupt data on S3. Given the MD5 checks on upload to S3, the corruption is more likely to have happened in VM RAM, on the local HDD, or somewhere nearby.

      If the MD5 checksum for each block were built up as data was written to it, and then checked against the ETag returned by S3, the RAM/HDD storage of the buffered blocks could be ruled out as a source of corruption.

      The obvious place to do this would be org.apache.hadoop.fs.s3a.S3ADataBlocks.DataBlock.
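
      A minimal sketch of the idea follows; it is not the actual S3ADataBlocks.DataBlock API, and the class and method names are illustrative. It shows a block accumulating an MD5 digest as bytes are written, then comparing it with the ETag returned for a simple, non-multipart PUT, where the ETag is the hex-encoded MD5 of the object body.

      {code:java}
      import java.security.MessageDigest;
      import java.security.NoSuchAlgorithmException;

      // Illustrative only: not the real DataBlock class.
      public class Md5TrackingBlock {

        private final MessageDigest digest;
        private long bytesWritten;

        public Md5TrackingBlock() throws NoSuchAlgorithmException {
          this.digest = MessageDigest.getInstance("MD5");
        }

        /** Update the running digest as data is buffered into the block. */
        public void write(byte[] buffer, int offset, int length) {
          digest.update(buffer, offset, length);
          bytesWritten += length;
        }

        /** Hex-encoded MD5 of everything written (digest() finalizes and resets). */
        public String hexMd5() {
          StringBuilder sb = new StringBuilder(32);
          for (byte b : digest.digest()) {
            sb.append(String.format("%02x", b & 0xff));
          }
          return sb.toString();
        }

        /** Compare the locally computed MD5 against the ETag S3 returned. */
        public void validate(String etag) {
          String actual = etag.replace("\"", "");   // ETags are usually quoted
          if (actual.contains("-")) {
            return;   // multipart ETag: not a plain MD5, needs different handling
          }
          String expected = hexMd5();
          if (!expected.equals(actual)) {
            throw new IllegalStateException("Block of " + bytesWritten
                + " bytes corrupted locally or in transit: expected MD5 "
                + expected + " but S3 returned ETag " + actual);
          }
        }
      }
      {code}

      Multipart uploads would need per-part digests, since a multipart ETag is the MD5 of the concatenated part MD5s plus a "-N" part-count suffix rather than the MD5 of the whole object.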


            People

              Assignee: Unassigned
              Reporter: Steve Loughran (stevel@apache.org)
              Votes: 0
              Watchers: 7

              Dates

                Created:
                Updated: