Hadoop Map/Reduce
MAPREDUCE-7403

Support spark dynamic partitioning in the Manifest Committer



    Description

      Currently the Spark integration with PathOutputCommitters rejects attempts to instantiate them if dynamic partitioning is enabled. That is because the Spark partitioning code assumes that

      1. file rename works as a fast and safe commit algorithm
      2. the working directory is in the same filesystem as the final directory

      Assumption 1 doesn't hold on s3a, and assumption 2 isn't true for the staging committers.

      The new abfs/gcs manifest committer and its target stores do meet both requirements, so we no longer need to reject the operation, provided the Spark-side binding code can identify when all is good.

      Proposed: add a new hasCapability() probe: if a committer implements StreamCapabilities, callers can use it to check whether the committer supports dynamic partitioning. ManifestCommitter will declare that it does. As the StreamCapabilities API has existed since Hadoop 2.10, the probe will be immediately available.
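      A probe along these lines might look like the sketch below. The capability string, the helper method, and the stand-in types are all hypothetical: a minimal local interface is used in place of org.apache.hadoop.fs.StreamCapabilities, and an empty class in place of the real OutputCommitter, purely to keep the sketch self-contained.

```java
// Minimal stand-in for org.apache.hadoop.fs.StreamCapabilities,
// included only to keep this sketch self-contained.
interface StreamCapabilities {
  boolean hasCapability(String capability);
}

// Stand-in for the real Hadoop OutputCommitter class hierarchy.
abstract class OutputCommitter {
}

public class CapabilityProbe {
  // Hypothetical capability string; the actual constant would be
  // defined alongside the ManifestCommitter implementation.
  static final String DYNAMIC_PARTITIONING =
      "mapreduce.job.committer.dynamic.partitioning";

  /**
   * True iff the committer implements StreamCapabilities and
   * declares dynamic-partitioning support.
   */
  static boolean supportsDynamicPartitioning(OutputCommitter committer) {
    return committer instanceof StreamCapabilities
        && ((StreamCapabilities) committer).hasCapability(DYNAMIC_PARTITIONING);
  }
}
```

      Because the probe degrades to false for any committer that does not implement StreamCapabilities, older committers need no changes at all.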

      Spark's PathOutputCommitProtocol will query the committer in setupCommitter, and fail if dynamicPartitionOverwrite is requested but the capability is not available.

      BindingParquetOutputCommitter will implement StreamCapabilities and forward hasCapability() to the wrapped committer.
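      The forwarding pattern could be sketched as follows; the class and field names here are illustrative stand-ins, not the real BindingParquetOutputCommitter code, and the same minimal local interface substitutes for org.apache.hadoop.fs.StreamCapabilities.

```java
// Minimal stand-in for org.apache.hadoop.fs.StreamCapabilities.
interface StreamCapabilities {
  boolean hasCapability(String capability);
}

// Stand-in for a committer that declares the (hypothetical)
// dynamic-partitioning capability string.
class ManifestCommitterStub implements StreamCapabilities {
  public boolean hasCapability(String capability) {
    return "mapreduce.job.committer.dynamic.partitioning".equals(capability);
  }
}

// Illustrative sketch of the forwarding pattern: the wrapping
// committer answers hasCapability() by delegating to the committer
// it wraps, rather than answering on its own behalf.
class ForwardingCommitterSketch implements StreamCapabilities {
  private final Object inner; // the wrapped committer

  ForwardingCommitterSketch(Object inner) {
    this.inner = inner;
  }

  @Override
  public boolean hasCapability(String capability) {
    return inner instanceof StreamCapabilities
        && ((StreamCapabilities) inner).hasCapability(capability);
  }
}
```

      Forwarding means the wrapper stays truthful: it claims exactly the capabilities of whatever committer it wraps, so the Spark-side probe works unchanged through the Parquet binding.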

            People

              stevel@apache.org Steve Loughran
              Votes: 0
              Watchers: 4
