BEAM-14540: Native implementation for serialized Rows to/from Arrow

Details

    • Type: Improvement
    • Status: Open
    • Priority: P2
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: sdk-py-core
    • Labels: None

    Description

      With https://s.apache.org/batched-dofns (BEAM-14213), we want to encourage users to develop pipelines that process Arrow data within the Python SDK, but communicating batches of data across SDKs, or from SDK to Runner, is left as future work. Arrow data processed in the SDK must therefore be converted to/from Rows for transmission over the Fn API, so the ideal Python execution path currently looks like this:

      1. Read row-oriented data over the Fn API and deserialize it with SchemaCoder.
      2. Buffer the rows and construct an Arrow RecordBatch/Table object.
      3. Perform the user's computation(s).
      4. Explode the output RecordBatch/Table back into rows.
      5. Serialize the rows with SchemaCoder and write them out over the Fn API.

      Note that steps (1,2) and (4,5) will exist in every stage of the user's pipeline, and they will also exist when Python transforms (e.g. the dataframe read_csv) are used from other SDKs. We should improve performance on this hot path by providing a native (Cythonized) implementation of (1,2) and (4,5); a rough sketch of the conversions in question follows.
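
      A rough pure-Python sketch of what steps (2) and (4) amount to, assuming a recent pyarrow (RecordBatch.to_pylist needs pyarrow >= 7): rows_to_batch and batch_to_rows are illustrative names rather than Beam APIs, and the SchemaCoder (de)serialization of steps (1) and (5) is omitted.

          import pyarrow as pa

          def rows_to_batch(rows, schema):
              # Step 2: buffer row-oriented dicts and pivot them into columns.
              columns = {name: [row[name] for row in rows] for name in schema.names}
              return pa.RecordBatch.from_pydict(columns, schema=schema)

          def batch_to_rows(batch):
              # Step 4: explode the columnar batch back into row-oriented dicts.
              return batch.to_pylist()

          schema = pa.schema([('id', pa.int64()), ('name', pa.string())])
          rows = [{'id': 1, 'name': 'a'}, {'id': 2, 'name': 'b'}]
          assert batch_to_rows(rows_to_batch(rows, schema)) == rows

      Every element crosses the Python object layer twice in this round trip, which is exactly the per-element overhead a native implementation of (1,2) and (4,5) would avoid.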


    People

      Assignee: Unassigned
      Reporter: Brian Hulette (bhulette)
      Votes: 0
      Watchers: 2

    Dates

      Created:
      Updated: