Apache Arrow / ARROW-15724

[C++] Reduce directory and file IO when reading partition parquet dataset with partition key filters


Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Component: C++

    Description

      Hi,
      It seems that Arrow accesses all partition directories (and even each parquet file), including those that clearly do not match the partition key values in the filter criteria. This can make a filtered read many times slower than accessing one partition directly,
      especially on a network file system, and on a local file system when there are lots of partitions, e.g. 1/10th of a second vs. several seconds.

      Attached is some Python code that creates an example dataframe, saves parquet datasets with different hive partition structures (/y=/m=/d=, /y=/m=, or /dk=), and reads the datasets with and without filters to reproduce the issue. Observe the run time, and the directories and files accessed by the process in Process Monitor on Windows.

      With all three partition structures, I saw in Process Monitor that all directories are accessed regardless of whether use_legacy_dataset=True or False.
      When use_legacy_dataset=False, the parquet files in all directories were opened and closed.
      The argument validate_schema=False made a small time difference, but the partition directories were still opened; it is also only supported when use_legacy_dataset=True, and is not supported or passed through by the pandas read_parquet wrapper API.

      The /y=/m= structure is faster because there is no daily partition, so there are fewer directories and files.

      There was a related Stack Overflow question with an example: https://stackoverflow.com/questions/66339381/pyarrow-read-single-file-from-partitioned-parquet-dataset-is-unexpectedly-slow
      which included a comment on partition discovery:

      It should get discovered automatically. pd.read_parquet calls pyarrow.parquet.read_table and the default partitioning behavior should be to discover hive-style partitions (i.e. the ones you have). The fact that you have to specify this means that discovery is failing. If you could create a reproducible example and submit it to Arrow JIRA it would be helpful. 
      – Pace, Feb 24 2021 at 18:55

      I wonder if there is already a related Jira issue for this.
      I tried passing in the partitioning argument, but it didn't help.
      The pyarrow versions used were 1.0.1, 5.0, and 7.0.

      Attachments

        1. pq.py
          5 kB
          Yin


            People

              Assignee: Unassigned
              Reporter: Yin (zyd02)
              Votes: 0
              Watchers: 3
