Spark / SPARK-14141

Let user specify datatypes of pandas dataframe in toPandas()


Details

    • Type: New Feature
    • Status: Resolved
    • Priority: Minor
    • Resolution: Incomplete
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: Input/Output, PySpark, SQL

    Description

      It would be nice to be able to specify the dtypes of the resulting pandas DataFrame during the toPandas() call. Something like:

      pdf = df.toPandas(dtypes={'a': 'float64', 'b': 'datetime64', 'c': 'bool', 'd': 'category'})

      Since dtypes like `category` are more memory-efficient, this option could let you load many more rows into a pandas DataFrame without running out of memory.
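
      Until such an option exists, a possible workaround is to cast the columns after the conversion. A minimal sketch, assuming an existing SparkSession named `spark` and made-up example data for columns a-d; note that the casts run only after toPandas() has materialized the full DataFrame, so the peak memory used during the conversion itself is unchanged:

      import pandas as pd
      from pyspark.sql import SparkSession

      spark = SparkSession.builder.getOrCreate()

      # Hypothetical example data using the column names from the proposal above.
      df = spark.createDataFrame(
          [(1.0, "2016-03-24", 1, "x"), (2.0, "2016-03-25", 0, "y")],
          ["a", "b", "c", "d"],
      )

      # Convert first, then cast each column to the desired dtype.
      pdf = df.toPandas()
      pdf["a"] = pdf["a"].astype("float64")
      pdf["b"] = pd.to_datetime(pdf["b"])     # datetime64[ns]
      pdf["c"] = pdf["c"].astype("bool")
      pdf["d"] = pdf["d"].astype("category")  # memory-efficient for repeated values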


            People

              Assignee: Unassigned
              Reporter: Luke Miner (lminer)
              Votes: 0
              Watchers: 6
