[SPARK-28977] JDBC Dataframe Reader Doc Doesn't Match JDBC Data Source Page


Details

    • Type: Documentation
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.4.3
    • Fix Version/s: 2.4.5, 3.0.0
    • Component/s: Documentation
    • Labels: None

    Description

      https://spark.apache.org/docs/2.4.3/sql-data-sources-jdbc.html

      Specifically, in the partitionColumn section, this page says:

      "partitionColumn must be a numeric, date, or timestamp column from the table in question."

       

      But the Scaladoc for DataFrameReader (https://spark.apache.org/docs/2.4.3/api/scala/index.html#org.apache.spark.sql.DataFrameReader), for

      def jdbc(url: String, table: String, columnName: String, lowerBound: Long, upperBound: Long, numPartitions: Int, connectionProperties: Properties): DataFrame

      describes columnName as:

      "the name of a column of integral type that will be used for partitioning."
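
      For comparison, a minimal sketch of the jdbc() overload quoted above (again with made-up connection details and names). Because lowerBound and upperBound are Longs in this signature, only a numeric partition column fits, which is presumably where the "integral type" wording comes from:

      import java.util.Properties

      val props = new Properties()
      props.setProperty("user", "reader")       // hypothetical credentials
      props.setProperty("password", "secret")

      // jdbc() overload with Long bounds: the partition column must be numeric here.
      // Reuses the `spark` session from the sketch above.
      val byId = spark.read.jdbc(
        "jdbc:postgresql://db-host:5432/sales", // hypothetical URL
        "orders",                               // hypothetical table
        "order_id",                             // hypothetical integral column
        0L,                                     // lowerBound
        1000000L,                               // upperBound
        8,                                      // numPartitions
        props)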

       

      This discrepancy appears to go back quite far, at least to 1.6.3, but I'm not sure when this wording was accurate.


People

    Assignee: srowen (Sean R. Owen)
    Reporter: chrisfish (Christopher Hoshino-Fish)
    Votes: 0
    Watchers: 1

Dates

    Created:
    Updated:
    Resolved: