Details
- Type: Documentation
- Status: Resolved
- Priority: Minor
- Resolution: Fixed
- Affects Version: 2.4.3
- Fix Version: None
Description
The JDBC data sources guide (https://spark.apache.org/docs/2.4.3/sql-data-sources-jdbc.html) says, in the partitionColumn section:
"partitionColumn must be a numeric, date, or timestamp column from the table in question."
But the scaladoc for DataFrameReader (https://spark.apache.org/docs/2.4.3/api/scala/index.html#org.apache.spark.sql.DataFrameReader), for the overload
def jdbc(url: String, table: String, columnName: String, lowerBound: Long, upperBound: Long, numPartitions: Int, connectionProperties: Properties): DataFrame
describes columnName as:
"the name of a column of integral type that will be used for partitioning."
These two descriptions disagree. The "integral type" wording appears to go back at least as far as 1.6.3, but I'm not sure when, if ever, it was accurate.
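For context, the two API entry points really do have different constraints, which may be the source of the confusion. A minimal sketch (the JDBC URL, table, and column names below are hypothetical placeholders):

```scala
import java.util.Properties
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("jdbc-partition-demo")
  .master("local[*]")
  .getOrCreate()

// Option-based API: per the SQL data sources guide, partitionColumn may be a
// numeric, date, or timestamp column (date/timestamp support arrived in 2.4).
val byOptions = spark.read
  .format("jdbc")
  .option("url", "jdbc:postgresql://localhost/testdb") // hypothetical URL
  .option("dbtable", "events")                         // hypothetical table
  .option("partitionColumn", "created_at")             // a timestamp column
  .option("lowerBound", "2019-01-01 00:00:00")
  .option("upperBound", "2019-12-31 23:59:59")
  .option("numPartitions", "8")
  .load()

// Overload-based API: lowerBound/upperBound are typed as Long, so this entry
// point can only partition on an integral column, matching the scaladoc.
val props = new Properties()
val byOverload = spark.read.jdbc(
  "jdbc:postgresql://localhost/testdb", // hypothetical URL
  "events",
  "id",        // must be an integral column here
  0L, 1000000L, 8, props)
```

So the guide's wording is correct for the option-based reader, while the scaladoc's "integral type" wording is correct for this particular `jdbc(...)` overload; the docs just don't make that distinction explicit.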