SPARK-25714: Null Handling in the Optimizer rule BooleanSimplification


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 1.6.3, 2.0.2, 2.1.3, 2.2.2, 2.3.2, 2.4.0
    • Fix Version/s: 2.2.3, 2.3.3, 2.4.0
    • Component/s: SQL

    Description

      scala> val df = Seq(("abc", 1), (null, 3)).toDF("col1", "col2")
      df: org.apache.spark.sql.DataFrame = [col1: string, col2: int]
      
      scala> df.write.mode("overwrite").parquet("/tmp/test1")
                                                                                      
      scala> val df2 = spark.read.parquet("/tmp/test1");
      df2: org.apache.spark.sql.DataFrame = [col1: string, col2: int]
      
      scala> df2.filter("col1 = 'abc' OR (col1 != 'abc' AND col2 == 3)").show()
      +----+----+
      |col1|col2|
      +----+----+
      | abc|   1|
      |null|   3|
      +----+----+
      
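      The (null, 3) row should not be returned here: for that row both col1 = 'abc' and col1 != 'abc' evaluate to NULL, so the whole predicate evaluates to NULL rather than TRUE, and the output above shows the wrong result. The root cause appears to be that BooleanSimplification rewrites predicates of the form a OR (NOT(a) AND b) into a OR b, a rewrite that is only equivalent when a cannot be NULL. The following is a minimal Scala sketch of SQL three-valued logic that illustrates the unsoundness; the ThreeValuedLogic object and its helpers are invented for this example and are not Spark code.

        // SQL-style three-valued logic modeled with Option[Boolean]; None plays the role of NULL.
        // Illustration only: shows why  a OR (NOT(a) AND b)  =>  a OR b  is unsound when a can be NULL.
        object ThreeValuedLogic extends App {
          def not(a: Option[Boolean]): Option[Boolean] = a.map(!_)

          def and(a: Option[Boolean], b: Option[Boolean]): Option[Boolean] = (a, b) match {
            case (Some(false), _) | (_, Some(false)) => Some(false)
            case (Some(true), Some(true))            => Some(true)
            case _                                   => None // NULL
          }

          def or(a: Option[Boolean], b: Option[Boolean]): Option[Boolean] = (a, b) match {
            case (Some(true), _) | (_, Some(true)) => Some(true)
            case (Some(false), Some(false))        => Some(false)
            case _                                 => None // NULL
          }

          // For the (null, 3) row: a = (col1 = 'abc') is NULL, b = (col2 = 3) is TRUE.
          val a: Option[Boolean] = None
          val b: Option[Boolean] = Some(true)

          println(or(a, and(not(a), b))) // None       -> the row must be filtered out
          println(or(a, b))              // Some(true) -> the simplified predicate keeps the row
        }

      With correct null handling (see the Fix Version/s above), the filter should return only the ('abc', 1) row.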


          People

            Assignee: Xiao Li (smilegator)
            Reporter: Xiao Li (smilegator)
            Votes: 0
            Watchers: 3
