Spark / SPARK-32962

Spark Streaming


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Trivial
    • Resolution: Invalid
    • Affects Version/s: 2.4.5
    • Fix Version/s: None
    • Component/s: DStreams
    • Labels: None

    Description

      Hey there,

      I'm using a Spark Streaming job that is integrated with Kafka (and manages its offset commits in Kafka itself).

      The problem is that when a failure occurs I want to reprocess the offset ranges that something went wrong with, so I catch the exception and do NOT commit that range (with commitAsync).

      However, I notice the stream keeps proceeding even though no commit was made.

      Moreover, I later removed all the commitAsync calls and the stream still kept proceeding!

      I guess there might be some inner cache or something that lets the streaming job keep consuming entries from Kafka.

       

      Could you please advise?
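
      For reference, here is a minimal sketch of the commit-on-success pattern described above, assuming the job uses the spark-streaming-kafka-0-10 direct stream API. The broker address, topic, group id, and per-record processing are hypothetical placeholders, not taken from the actual job:

      import org.apache.kafka.common.serialization.StringDeserializer
      import org.apache.spark.SparkConf
      import org.apache.spark.streaming.{Seconds, StreamingContext}
      import org.apache.spark.streaming.kafka010.{CanCommitOffsets, HasOffsetRanges, KafkaUtils}
      import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
      import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

      object CommitOnSuccessOnly {
        def main(args: Array[String]): Unit = {
          val conf = new SparkConf().setAppName("kafka-offset-commit-sketch")
          val ssc  = new StreamingContext(conf, Seconds(10))

          val kafkaParams = Map[String, Object](
            "bootstrap.servers"  -> "localhost:9092",            // hypothetical broker
            "key.deserializer"   -> classOf[StringDeserializer],
            "value.deserializer" -> classOf[StringDeserializer],
            "group.id"           -> "example-group",             // hypothetical group id
            "auto.offset.reset"  -> "latest",
            "enable.auto.commit" -> (false: java.lang.Boolean)   // offsets are committed manually below
          )

          val stream = KafkaUtils.createDirectStream[String, String](
            ssc, PreferConsistent, Subscribe[String, String](Seq("example-topic"), kafkaParams))

          stream.foreachRDD { rdd =>
            // The exact offset ranges that make up this batch.
            val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
            try {
              rdd.foreach { record =>
                // hypothetical per-record processing
                println(record.value)
              }
              // Commit to Kafka only after the whole batch succeeded.
              stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
            } catch {
              case e: Exception =>
                // Skip the commit so the range is replayed when the job restarts.
                // Note: while the job keeps running, the direct stream continues from
                // positions it tracks internally, so skipping the commit alone does not
                // make the current run re-read these offsets.
                println(s"Batch failed, offsets not committed: ${e.getMessage}")
            }
          }

          ssc.start()
          ssc.awaitTermination()
        }
      }

      If I understand the 0-10 integration correctly, offsets committed with commitAsync only determine where the consumer group resumes after the application is restarted; a running direct stream keeps consuming from the positions it tracks itself, which would explain why the stream kept proceeding even without commits.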

      Attachments

        Activity

          People

            Assignee: Unassigned
            Reporter: Amit Menashe (amit.menashe)
            Votes: 0
            Watchers: 1
