Details
Type: Bug
Status: Resolved
Priority: Trivial
Resolution: Invalid
Affects Version/s: 2.4.5
Fix Version/s: None
Component/s: None
Description
Hey there,
I'm running a Spark Streaming job that integrates with Kafka (and manages its offset commits in Kafka itself).
The problem is that when I have a failure I want to reprocess those offset ranges (the ones something went wrong with), so I catch the exception and do NOT commit (with commitAsync) that range.
However, I notice the stream keeps proceeding (without any commit being made).
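Roughly, the relevant part of the job looks like the sketch below, using the standard spark-streaming-kafka-0-10 direct stream API (simplified; the broker, topic, and group names are placeholders and the processing step is a stand-in for the real work):
{code:scala}
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010._
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

object OffsetCommitJob {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(
      new SparkConf().setAppName("offset-commit-job"), Seconds(10))

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",            // placeholder broker
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "my-group",                           // placeholder group
      "enable.auto.commit" -> (false: java.lang.Boolean)  // commits are manual, via commitAsync
    )

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc, PreferConsistent, Subscribe[String, String](Seq("my-topic"), kafkaParams))

    stream.foreachRDD { rdd =>
      // Capture this batch's offset ranges before doing any work
      val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      try {
        rdd.foreach(record => println(record.value))  // stand-in for the real processing
        // Commit only when the batch succeeded
        stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
      } catch {
        case e: Exception =>
          // Skip the commit so this range can be reprocessed later
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
{code}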
Moreover, I later removed all the commitAsync calls and the stream kept proceeding!
I guess there might be some inner cache or something that lets the streaming job keep consuming entries from Kafka.
Could you please advise?