SUREFIRE-2151

Inconsistent console reporter output on failures for parameterized tests, with/without rerunFailingTestsCount


Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 3.0.0-M9
    • Fix Version/s: None
    • Component/s: Maven Surefire Plugin
    • Labels: None

    Description

      The way test failures are displayed by the console reporter is not ideal and partly inconsistent, in particular for parameterized tests (e.g. with JUnit 5).

      Take this small JUnit 5 dummy test as an example:

      package test;

      import org.junit.jupiter.api.Assertions;
      import org.junit.jupiter.params.ParameterizedTest;
      import org.junit.jupiter.params.provider.CsvSource;

      public class DummyTest {
        @ParameterizedTest
        @CsvSource({"yes", "no", "yes", "yes", "no"})
        public void dummyTest(String param) {
          testInternal(param);
        }

        private void testInternal(String arg) {
          // Invocations with "no" as parameter (here 2 and 5) fail
          if (arg.equals("no")) {
            Assertions.fail("If you say 'no', it's a no");
          }
        }
      }

      Running this with Surefire displays the failures like this in the summary at the end of the output:

      [...]
      
      [INFO] Results:
      [INFO]
      [ERROR] Failures:
      [ERROR]   DummyTest.dummyTest:16->testInternal:21 If you say 'no', it's a no
      [ERROR]   DummyTest.dummyTest:16->testInternal:21 If you say 'no', it's a no
      [INFO]
      [ERROR] Tests run: 5, Failures: 2, Errors: 0, Skipped: 0
      [INFO]
      [INFO] ------------------------------------------------------------------------
      [INFO] BUILD FAILURE
      [INFO] ------------------------------------------------------------------------
      
      [...]

      The failures do show parts of the problematic code path, but they carry no information about which invocations of the parameterized test actually failed (in the example, invocations 2 and 5 of the 5). While those details are available in the stack traces further up in the output, it would be quite nice to see them right away in the summary.

      If rerunFailingTestsCount is used (here with a value of 2; a configuration sketch follows the output below), the summary does identify the problematic invocations right away:

      [...]
      
      [INFO] Results:
      [INFO]
      [ERROR] Failures:
      [ERROR] test.DummyTest.dummyTest(String)[2]
      [ERROR]   Run 1: DummyTest.dummyTest:16->testInternal:21 If you say 'no', it's a no
      [ERROR]   Run 2: DummyTest.dummyTest:16->testInternal:21 If you say 'no', it's a no
      [ERROR]   Run 3: DummyTest.dummyTest:16->testInternal:21 If you say 'no', it's a no
      [INFO]
      [ERROR] test.DummyTest.dummyTest(String)[5]
      [ERROR]   Run 1: DummyTest.dummyTest:16->testInternal:21 If you say 'no', it's a no
      [ERROR]   Run 2: DummyTest.dummyTest:16->testInternal:21 If you say 'no', it's a no
      [ERROR]   Run 3: DummyTest.dummyTest:16->testInternal:21 If you say 'no', it's a no
      [INFO]
      [INFO]
      [ERROR] Tests run: 5, Failures: 2, Errors: 0, Skipped: 0
      [INFO]
      [INFO] ------------------------------------------------------------------------
      [INFO] BUILD FAILURE
      [INFO] ------------------------------------------------------------------------
      
      [...] 
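
      For reference, the rerun output above comes from enabling Surefire's rerun feature in the POM, roughly as sketched below (the version is simply the one this issue was observed with, and 2 is the value used in the example):

      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>3.0.0-M9</version>
        <configuration>
          <!-- Rerun each failing test up to 2 additional times. As a side
               effect, failed parameterized invocations are listed per
               invocation (e.g. dummyTest(String)[2]) in the summary. -->
          <rerunFailingTestsCount>2</rerunFailingTestsCount>
        </configuration>
      </plugin>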

      In fact, this more detailed summary is currently the main reason for us to use the rerunFailingTestsCount flag at all, regardless of what that flag is actually meant for, which feels rather weird.

       

      Would it make sense to align these two outputs somehow?

    People

      Assignee: Unassigned
      Reporter: Ralph Weires (rweires)