Details

    Description

      We need to ensure that all exec nodes can support rows larger than the default page size. The default page size will be a query option, so users can always increase it; however, minimum memory requirements scale proportionally with page size, which makes that approach less appealing.

      We should also add a max_row_size query option that controls the maximum size of rows supported by operators (at least those that use the reservation mechanism). We should be able to support large rows with only a single read and a single write buffer of the max row size. That is, the minimum reservation for an operator would be ((min_buffers - 2) * default_buffer_size) + 2 * max_row_size. This requires the following changes to the operators:

      BufferedTupleStream changes:

      • Rows <= the default page size are written as before
      • Rows that don't fit in the default page size get written into a larger page, with one row per page.
      • Upon writing a large row to an unpinned stream, the page is unpinned immediately and we advance to the next write page, so that the large page is not kept pinned beyond the AddRow() call.
      • We should only be reading from one unpinned stream at a time, so only one large page is required there.

      Sorter changes:

      • Use buffers as large as the largest supported row.

      Testing:
      Needs end-to-end tests exercising all operators with large rows.

      People

        tarmstrong Tim Armstrong