Details
- Type: Improvement
- Status: Closed
- Priority: Major
- Resolution: Fixed
Description
Logging this issue based on an offline discussion with catholicon.
Currently OakDirectory stores files in chunks of 1 MB each, so a 1 GB file is stored in 1000+ chunks of 1 MB.
This design was chosen to support direct usage of OakDirectory with Lucene, since Lucene makes use of random IO. Chunked storage allows it to seek to a random position quickly. If files were stored as single blobs, they could only be accessed via streaming, which would be slow.
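To illustrate why chunking makes random access cheap, here is a minimal sketch (hypothetical class and method names, not OakDirectory's actual code) of how an absolute seek position maps to a chunk index and an offset within that chunk, assuming the 1 MB chunk size described above:

```java
// Hypothetical illustration of chunk addressing; not Oak's actual implementation.
public class ChunkAddressing {
    // 1 MB chunk size, as used by OakDirectory per the description above.
    static final int CHUNK_SIZE = 1024 * 1024;

    // Which chunk holds the byte at this absolute file position.
    static int chunkIndex(long position) {
        return (int) (position / CHUNK_SIZE);
    }

    // Offset of that byte within its chunk.
    static int offsetInChunk(long position) {
        return (int) (position % CHUNK_SIZE);
    }

    public static void main(String[] args) {
        long pos = 1_500_000L;                   // a seek into the second chunk
        System.out.println(chunkIndex(pos));     // 1
        System.out.println(offsetInChunk(pos));  // 451424
    }
}
```

A seek thus only needs to load the one chunk blob containing the target position, rather than streaming through the whole file.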
As most setups now use copy-on-read and copy-on-write support and rely on a local copy of the index, we can have an implementation which stores the file as a single blob.
Pros
- Significant reduction in the number of small blobs stored in the BlobStore, which should reduce GC time, especially for S3
- Reduced overhead of storing a single file in the repository: instead of an array of 1000+ blob ids, we would store a single blob id
- Potential improvement in IO cost, as a file can be read in one connection and uploaded in one
Cons
- It would not be possible to use OakDirectory directly (or it would be very slow), and we would always need to do a local copy.
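To give a rough sense of the metadata savings mentioned above, here is a small sketch (hypothetical names; the 1 MB chunk size is taken from the description) comparing how many blob ids must be tracked per file today versus under single-blob storage:

```java
// Hypothetical illustration of blob-id counts; not Oak's actual code.
public class BlobCountComparison {
    static final long CHUNK_SIZE = 1024L * 1024; // 1 MB, per the description

    // Number of chunk blobs needed for a file of the given size (ceiling division).
    static long chunkCount(long fileSizeBytes) {
        return (fileSizeBytes + CHUNK_SIZE - 1) / CHUNK_SIZE;
    }

    public static void main(String[] args) {
        long oneGb = 1024L * 1024 * 1024;
        // Chunked storage: ~1000 blob ids per 1 GB file.
        System.out.println(chunkCount(oneGb)); // 1024
        // Single-blob storage: exactly one blob id per file.
        System.out.println(1);
    }
}
```

So a 1 GB index file drops from 1024 tracked blob ids to one, which is the reduction in small-blob count and per-file metadata that the Pros above refer to.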
Attachments
Issue Links
- is related to
  - OAK-3132 Reduce memory usage of OakIndexFile (Resolved)
- relates to
  - OAK-2808 Active deletion of 'deleted' Lucene index files from DataStore without relying on full scale Blob GC (Closed)
  - OAK-5192 Reduce Lucene related growth of repository size (Closed)
- requires
  - OAK-6576 Refactor OakDirectory to be more manageable (Closed)