Reduced fetch.unpackLimit to minimum #369
Merged
Scalar already manages the multiple-pack-file situation with the
multi-pack-index. This means that adding packs to the Git repo is
relatively low cost.
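
For reference, a minimal sketch of how to check that, assuming a repo where Scalar has enabled the standard Git setting `core.multiPackIndex`:

```shell
# Assumption: Scalar has enabled the standard multi-pack-index setting.
git config core.multiPackIndex   # expect "true"

# The multi-pack-index file sits alongside the packs it covers.
ls .git/objects/pack/multi-pack-index
```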
Reducing `fetch.unpackLimit` to 1 (from its default value of 100) ensures
that every fetch stores the transferred pack as-is on disk, instead of
spending computation to unpack the data into loose objects. This should
make fetching a little faster for any large fetch that would otherwise be
unpacked into many objects (i.e., up to 99 objects under the default limit).
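
As a minimal sketch, the behavior can be reproduced in any Git repo:

```shell
# 1 means "always keep the transferred pack"; the default threshold of
# 100 would explode fetches of up to 99 objects into loose objects.
git config fetch.unpackLimit 1

# After a fetch, the transferred objects land as a new pack under
# .git/objects/pack/ instead of as loose objects under .git/objects/.
git fetch origin
ls .git/objects/pack/
```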
These objects, instead of being unpacked and written to disk
individually, are kept in a packfile and then consolidated by the
incremental repack step using the multi-pack-index.
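
That consolidation happens in Scalar's background maintenance; as an illustrative sketch, the equivalent plain-Git steps (the batch size here is an assumed example) look like:

```shell
# Cover all current packs, including the small per-fetch ones, with a
# fresh multi-pack-index.
git multi-pack-index write

# Drop packs whose objects are no longer referenced through the index.
git multi-pack-index expire

# Roll packs smaller than the batch size into a consolidated pack.
git multi-pack-index repack --batch-size=2g
```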
On a busy repo, this potentially makes the incremental repack with the
multi-pack-index work a bit harder. But I think this is worth the
computation trade-off: it is a cost we pay during background maintenance,
and it allows fetches to run faster to keep the repo up-to-date.