* jc/autogc:
git-gc --auto: run "repack -A -d -l" as necessary.
git-gc --auto: restructure the way "repack" command line is built.
git-gc --auto: protect ourselves from accumulated cruft
git-gc --auto: add documentation.
git-gc --auto: move threshold check to need_to_gc() function.
repack -A -d: use --keep-unreachable when repacking
pack-objects --keep-unreachable
Export matches_pack_name() and fix its return value
Invoke "git gc --auto" from commit, merge, am and rebase.
Implement git gc --auto
This new option is meant to be used in conjunction with the
options that "git repack -a -d" usually invokes the underlying
pack-objects with. When this option is given, objects that are
unreachable from the refs but are in packs named with the --unpacked=
option are added to the resulting pack, in addition to the reachable
objects that are not in packs marked with *.keep files.
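As a rough illustration of that selection rule (the field names below are
made up for this sketch, not the actual pack-objects data structures):

    struct obj {
        unsigned reachable : 1;         /* reachable from the refs */
        unsigned in_keep_pack : 1;      /* in a pack marked by a *.keep file */
        unsigned in_unpacked_pack : 1;  /* in a pack named via --unpacked= */
    };

    static int keep_unreachable;        /* set when --keep-unreachable is given */

    static int include_in_pack(const struct obj *o)
    {
        if (o->reachable)
            return !o->in_keep_pack;
        return keep_unreachable && o->in_unpacked_pack;
    }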
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This adds a --threads=<n> parameter to 'git pack-objects' with
documentation.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Try to keep objects with the same name hash together.
Suggested by Martin Koegler.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
With this, each thread gets repeatedly assigned the next available chunk of
objects to process until the whole list is done. The idea is to have
reasonably small chunks so that all CPUs remain busy with a minimum
number of threads for as long as there is data to process.
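A minimal, self-contained sketch of this dispatch scheme (plain pthreads
and made-up constants, not the actual pack-objects code): workers
repeatedly claim the next small chunk under a mutex until the whole list
has been handed out.

    #include <pthread.h>

    #define NR_OBJECTS 1000
    #define CHUNK        64
    #define NR_THREADS    4

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int next_obj;                /* first unclaimed object index */

    static void process_one(int idx)
    {
        (void)idx;                      /* stand-in for the real delta search */
    }

    static void *worker(void *arg)
    {
        (void)arg;
        for (;;) {
            int start, end, i;

            pthread_mutex_lock(&lock);
            start = next_obj;
            next_obj += CHUNK;
            pthread_mutex_unlock(&lock);

            if (start >= NR_OBJECTS)
                return NULL;            /* nothing left to claim */
            end = start + CHUNK;
            if (end > NR_OBJECTS)
                end = NR_OBJECTS;
            for (i = start; i < end; i++)
                process_one(i);
        }
    }

    int main(void)
    {
        pthread_t th[NR_THREADS];
        int i;

        for (i = 0; i < NR_THREADS; i++)
            pthread_create(&th[i], NULL, worker, NULL);
        for (i = 0; i < NR_THREADS; i++)
            pthread_join(th[i], NULL);
        return 0;
    }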
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is still rough, hence it is disabled by default. You need to compile
with "make THREADED_DELTA_SEARCH=1 ..." at the moment.
Threading is done on different portions of the object list to be
deltified. This is currently done by splitting the list into n parts and
then spawning a thread for each of them. A better method would consist
of splitting the list into many smaller parts and having the n threads
pick the next available part.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is to help threadification of the delta search code, with a bonus
consistency check.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Not all objects are subject to deltification, so avoid carrying those
along, and provide the real count to the progress display.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is based on Martin Koegler's idea to keep the object that
was successfully used as the base of the delta when it is about
to fall off the edge of the window. Instead of doing so only
for the objects at the edge of the window, this makes the window
an LRU eviction mechanism. If an entry is used as a base, it is
moved to the end of the queue, making it the last to be evicted.
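A simplified illustration of that move-to-tail behaviour (using an array
of pointers for brevity; as explained below, the actual patch keeps the
original plain-array window):

    #include <string.h>

    #define WINDOW 10

    struct unpacked;                        /* delta-search candidate, opaque here */
    static struct unpacked *window[WINDOW]; /* window[0] is evicted first */
    static int window_used;

    /* The entry at idx just served as a good delta base: move it to the
     * tail so it becomes the last one to be evicted. */
    static void promote_base(int idx)
    {
        struct unpacked *base = window[idx];

        memmove(&window[idx], &window[idx + 1],
                (window_used - idx - 1) * sizeof(*window));
        window[window_used - 1] = base;
    }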
This is a quick-and-dirty implementation, as it keeps the original
implementation of the data structure used for the window. This
originally was done as an array, not as an array of pointers,
because it was meant to be used as a cyclic FIFO buffer and a
plain array avoids an extra pointer indirection, while its FIFOness
meant that we are not "moving" the entries like this patch does.
The runtimes of the three versions were comparable. It seems to
make the resulting chains even shorter, which can only be good.
(stock "master") 15782196 bytes
chain length = 1: 2972 objects
chain length = 2: 2651 objects
chain length = 3: 2369 objects
chain length = 4: 2121 objects
chain length = 5: 1877 objects
...
chain length = 46: 490 objects
chain length = 47: 515 objects
chain length = 48: 527 objects
chain length = 49: 570 objects
chain length = 50: 408 objects
(with your patch) 15745736 bytes (0.23% smaller)
chain length = 1: 3137 objects
chain length = 2: 2688 objects
chain length = 3: 2322 objects
chain length = 4: 2146 objects
chain length = 5: 1824 objects
...
chain length = 46: 503 objects
chain length = 47: 509 objects
chain length = 48: 536 objects
chain length = 49: 588 objects
chain length = 50: 357 objects
(with this patch) 15612086 bytes (1.08% smaller)
chain length = 1: 4831 objects
chain length = 2: 3811 objects
chain length = 3: 2964 objects
chain length = 4: 2352 objects
chain length = 5: 1944 objects
...
chain length = 46: 327 objects
chain length = 47: 353 objects
chain length = 48: 304 objects
chain length = 49: 298 objects
chain length = 50: 135 objects
[jc: this is with code simplification follow-up from Nico]
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The code favoring shallower deltas when size is equal was triggered
only when the previous delta was also cached. There should be no relation
between cached deltas and same-sized deltas.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When a thin pack wants to send a tree object at "sub/dir", and
the commit that is common between the sender and the receiver
that is used as the base object has a subproject at that path,
we should not try to use the data at "sub/dir" of the base tree
as a tree object. It is not a tree to begin with, and more
importantly, the commit object there does not even have to
exist.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Not only are they unused, but the order in the function declaration
and the actual usage don't match.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
xmkstemp() performs error checking and prints a standard error message when
an error occurs.
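A minimal sketch of such a wrapper, assuming the behaviour described above
(the actual helper may report errors differently):

    #include <stdio.h>
    #include <stdlib.h>

    int xmkstemp(char *template)
    {
        int fd = mkstemp(template);

        if (fd < 0) {
            fprintf(stderr, "Unable to create temporary file '%s'\n", template);
            exit(1);
        }
        return fd;
    }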
Signed-off-by: Luiz Fernando N. Capitulino <lcapitulino@mandriva.com.br>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit 5a235b5e was missing this little detail. Otherwise your pack
will explode.
Problem noted by Brian Downing.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The delta depth doesn't have to be stored in the global object array
structure since it is only used during the deltification pass.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This adds an option (--window-memory=N) and configuration variable
(pack.windowMemory = N) to limit the memory size of the pack-objects
delta search window. This works by removing the oldest unpacked objects
whenever the total size goes above the limit. It will always leave
at least one object, though, so as not to completely eliminate the
possibility of computing deltas.
This is an extra limit on top of the normal window size (--window=N);
the window will not dynamically grow above the fixed number of entries
specified to fill the memory limit.
With this, repacking a repository with a mix of large and small objects
is possible even with a very large window.
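In rough outline (names and bookkeeping are illustrative, not the actual
pack-objects internals), the eviction works like this:

    static unsigned long window_memory_usage; /* bytes held by window entries */
    static unsigned long window_memory_limit; /* --window-memory / pack.windowMemory, 0 = off */
    static int window_used;                   /* entries currently in the window */

    /* stand-in for freeing the oldest unpacked entry; returns bytes released */
    static unsigned long free_oldest_entry(void)
    {
        window_used--;
        return 0;
    }

    static void enforce_window_memory(void)
    {
        while (window_memory_limit &&
               window_memory_usage > window_memory_limit &&
               window_used > 1)               /* always leave at least one object */
            window_memory_usage -= free_oldest_entry();
    }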
Cleaner and correct circular buffer handling courtesy of Nicolas Pitre.
Signed-off-by: Brian Downing <bdowning@lavos.net>
Acked-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a new try_delta heuristic. Don't bother trying to make a delta if
the target object size is much smaller (currently 1/32) than the source,
as it's very likely not going to get a match. Even if it does, you will
have to read at least 32x the size of the new file to reassemble it,
which isn't such a good deal. This leads to a considerable performance
improvement when deltifying a mix of small and large files with a very
large window, because you don't have to wait for the large files to
percolate out of the window before things start going fast again.
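As a hedged sketch (not the verbatim try_delta() code), the check amounts
to an early return like this:

    /* Skip the attempt when the target is much smaller than the candidate
     * base: even a successful delta would force readers to inflate roughly
     * 32x the target's size just to reassemble it. */
    static int worth_a_delta_attempt(unsigned long trg_size, unsigned long src_size)
    {
        return trg_size >= src_size / 32;
    }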
Signed-off-by: Brian Downing <bdowning@lavos.net>
Acked-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We already apply a bias on the initial delta attempt with max_size being
a function of the base object depth. This has the effect of favoring
shallower deltas even if deeper deltas could be smaller, and therefore
creating a wider delta tree (see commits 4e8da195 and c3b06a69).
This principle should also be applied to all delta attempts for the same
object and not only the first attempt. With this, the criterion for the
best delta is not only its size but also its depth, so that a shallower
delta might be selected even if it is larger than a deeper one. Even if
some deltas get larger, they allow for wider delta trees, so the depth
limit is reached less quickly and better deltas can subsequently be
found, keeping the resulting pack size even smaller.
Runtime access to the pack should also benefit from shallower deltas.
Testing on different repositories showed slightly faster repacks,
smaller resulting packs, and a much nicer curve for delta depth
distribution with no more peak at the maximum depth level.
Improvements are even more significant with smaller depth limits.
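One way to picture the bias (the formula and names below are purely
illustrative, not the exact code): the size a new delta must beat shrinks
as the candidate base sits deeper in its chain, so a shallower base can
win even with a slightly larger delta.

    /* Illustrative only: cap the acceptable delta size by how deep the
     * candidate base already is, relative to the configured --depth. */
    static unsigned long biased_size_limit(unsigned long best_size,
                                           unsigned int base_depth,
                                           unsigned int max_depth)
    {
        if (base_depth >= max_depth)
            return 0;                   /* base already at the depth limit */
        return best_size * (max_depth - base_depth) / max_depth;
    }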
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Change "try_delta" so that if it finds a delta that has the same size
but shallower depth than the existing delta, it will prefer the
shallower one. This makes certain delta trees vastly less deep.
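Sketched out with illustrative names, the comparison becomes:

    /* Prefer a strictly smaller delta; at equal size, prefer the one
     * whose base chain is shallower. */
    static int better_than_current(unsigned long new_size, unsigned int new_depth,
                                   unsigned long cur_size, unsigned int cur_depth)
    {
        if (new_size != cur_size)
            return new_size < cur_size;
        return new_depth < cur_depth;
    }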
Signed-off-by: Brian Downing <bdowning@lavos.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This uses "git-apply --whitespace=strip" to fix whitespace errors that have
crept in to our source files over time. There are a few files that need
to have trailing whitespaces (most notably, test vectors). The results
still passes the test, and build result in Documentation/ area is unchanged.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This patch unifies the write_index_file functions in
builtin-pack-objects.c and index-pack.c. As the name
"index" is overloaded in git, move in the direction of
using "idx" and "pack idx" when refering to the pack index.
There should be no change in functionality.
Signed-off-by: Geert Bosch <bosch@gnat.com>
Acked-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Two issues here:
1) git-repack -a --max-pack-size=10 on the GIT repo dies pretty quick.
There is a lot of confusion about deltas that were supposed to be
reused from another pack but that get stored undeltified due to the pack
limit, so the object size doesn't match entry->size anymore. This test
is not really worth the complexity of determining when it is valid,
so get rid of it.
2) If the pack limit is reached, the object buffer is freed, including when
it comes from cached delta data. In practice the object will be
stored in a subsequent pack undeltified, but let's make sure no
pointer to freed data survives by clearing entry->delta_data.
I also reorganized that code a bit to make it more readable.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Creating deltas between big blobs is a CPU and memory intensive task.
In the writing phase, all (not reused) deltas are redone.
This patch adds support for caching deltas from the deltifying phase, so
that the writing phase is faster.
The caching is limited to small deltas to avoid increasing memory usage too much.
The implemented limit is (memory needed to create the delta)/1024.
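As a sketch of that size rule (the helper name and the memory estimate are
assumptions for illustration):

    /* Cache the delta only if it is tiny compared with the memory that
     * creating it needed (roughly: source buffer plus target buffer). */
    static int delta_cacheable(unsigned long delta_size,
                               unsigned long src_size, unsigned long trg_size)
    {
        unsigned long creation_mem = src_size + trg_size;

        return delta_size <= creation_mem / 1024;
    }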
Signed-off-by: Martin Koegler <mkoegler@auto.tuwien.ac.at>
Signed-off-by: Junio C Hamano <junkio@cox.net>
If builtin-pack-objects runs out of memory while finding
the best deltas, it bails out with an error.
If the delta index creation fails (because there is not enough memory),
we can downgrade the error message to a warning and continue with the
next object.
Signed-off-by: Martin Koegler <mkoegler@auto.tuwien.ac.at>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Explain the special code for detecting a corner-case error,
and complain about --stdout & --max-pack-size being used together.
Signed-off-by: Dana L. How <danahow@gmail.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
As we do not consider objects marked as "no-delta" early, there
is no point in checking whether the other objects already in the delta
window are marked as such -- "no-delta" objects will not enter
the window to begin with.
Pointed out by Nico.
Signed-off-by: Junio C Hamano <junkio@cox.net>
This teaches pack-objects to use the .gitattributes mechanism so
that the user can specify that certain blobs are not worth spending
CPU cycles attempting to deltify.
The name of the attribute is "delta", and when it is set to
false, like this:
== .gitattributes ==
*.jpg -delta
they are always stored in the plain-compressed base object
representation.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Instead of giving a hash for grouping, pass fullname to add_object_entry().
I want to add a "do not try deltifying this object" bit to object_entry based on
the settings in .gitattributes, and hashing the name down too early would
interfere with that plan.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Add --max-pack-size parsing and usage messages.
Upgrade git-repack.sh to handle multiple packfile names,
and build packfiles in GIT_OBJECT_DIRECTORY not GIT_DIR.
Update documentation.
Signed-off-by: Dana L. How <danahow@gmail.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Rewrite write_pack_file() to break to a new packfile
whenever write_object/write_one request it, and
correct the header's object count in the previous packfile.
Change write_index_file() to write an index
for just the objects in the most recent packfile.
Signed-off-by: Dana L. How <danahow@gmail.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
With --max-pack-size, generate the appropriate write limit
for each object and check against it before each group of writes.
Update delta usability rules to handle the base being in a previously-
written pack. Inline sha1write_compress() so we know the
exact size of the written data when it needs to be compressed.
Detect and return write "failure".
Signed-off-by: Dana L. How <danahow@gmail.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Add "pack_size_limit", the limit specified by --max-pack-size,
"written_list", the list of objects written to the current pack,
and "nr_written", the number of objects in written_list.
Put "base_name" at file scope again and add forward declarations.
Move the write_index_file() call from cmd_pack_objects() to
write_pack_file() since only the latter will know how
many times to call write_index_file().
Signed-off-by: Dana L. How <danahow@gmail.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Add config variables pack.compression and core.loosecompression,
and the --compression=level switch to pack-objects.
Loose objects will be compressed using core.loosecompression if set,
else core.compression if set, else Z_BEST_SPEED.
Packed objects will be compressed using --compression=level if seen,
else pack.compression if set, else core.compression if set,
else Z_DEFAULT_COMPRESSION. This is the "pack compression level".
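The pack compression level precedence can be sketched like this (a
simplified illustration, not the actual config-parsing code):

    #include <zlib.h>

    #define LEVEL_UNSET (-1000)         /* illustrative sentinel for "not configured" */

    static int pack_compression_level(int cmdline,  /* --compression=level */
                                      int pack_cfg, /* pack.compression */
                                      int core_cfg) /* core.compression */
    {
        if (cmdline != LEVEL_UNSET)
            return cmdline;
        if (pack_cfg != LEVEL_UNSET)
            return pack_cfg;
        if (core_cfg != LEVEL_UNSET)
            return core_cfg;
        return Z_DEFAULT_COMPRESSION;
    }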
Loose objects added to a pack undeltified will be recompressed
to the pack compression level if it is unequal to the current
loose compression level by the preceding rules, or if the loose
object was written while core.legacyheaders = true. Newly
deltified loose objects are always compressed to the current
pack compression level.
Previously packed objects added to a pack are recompressed
to the current pack compression level exactly when their
deltification status changes, since the previous pack data
cannot be reused.
In either case, the --no-reuse-object switch from the first
patch below will always force recompression to the current pack
compression level, instead of assuming the pack compression level
hasn't changed and pack data can be reused when possible.
This applies on top of the following patches from Nicolas Pitre:
[PATCH] allow for undeltified objects not to be reused
[PATCH] make "repack -f" imply "pack-objects --no-reuse-object"
Signed-off-by: Dana L. How <danahow@gmail.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Now that we encourage and actively preserve objects in a packed form
more aggressively than we did at the time the new loose object format and
core.legacyheaders were introduced, that extra loose object format
doesn't appear to be worth it anymore.
Because the packing of loose objects has to go through the delta match
loop anyway, and since most of them should end up being deltified in
most cases, there is really little advantage to having this parallel loose
object format, as the CPU savings it might provide are rather lost in the
noise in the end.
This patch gets rid of core.legacyheaders, preserves the legacy format as
the only writable loose object format, and deprecates the other one to
keep things simpler.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Currently, non-deltified object data is always reused when possible.
This means that any change to core.compression has no effect on those
objects as they don't get recompressed when repacking them.
Let's add a --no-reuse-object flag to git-repack in order to force
recompression of all objects when desired.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
If the progress bar ends up in a box, better provide a title for it too.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Instead of having this code duplicated in multiple places, let's have
a common interface for progress display. If someday someone wishes to
display a cheesy progress bar instead, then only one file will have to
be changed.
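A hypothetical minimal shape for such a shared interface (names and
signatures here are assumptions, not git's actual progress code; the point
is that one place owns the output and callers only report counts):

    #include <stdio.h>

    struct progress {
        const char *title;
        unsigned total, last_percent;
    };

    void start_progress(struct progress *p, const char *title, unsigned total)
    {
        p->title = title;
        p->total = total;
        p->last_percent = 101;          /* force the first update to print */
    }

    void display_progress(struct progress *p, unsigned n)
    {
        unsigned percent = p->total ? n * 100 / p->total : 0;

        if (percent != p->last_percent) {
            p->last_percent = percent;
            fprintf(stderr, "%s: %3u%% (%u/%u)\r", p->title, percent, n, p->total);
        }
    }

    void stop_progress(struct progress *p)
    {
        fprintf(stderr, "%s: %u, done.\n", p->title, p->total);
    }

    int main(void)
    {
        struct progress p;
        unsigned i;

        start_progress(&p, "Deltifying objects", 1000);
        for (i = 1; i <= 1000; i++)
            display_progress(&p, i);
        stop_progress(&p);
        return 0;
    }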
Note: I left merge-recursive.c out since it has a strange notion of
progress, as it apparently increases the expected total number as it goes.
Someone with more intimate knowledge of what that is supposed to mean
might look at converting it to the common progress interface.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>