* bc/maint-keep-pack:
t7700: test that 'repack -a' packs alternate packed objects
pack-objects: extend --local to mean ignore non-local loose objects too
sha1_file.c: split has_loose_object() into local and non-local counterparts
t7700: demonstrate mishandling of loose objects in an alternate ODB
builtin-gc.c: use new pack_keep bitfield to detect .keep file existence
repack: do not fall back to incremental repacking with [-a|-A]
repack: don't repack local objects in packs with .keep file
pack-objects: new option --honor-pack-keep
packed_git: convert pack_local flag into a bitfield and add pack_keep
t7700: demonstrate mishandling of objects in packs with a .keep file
* maint:
Start 1.6.0.5 cycle
Fix pack.packSizeLimit and --max-pack-size handling
checkout: Fix "initial checkout" detection
Remove the period after the git-check-attr summary
Conflicts:
RelNotes
If the limit was sufficiently low, having a single object written
could bust the limit (by design), but caused the remaining allowed
size to go negative for subsequent objects, which for an unsigned
variable is a rather huge limit.
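A minimal sketch of the failure mode (the names are illustrative, not the actual pack-objects code):

    #include <stdio.h>

    int main(void)
    {
        unsigned long limit = 1024, written = 4096; /* one object busted the limit, by design */
        unsigned long remaining = limit - written;  /* wraps: effectively no limit at all */
        printf("%lu\n", remaining);                 /* a rather huge number */
        /* the fix amounts to clamping instead of blindly subtracting: */
        remaining = written < limit ? limit - written : 0;
        printf("%lu\n", remaining);                 /* 0: no room left */
        return 0;
    }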
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
With this patch, --local means pack only local objects that are not already
packed.
Additionally, this fixes the t7700 test that checks whether loose objects
in an alternate object database are repacked.
Signed-off-by: Brandon Casey <drafnel@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This adds a new option to pack-objects which causes it to ignore any
object that appears in a local pack with a .keep file, even if that
object was specified for packing.
This option will be used by the porcelain repack.
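The field names below follow the packed_git bitfield mentioned in this
series, but the exact check is only a sketch of the idea:

    /* when deciding whether to add an object to the pack, skip those
     * sitting in a local pack that has a .keep file */
    if (honor_pack_keep && p->pack_local && p->pack_keep)
        return 0;   /* ignore it, even if it was asked for */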
Signed-off-by: Brandon Casey <casey@nrlssc.navy.mil>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When the pack data to be reused is found to be bad, let's fall back to
full object access through the generic path which has its own strategies
to find alternate object sources in that case. This allows for "fixing"
a corrupted pack simply by copying either another pack containing the
object(s) found to be bad, or the loose object itself, into the object
store and launching a repack without the need for -f.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It currently calls die() whenever the given offset is not found, on the
assumption that such a thing should never happen. But this offset may
come from a corrupted pack, which _could_ happen, in which case the
offset will not be found. Callers should deal with this possibility
gracefully instead.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The check_object() function tries to get away with the least amount of
pack access possible when it already has partial information on given
object rather than calling the more costly packed_object_info().
When things don't look right, it should just give up and fall back to
packed_object_info() directly instead of die()'ing.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It is possible to have pack corruption in the object header. Currently
unpack_object_header() simply die()s on such corruption instead of
letting the caller deal with it gracefully.
So let's have unpack_object_header() return an error instead, and find
a better name for unpack_object_header_gently() in that context. All
callers of unpack_object_header() are ready for it.
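A hedged sketch of what a caller can then do, assuming a negative return
value signals a bad header (the exact signature is an assumption here):

    enum object_type type;
    unsigned long size;

    type = unpack_object_header(p, &w_curs, &curpos, &size);
    if (type < 0)
        return error("corrupt object header in pack"); /* recover instead of dying */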
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In one case, it was possible to have a bad offset equal to 0 effectively
pointing a delta onto itself and crashing git after too many recursions.
In the other cases, a negative offset could result due to off_t being
signed. Catch those.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Abstract
--------
With index v2 we have a per object CRC to allow quick and safe reuse of
pack data when repacking. This, however, doesn't currently prevent a
stealth corruption from being propagated into a new pack when _not_
reusing pack data as demonstrated by the modification to t5302 included
here.
The Context
-----------
The Git database is all checksummed with SHA1 hashes. Any kind of
corruption can be confirmed by verifying this per object hash against
corresponding data. However this can be costly to perform systematically
and therefore this check is often not performed at run time when
accessing the object database.
First, the loose object format is entirely compressed with zlib, which
already provides a CRC verification of its own when inflating data. Any
disk corruption would already be caught in this case.
Then, packed objects are also compressed with zlib, but only for their
actual payload. The object headers and delta base references are not
deflated for obvious performance reasons; however, this leaves them
vulnerable to potentially undetected disk corruption. Object types
are often validated against the expected type when they're requested,
and the deflated size must always match the size recorded in the object
header, so those cases are pretty much covered as well.
Where corruptions could go unnoticed is in the delta base reference.
Of course, in the OBJ_REF_DELTA case, the odds for a SHA1 reference to
get corrupted so it actually matches the SHA1 of another object with the
same size (the delta header stores the expected size of the base object
to apply against) are virtually zero. In the OBJ_OFS_DELTA case, the
reference is a pack offset which would have to match the start boundary
of a different base object but still with the same size, and although this
is relatively much more "probable" than in the OBJ_REF_DELTA case, the
probability is also about zero in absolute terms. Still, the possibility
exists as demonstrated in t5302 and is certainly greater than a SHA1
collision, especially in the OBJ_OFS_DELTA case which is now the default
when repacking.
Again, repacking by reusing existing pack data is OK since the per object
CRC provided by index v2 guards against any such corruptions. What t5302
failed to test is a full repack in such a case.
The Solution
------------
As unlikely as this kind of stealth corruption can be in practice, it
certainly isn't acceptable to propagate it into a freshly created pack.
But, because this is so unlikely, we don't want to pay the run time cost
associated with extra validation checks all the time either. Furthermore,
consequences of such corruption in anything but repacking should be rather
visible, and even if it could be quite unpleasant, it still has far less
severe consequences than actively creating bad packs.
So the best compromise is to check packed object CRC when unpacking
objects, and only during the compression/writing phase of a repack, and
only when not streaming the result. The cost of this is minimal (less
than 1% CPU time), and visible only with a full repack.
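In code, the extra check boils down to something of this shape (a
sketch; check_pack_crc() is assumed to return non-zero on a CRC
mismatch, and the surrounding variables come from the repack context):

    /* during the writing phase of a non-streamed repack, before
     * recreating an object from existing pack data: */
    if (p->index_version > 1 &&
        check_pack_crc(p, &w_curs, entry->in_pack_offset, len, nr))
        die("corrupt packed object for %s", sha1_to_hex(entry->sha1));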
Someone with a stats background could provide an objective evaluation of
this, but I suspect that it's bad RAM that has more potential for data
corruptions at this point, even in those cases where this extra check
is not performed. Still, it is best to prevent a known hole for
corruption when recreating object data into a new pack.
What about the streamed pack case? Well, any client receiving a pack
must always consider that pack as untrusted and perform full validation
anyway, hence no such stealth corruption could be propagated to remote
repositories already. It is therefore pointless to do local validation
in that case.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* maint:
Start 1.6.0.4 cycle
add instructions on how to send patches to the mailing list with Gmail
Documentation/gitattributes: Add subsection header for each attribute
git send-email: avoid leaking directory file descriptors.
send-pack: do not send out single-level refs such as refs/stash
fix overlapping memcpy in normalize_absolute_path
pack-objects: avoid reading uninitialized data
correct cache_entry allocation
Conflicts:
RelNotes
In the main loop of find_deltas, we do:
    struct object_entry *entry = *list++;
    ...
    if (!*list_size)
            ...
            break
Because we look at and increment *list _before_ the check of
list_size, in the very last iteration of the loop we will
look at uninitialized data, and increment the pointer beyond
one past the end of the allocated space. Since we don't
actually do anything with the data until after the check,
this is not a problem in practice.
But since it technically violates the C standard, and
because it provokes a spurious valgrind warning, let's just
move the initialization of entry to a safe place.
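The fixed loop then has roughly this shape (a sketch, not the verbatim
patch):

    while (1) {
        struct object_entry *entry;
        /* ... */
        if (!*list_size)
            break;          /* check first ... */
        entry = *list++;    /* ... then read, so we never touch memory
                             * one past the end of the list */
        /* ... use entry ... */
    }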
This fixes valgrind errors in t5300, t5301, t5302, t5303, and
t9400.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Many call sites immediately initialize allocated memory with zero after
calling xmalloc. A single call to xcalloc can replace this two-call
sequence.
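For illustration, a before/after pair (the call site is made up):

    /* before: allocate, then clear */
    struct object_entry *objects = xmalloc(nr * sizeof(*objects));
    memset(objects, 0, nr * sizeof(*objects));

    /* after: one call, already zeroed */
    struct object_entry *objects = xcalloc(nr, sizeof(*objects));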
Signed-off-by: Brandon Casey <casey@nrlssc.navy.mil>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
A comment on top of create_tmpfile() describes caveats ('can have
problems on various systems (FAT, NFS, Coda)') that should apply
in this situation as well. This did not end up solving any of my
personal problems, but it might be a useful cleanup patch
nevertheless.
Signed-off-by: Petr Baudis <pasky@suse.cz>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
User notifications are presented as 'git cmd', and code comments
are presented as '"cmd"' or 'git's cmd', rather than 'git-cmd'.
Signed-off-by: Heikki Orsila <heikki.orsila@iki.fi>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* np/maint-safer-pack:
fixup_pack_header_footer(): use nicely aligned buffer sizes
index-pack: use fixup_pack_header_footer()'s validation mode
pack-objects: use fixup_pack_header_footer()'s validation mode
improve reliability of fixup_pack_header_footer()
pack-objects: improve returned information from write_one()
This improves commit 6d6f9cddbe a bit by simply not including missing
bases in the list of objects to process at all.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* np/maint-safer-pack:
fixup_pack_header_footer(): use nicely aligned buffer sizes
index-pack: use fixup_pack_header_footer()'s validation mode
pack-objects: use fixup_pack_header_footer()'s validation mode
improve reliability of fixup_pack_header_footer()
pack-objects: improve returned information from write_one()
When limiting the pack size, a new header has to be written to the
pack and a new SHA1 computed. Make sure that the SHA1 of what is being
read back matches the SHA1 of what was written.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Currently, this function has the potential to read corrupted pack data
from disk and give it a valid SHA1 checksum. Let's add the ability to
validate the SHA1 checksum of existing data along the way, including before
and after any arbitrary point in the pack.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This function returns 0 when the current object couldn't be written
due to the pack size limit, otherwise the current offset in the pack.
There is a problem with this approach, however, since the current object
could be a delta and its delta base might just have been written in
the same write_one() call, but those successfully written objects are
not accounted for in the offset variable tracked by the caller. Currently
this is not an issue but a subsequent patch will need this.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The index-pack command, when processing a thin pack, fixes up the pack
after the fact. It forgot to fsync the result, because it only did so
in one path rather than in all cases of fixup.
This moves the fsync_or_die() to the fix-up routine itself, rather than
doing it in one of the callers, so that all cases are covered.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If we are building a thin pack and one of the base objects we would
consider for deltification is missing, it's OK; the other side already
has that base object. We may be able to get a delta from another
object, or we can simply send the new object whole (no delta).
This change allows a shallow clone to store only the objects which
are unique to it, as well as the boundary commit and its trees, but
avoids storing the boundary blobs. This special form of a shallow
clone is able to represent just the difference between two trees.
Pack objects change suggested by Nicolas Pitre.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Acked-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When printing values of type uint32_t, we should use PRIu32, and should
not assume that it is unsigned int. On 32-bit platforms, it could be
defined as unsigned long. The same caution applies to ntohl().
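For example (a generic illustration, not a specific call site from the
patch):

    #include <inttypes.h>
    #include <stdio.h>

    void show(uint32_t nr_objects)
    {
        /* wrong: assumes uint32_t is unsigned int */
        /* printf("%u\n", nr_objects); */

        /* right: portable across 32- and 64-bit platforms */
        printf("%" PRIu32 "\n", nr_objects);
    }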
Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
To do so, check_pack_crc() moved from builtin-pack-objects.c to
pack-check.c where it is more logical to share.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This makes life much easier for the next patch, as well as being more efficient
when the revindex is actually not used.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since the pack-files are now always created stably on disk, there is no
need to sync() before pruning loose objects or old stale pack-files.
[jc: with Nico's clean-up]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This means that we can depend on packs always being stable on disk,
simplifying a lot of the object serialization worries. And unlike loose
objects, serializing pack creation IO isn't going to be a performance
killer.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* bc/repack:
Documentation/git-repack.txt: document new -A behaviour
let pack-objects do the writing of unreachable objects as loose objects
add a force_object_loose() function
builtin-gc.c: deprecate --prune, it now really has no effect
git-gc: always use -A when manually repacking
repack: modify behavior of -A option to leave unreferenced objects unpacked
Conflicts:
builtin-pack-objects.c
git_config() only had a function parameter, but no callback data
parameter. This assumes that all callback functions only modify
global variables.
With this patch, every callback gets a void * parameter, and it is hoped
that this will help the libification effort.
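A hedged sketch of the new calling convention (the struct and callback
here are illustrative, not part of the patch):

    struct my_options { int verbose; };

    static int my_config_cb(const char *key, const char *value, void *cb)
    {
        struct my_options *opts = cb;   /* no global needed anymore */
        if (!strcmp(key, "my.verbose"))
            opts->verbose = git_config_bool(key, value);
        return 0;
    }

    /* caller: git_config(my_config_cb, &opts); */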
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit ccc1297226 changed the behavior
of 'git repack -A' so unreachable objects are stored as loose objects.
However it did so in a naive and inefficient way by making packs
about to be deleted inaccessible and feeding their content through
'git unpack-objects'. While this works, there are major flaws with
this approach:
- It is unacceptably sloooooooooooooow.
In the Linux kernel repository with no actual unreachable objects,
doing 'git repack -A -d' before:
real 2m33.220s
user 2m21.675s
sys 0m3.510s
And with this change:
real 0m36.849s
user 0m24.365s
sys 0m1.950s
For reference, here's the timing for 'git repack -a -d':
real 0m35.816s
user 0m22.571s
sys 0m2.011s
This is explained by the fact that 'git unpack-objects' was used to
unpack _every_ object even if (almost) 100% of them were thrown away.
- There is a blackout period.
Between the removal of the .idx file for the redundant pack and the
completion of its unpacking, the unreachable objects become completely
inaccessible. This is not a big issue as we're talking about unreachable
objects, but some consistency is always good.
- There is no way to easily set a sensible mtime for the newly created
unreachable loose objects.
So, while having a command called "pack-objects" to perform object
unpacking looks really odd, this is probably the best compromise to be
able to solve the above issues in an efficient way.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The 'depth' variable doesn't reflect the actual maximum depth used
when other objects already depend on the current one.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When the delta data is cached in memory until it is written to a pack
file on disk, it is best to compress it right away in find_deltas() for
the following reasons:
- we have to compress that data anyway;
- this allows for caching more deltas with the same cache size limit;
- compression is potentially threaded.
This last point is especially relevant for SMP run time. For example,
repacking the Linux repo on a quad core processor using 4 threads with
all default settings produces the following results before this change:
real 2m27.929s
user 4m36.492s
sys 0m3.091s
And with this change applied:
real 2m13.787s
user 4m37.486s
sys 0m3.159s
So the total CPU time stayed more or less the same but the
wall clock time is shorter.
This is however not a good thing to do when generating a pack for
network transmission. In that case, the network is most likely to
throttle the data throughput, so it is best to make find_deltas()
faster in order to start writing data ASAP since we can afford
spending more time between writes to compress the data
at that point.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Parsing !no_reuse_delta everywhere makes my brain spend extra
cycles wondering each time.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Runtime pack access is done in the pack file mtime order since recent
packs are more likely to contain frequently used objects than old packs.
However the --max-pack-size option can produce multiple packs with mtime
in the reversed order as newer objects are always written first.
Let's modify mtime of later pack files (when any) so they appear older
than preceding ones when a repack creates multiple packs.
Signed-off-by: Nicolas Pitre <nico@cam.org>
The new option "--include-tag" allows the caller to request that
any annotated tag be included into the packfile if the object the tag
references was also included as part of the packfile.
This option can be useful on the server side of a native git transport,
where the server knows what commits it is including into a packfile to
update the client. If new annotated tags have been introduced then we
can also include them in the packfile, saving the client from needing
to request them through a second connection.
This change only introduces the backend option and provides a test.
Protocol extensions to make this useful in fetch-pack/upload-pack
are still necessary to activate the logic during transport.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* np/verify-pack:
add storage size output to 'git verify-pack -v'
fix unimplemented packed_object_info_detail() features
make verify_one_pack() a bit less wrong wrt packed_git structure
factorize revindex code out of builtin-pack-objects.c
Conflicts:
Makefile
* mk/maint-parse-careful:
receive-pack: use strict mode for unpacking objects
index-pack: introduce checking mode
unpack-objects: prevent writing of inconsistent objects
unpack-object: cache for non written objects
add common fsck error printing function
builtin-fsck: move common object checking code to fsck.c
builtin-fsck: reports missing parent commits
Remove unused object-ref code
builtin-fsck: move away from object-refs to fsck_walk
add generic, type aware object chain walker
Conflicts:
Makefile
builtin-fsck.c
No functional change. This is needed to fix verify-pack in a later patch.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This reverts commit 6c723f5e6b.
The additional message may be interesting for git developers,
but not useful for the end users, and clutters the output.
Packing objects can be done in parallel nowadays, but it's
only done if the config option pack.threads is set to a value
above 1. Because of that, the code-path used is often not the
most optimal one.
This patch adds a routine to detect the number of online CPU's
at runtime (online_cpus()). When pack.threads (or --threads=) is
given a value of 0, the number of threads is set to the number of
online CPU's. This feature is also documented.
As per Nicolas Pitre's recommendations, the default is still to
run pack-objects single-threaded unless explicitly activated,
either by configuration or by command line parameter.
The routine online_cpus() is a rework of "numcpus.c", written by
one Philip Willoughby <pgw99@doc.ic.ac.uk>. numcpus.c is in the
public domain and can presently be downloaded from
http://csgsoft.doc.ic.ac.uk/numcpus/
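On POSIX systems the detection can be as simple as this sketch (other
platforms need their own probes):

    #include <unistd.h>

    static int online_cpus(void)
    {
        long n = sysconf(_SC_NPROCESSORS_ONLN);
        return n > 0 ? (int)n : 1;  /* fall back to 1 if the probe fails */
    }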
Signed-off-by: Andreas Ericsson <ae@op5.se>
Signed-off-by: Brandon Casey <casey@nrlssc.navy.mil>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This change removes all obvious useless if-before-free tests.
E.g., it replaces code like this:
    if (some_expression)
            free (some_expression);
with the now-equivalent:
    free (some_expression);
It is equivalent not just because POSIX has required free(NULL)
to work for a long time, but simply because it has worked for
so long that no reasonable porting target fails the test.
Here's some evidence from nearly 1.5 years ago:
http://www.winehq.org/pipermail/wine-patches/2006-October/031544.html
FYI, the change below was prepared by running the following:
    git ls-files -z | xargs -0 \
    perl -0x3b -pi -e \
    's/\bif\s*\(\s*(\S+?)(?:\s*!=\s*NULL)?\s*\)\s+(free\s*\(\s*\1\s*\))/$2/s'
Note however, that it doesn't handle brace-enclosed blocks like
"if (x) { free (x); }". But that's ok, since there were none like
that in git sources.
Beware: if you do use the above snippet, note that it can
produce syntactically invalid C code. That happens when the
affected "if"-statement has a matching "else".
E.g., it would transform this
    if (x)
            free (x);
    else
            foo ();
into this:
    free (x);
    else
            foo ();
There were none of those here, either.
If you're interested in automating detection of the useless
tests, you might like the useless-if-before-free script in gnulib:
[it *does* detect brace-enclosed free statements, and has a --name=S
option to make it detect free-like functions with different names]
http://git.sv.gnu.org/gitweb/?p=gnulib.git;a=blob;f=build-aux/useless-if-before-free
Addendum:
Remove one more (in imap-send.c), spotted by Jean-Luc Herren <jlh@gmx.ch>.
Signed-off-by: Jim Meyering <meyering@redhat.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A failure in prepare_revision_walk can be caused by an
unparseable object.
Signed-off-by: Martin Koegler <mkoegler@auto.tuwien.ac.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* maint:
config: add test cases for empty value and no value config variables.
cvsimport: have default merge regex also match beginning of commit message
git clone -s documentation: force a new paragraph for the NOTE
status: suggest "git rm --cached" to unstage for initial commit
Protect get_author_ident_from_commit() from filenames in work tree
upload-pack: Initialize the exec-path.
bisect: use verbatim commit subject in the bisect log
git-cvsimport.txt: fix '-M' description.
Revert "pack-objects: only throw away data during memory pressure"
This reverts commit 9c2174350c.
Nico analyzed this and found that it does not really help, and
I agree.
By the time this gets into action and data is actively thrown
away, performance simply goes down the drain due to the data
constantly being reloaded over and over and over and over and
over and over again, to the point of virtually making no
relative progress at all. The previous behavior of enforcing
the memory limit by dynamically shrinking the window size at
least had the effect of allowing some kind of progress, even if
the end result wouldn't be optimal.
And that's the whole point behind this memory limiting feature:
allowing some progress to be made when resources are too limited
to let the repack go unbounded.
If pack-objects hits the memory limit, it deletes objects from the delta
window.
This patch makes it delete only the data, which is recomputed if needed again.
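Roughly (a sketch; the field names follow builtin-pack-objects.c
conventions but are assumptions here):

    /* under memory pressure, drop only the cached delta data;
     * the entry itself stays in the window, and the data can be
     * recomputed later if it is needed again */
    free(entry->delta_data);
    entry->delta_data = NULL;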
Signed-off-by: Martin Koegler <mkoegler@auto.tuwien.ac.at>
Acked-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git pack-objects" has the option --max-pack-size to limit the file
size of the packs to a certain number of bytes. On platforms where
the pack file size is limited by filesystem constraints, it is easy
to forget this option, and this option does not exist for "git gc"
to begin with.
So introduce a config variable to set the default maximum, but make
this overridable from the command line.
Suggested by Tor Arvid Lund.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When partitioning the work amongst threads, dividing the number of
objects by the number of threads may return 0 when there are fewer
objects than threads; this will cause the subsequent code to segfault
when accessing list[sub_size-1]. Allow some threads to have
zero objects to work on instead of barfing, while letting others
have more.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We somehow called deflateEnd() on a stream that we have called
deflateEnd() on already.
In fact, the second deflateEnd() has always been returning
Z_STREAM_ERROR. We just never checked the error return from
that particular deflateEnd().
The first one returns 0 for success. We might want to tighten
the check even more to check that.
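I.e., something along these lines (a sketch):

    int status = deflateEnd(&stream);
    if (status != Z_OK)
        die("deflateEnd failed (%d)", status);  /* would catch the double call */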
Noticed by Marco.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A NUL byte at beginning of file, or just after a newline
would provoke an invalid buf[-1] access in a few places.
* builtin-grep.c (cmd_grep): Don't access buf[-1].
* builtin-pack-objects.c (get_object_list): Likewise.
* builtin-rev-list.c (read_revisions_from_stdin): Likewise.
* bundle.c (read_bundle_header): Likewise.
* server-info.c (read_pack_info_file): Likewise.
* transport.c (insert_packed_refs): Likewise.
Signed-off-by: Jim Meyering <meyering@redhat.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A mutex and a condition variable are allocated for each thread and torn
down when the thread terminates. However, for certain workloads it can
happen that some threads are actually not started at all. In this case
we would leak the mutex and condition variable. Now we allocate them only
for those threads that are actually started.
Signed-off-by: Johannes Sixt <johannes.sixt@telecom.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the threaded pack-objects code the main thread and the worker threads
must mutually signal that they have assigned a new pack of work or have
completed their work, respectively. Previously, the code used mutexes that
were locked in one thread and unlocked from a different thread, which is
bogus (and happens to work on Linux).
Here we rectify the implementation by using condition variables: There is
one condition variable on which the main thread waits until a thread
requests new work; and each worker thread has its own condition variable
on which it waits until it is assigned new work or signaled to terminate.
As a cleanup, the worker threads are spawned only after the initial work
packages have been assigned.
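A hedged sketch of the signaling shape (field names are assumptions):

    /* worker side: wait until the main thread hands over work
     * or tells us to terminate */
    pthread_mutex_lock(&me->mutex);
    while (!me->data_ready)
        pthread_cond_wait(&me->cond, &me->mutex);
    me->data_ready = 0;
    pthread_mutex_unlock(&me->mutex);

    /* main thread side, after assigning work to 'me': */
    pthread_mutex_lock(&me->mutex);
    me->data_ready = 1;
    pthread_cond_signal(&me->cond);
    pthread_mutex_unlock(&me->mutex);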
Signed-off-by: Johannes Sixt <johannes.sixt@telecom.at>
Acked-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The code that splits the object list amongst work threads tries to do so
on "path" boundaries not to prevent good delta matches. However, in
some cases, a few paths may largely dominate the hash distribution and
it is not possible to have good load balancing without ignoring those
boundaries.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The current method consists of a master thread serving chunks of objects
to work threads when they're done with their previous chunk. The issue
is to determine the best chunk size: making it too large creates poor
load balancing, while making it too small has a negative effect on pack
size because of the increased number of chunk boundaries and poor delta
window utilization.
This patch implements a completely different approach by initially
splitting the work in large chunks uniformly amongst all threads, and
whenever a thread is done, it steals half of the remaining work from
the thread with the largest amount of unprocessed objects.
This has the advantage of greatly reducing the number of chunk boundaries
with an almost perfect load balancing.
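The stealing step looks roughly like this sketch (illustrative names,
not the actual patch):

    /* a finished thread looks for the busiest victim ... */
    struct thread_params *victim = NULL;
    for (i = 0; i < nr_threads; i++)
        if (threads[i].remaining > (victim ? victim->remaining : 0))
            victim = &threads[i];

    /* ... and takes half of its unprocessed objects */
    if (victim) {
        unsigned steal = victim->remaining / 2;
        me->list = victim->list + victim->remaining - steal;
        me->remaining = steal;
        victim->remaining -= steal;
    }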
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It is currently sorted and then walked backward. Not only does this not
feel natural for my poor brain, but it would make the next patch less
obvious as well.
So reverse the sort order, and reverse the list walking direction,
which effectively produce the exact same end result as before.
Also bring the relevant comment nearer the actual code and adjust it
accordingly, with minor additional clarifications.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The wrong value was subtracted from delta_cache_size when replacing
a cached delta, as trg_entry->delta_size was used after the old size
had been replaced by the new size.
Noticed by Linus.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The function mark_tree_uninteresting() assumed that the tree entries
are blobs when they are not trees. This is not so. Since we do
not traverse into submodules (yet), the gitlinks should be ignored.
In general, we should try to start moving away from using the
"S_ISLNK()" like things for internal git state. It was a mistake to
just assume the numbers all were same across all systems in the first
place. This implementation converts to the "object_type", and then
uses a case statement.
Noticed by Ilari on IRC.
Test script taken from an earlier version by Dscho.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It is a good idea to use pack index version 2 all the time since it has
proper protection against propagation of certain pack corruptions when
repacking which is not possible with index version 1, as demonstrated
in test t5302.
Hence this config option.
The default is still pack index version 1.
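On the parsing side this amounts to a small addition to the pack-objects
config callback, roughly (a sketch, not the verbatim patch):

    static int git_pack_config(const char *k, const char *v, void *cb)
    {
        if (!strcmp(k, "pack.indexversion")) {
            pack_idx_default_version = git_config_int(k, v);
            if (pack_idx_default_version > 2)
                die("bad pack.indexversion=%"PRIu32,
                    pack_idx_default_version);
            return 0;
        }
        return git_default_config(k, v, cb);
    }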
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This one triggers only when git-pack-objects is called with
--all-progress and --stdout which is the combination used by
git-push.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since it is now OK to pass a null pointer to display_progress() and
stop_progress(), resulting in a no-op, we can simplify the code
and remove a bunch of lines by not making those calls conditional all
the time.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This allows for better management of progress "object" existence,
as well as making the progress display implementation more independent
from its callers.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Recently I was referred to the Grammar Police as the git-pack-objects
progress message 'Deltifying %u objects' is considered to be not
proper English to at least some small but vocal segment of the
English speaking population. Technically we are applying delta
compression to these objects at this stage, so the new term is
slightly more acceptable to the Grammar Police but is also just
as correct.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Two functions, namely write_idx_file() and open_pack_file(), currently
return a const pointer. However that pointer is either a copy of the
first argument, or set to a malloc'd buffer when that first argument
is null. In the latter case it is wrong to qualify that pointer as const
since ownership of the buffer is transferred to the caller to dispose of,
and obviously the free() function is not meant to be passed const
pointers.
Making the return pointer not const causes a warning when the first
argument is returned since that argument is also marked const.
The correct thing to do is therefore to remove the const qualifiers,
avoiding the need for ugly casts only to silence some warnings.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
To keep things well layered, sha1close() now returns the file descriptor
when it doesn't close the file.
An ugly cast was added to the return of write_idx_file() to avoid a
warning. A proper fix will come separately.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
... so don't even try in that case, and save another useless line of
progress display.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Each progress can be on a single line instead of two.
[sp: Changed "Checking files out" to "Checking out files" at
Johannes Sixt's suggestion as it better explains the
action that is taking place]
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
* jc/autogc:
git-gc --auto: run "repack -A -d -l" as necessary.
git-gc --auto: restructure the way "repack" command line is built.
git-gc --auto: protect ourselves from accumulated cruft
git-gc --auto: add documentation.
git-gc --auto: move threshold check to need_to_gc() function.
repack -A -d: use --keep-unreachable when repacking
pack-objects --keep-unreachable
Export matches_pack_name() and fix its return value
Invoke "git gc --auto" from commit, merge, am and rebase.
Implement git gc --auto
This new option is meant to be used in conjunction with the
options "git repack -a -d" usually invokes the underlying
pack-objects with. When this option is given, objects unreachable
from the refs in packs named with --unpacked= option are added
to the resulting pack, in addition to the reachable objects that
are not in packs marked with *.keep files.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This adds a --threads=<n> parameter to 'git pack-objects' with
documentation.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Try to keep object with the same name hash together.
Suggested by Martin Koegler.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
With this, each thread gets repeatedly assigned the next available chunk of
objects to process until the whole list is done. The idea is to have
reasonably small chunks so that all CPUs remain busy with a minimum
number of threads for as long as there is data to process.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
this is still rough, hence it is disabled by default. You need to compile
with "make THREADED_DELTA_SEARCH=1 ..." at the moment.
Threading is done on different portions of the object list to be
deltified. This is currently done by splitting the list into n parts and
then a thread is spawned for each of them. A better method would consist
of splitting the list into many smaller parts and having the n threads
pick the next part available.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is to help threadification of the delta search code, with a bonus
consistency check.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Not all objects are subject to deltification, so avoid carrying those
along, and provide the real count to progress display.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is based on Martin Koegler's idea to keep the object that
was successfully used as the base of the delta when it is about
to fall off the edge of the window. Instead of doing so only
for the objects at the edge of the window, this makes the window
a lru eviction mechanism. If an entry is used as a base, it is
moved to the last of the queue to be evicted.
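Schematically (illustrative names, not the actual data structure):

    /* sketch: after window[j] was used as the delta base, move it to
     * the tail of the window so it is evicted last (LRU behavior) */
    struct unpacked tmp = window[j];
    memmove(window + j, window + j + 1,
            (window_count - j - 1) * sizeof(tmp));
    window[window_count - 1] = tmp;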
This is a quick-and-dirty implementation, as it keeps the original
implementation of the data structure used for the window. This
originally was done as an array, not as an array of pointers,
because it was meant to be used as a cyclic FIFO buffer and a
plain array avoids an extra pointer indirection, while its FIFOness
meant that we are not "moving" the entries like this patch does.
The runtimes from the three versions were comparable. It seems to
make the resulting chain even shorter, which can only be good.
(stock "master") 15782196 bytes
chain length = 1: 2972 objects
chain length = 2: 2651 objects
chain length = 3: 2369 objects
chain length = 4: 2121 objects
chain length = 5: 1877 objects
...
chain length = 46: 490 objects
chain length = 47: 515 objects
chain length = 48: 527 objects
chain length = 49: 570 objects
chain length = 50: 408 objects
(with your patch) 15745736 bytes (0.23% smaller)
chain length = 1: 3137 objects
chain length = 2: 2688 objects
chain length = 3: 2322 objects
chain length = 4: 2146 objects
chain length = 5: 1824 objects
...
chain length = 46: 503 objects
chain length = 47: 509 objects
chain length = 48: 536 objects
chain length = 49: 588 objects
chain length = 50: 357 objects
(with this patch) 15612086 bytes (1.08% smaller)
chain length = 1: 4831 objects
chain length = 2: 3811 objects
chain length = 3: 2964 objects
chain length = 4: 2352 objects
chain length = 5: 1944 objects
...
chain length = 46: 327 objects
chain length = 47: 353 objects
chain length = 48: 304 objects
chain length = 49: 298 objects
chain length = 50: 135 objects
[jc: this is with code simplification follow-up from Nico]
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The code favoring shallower deltas when size is equal was triggered
only when the previous delta was also cached. There should be no relation
between cached deltas and same sized deltas.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When a thin pack wants to send a tree object at "sub/dir", and
the commit that is common between the sender and the receiver
that is used as the base object has a subproject at that path,
we should not try to use the data at "sub/dir" of the base tree
as a tree object. It is not a tree to begin with, and more
importantly, the commit object there does not have to even
exist.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Not only are they unused, but the order in the function declaration
and the actual usage don't match.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>