type_cas_lock/unlock() should be defined as a no-op for the NO_PTHREADS
build, just like all the other locking primitives.
Signed-off-by: Etienne Buira <etienne.buira@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Teach "git fsck" to inspect the contents of annotated tag objects.
* js/fsck-tag-validation:
Make sure that index-pack --strict checks tag objects
Add regression tests for stricter tag fsck'ing
fsck: check tag objects' headers
Make sure fsck_commit_buffer() does not run out of the buffer
fsck_object(): allow passing object data separately from the object itself
Refactor type_from_string() to allow continuing after detecting an error
When receiving an invalid pack stream that records the same object
twice, multiple threads got confused due to a race. We should
reject or correct such a stream upon receiving it, but that will be a
larger change.
* jk/index-pack-threading-races:
index-pack: fix race condition with duplicate bases
When fsck'ing an incoming pack, we need to fsck objects that cannot be
read via read_sha1_file() because they are not local yet (and might even
be rejected if transfer.fsckobjects is set to 'true').
For commits, there is a hack in place: we basically cache commit
objects' buffers anyway, but the same is not true, say, for tag objects.
By refactoring fsck_object() to take the object buffer and size as
optional arguments -- optional, because we still fall back to the
previous method to look at the cached commit objects if the caller
passes NULL -- we prepare the machinery for the upcoming handling of tag
objects.
The assumption that such buffers are inherently NUL terminated is now
wrong, of course, hence we pass the size of the buffer so that we can
add a sanity check later, to prevent running past the end of the buffer.
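To illustrate the shape of that interface, here is a minimal, hypothetical
sketch (the struct and function names are made up; only the buffer/size
pair with the NULL fallback comes from the description above):

    struct checked_object {
            int type;
            char *cached_buf;           /* e.g. a cached commit buffer */
            unsigned long cached_size;
    };

    static int fsck_buffer(struct checked_object *obj,
                           const char *buf, unsigned long size)
    {
            if (!buf) {                 /* caller passed NULL: fall back */
                    buf = obj->cached_buf;
                    size = obj->cached_size;
            }
            if (!buf)
                    return -1;          /* nothing to check */
            /*
             * From here on, never read past buf + size; the buffer
             * is not guaranteed to be NUL-terminated.
             */
            return 0;
    }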
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we are resolving deltas in an indexed pack, we do it by
first selecting a potential base (either one stored in full
in the pack, or one created by resolving another delta), and
then resolving any deltas that use that base. When we
resolve a particular delta, we flip its "real_type" field
from OBJ_{REF,OFS}_DELTA to whatever the real type is.
We assume that traversing the objects this way will visit
each delta only once. This is correct for most packs; we
visit the delta only when we process its base, and each
object (and thus each base) appears only once. However, if a
base object appears multiple times in the pack, we will try
to resolve any deltas based on it once for each instance.
We can detect this case by noting that a delta we are about
to resolve has already had its real_type field flipped, and
we already do so with an assert(). However, if multiple
threads are in use, we may race with another thread on
comparing and flipping the field. We need to synchronize the
access.
The right mechanism for doing this is a compare-and-swap (we
atomically "claim" the delta for our own and find out
whether our claim was successful). We can implement this
in C by using a pthread mutex to protect the operation. This
is not the fastest way of doing a compare-and-swap; many
processors provide instructions for this, and gcc and other
compilers provide builtins to access them. However, some
experiments showed that lock contention does not cause a
significant slowdown here. Adding c-a-s support for many
compilers would increase the maintenance burden (and we
would still end up including the pthread version as a
fallback).
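In sketch form, the mutex-protected compare-and-swap looks like this
(variable and function names are illustrative, not the actual
index-pack code):

    #include <pthread.h>

    static pthread_mutex_t type_cas_mutex = PTHREAD_MUTEX_INITIALIZER;

    /*
     * Returns 1 if we successfully claimed the delta, 0 if another
     * thread (or an earlier pass) already resolved it.
     */
    static int claim_delta(int *real_type, int delta_type, int resolved_type)
    {
            int claimed = 0;

            pthread_mutex_lock(&type_cas_mutex);
            if (*real_type == delta_type) {
                    *real_type = resolved_type;
                    claimed = 1;
            }
            pthread_mutex_unlock(&type_cas_mutex);
            return claimed;
    }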
Note that we only need to touch the OBJ_REF_DELTA codepath
here. An OBJ_OFS_DELTA object points to its base using an
offset, and therefore has only one base, even if another
copy of that base object appears in the pack (we do still
touch it briefly because the setting of real_type is
factored out of resolve_data).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Use xcalloc() instead of xmalloc() followed by memset() to allocate
and zero out memory because it's shorter and avoids duplicating the
function parameters.
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* jk/strip-suffix:
prepare_packed_git_one: refactor duplicate-pack check
verify-pack: use strbuf_strip_suffix
strbuf: implement strbuf_strip_suffix
index-pack: use strip_suffix to avoid magic numbers
use strip_suffix instead of ends_with in simple cases
replace has_extension with ends_with
implement ends_with via strip_suffix
add strip_suffix function
sha1_file: replace PATH_MAX buffer with strbuf in prepare_packed_git_one()
A handful of code paths had to read the commit object more than
once when showing header fields that are usually not parsed. The
internal data structure to keep track of the contents of the commit
object has been updated to reduce the need for this double-reading,
and to allow the caller to find the length of the object.
* jk/commit-buffer-length:
reuse cached commit buffer when parsing signatures
commit: record buffer length in cache
commit: convert commit->buffer to a slab
commit-slab: provide a static initializer
use get_commit_buffer everywhere
convert logmsg_reencode to get_commit_buffer
use get_commit_buffer to avoid duplicate code
use get_cached_commit_buffer where appropriate
provide helpers to access the commit buffer
provide a helper to set the commit buffer
provide a helper to free commit buffer
sequencer: use logmsg_reencode in get_message
logmsg_reencode: return const buffer
do not create "struct commit" with xcalloc
commit: push commit_index update into alloc_commit_node
alloc: include any-object allocations in alloc_report
replace dangerous uses of strbuf_attach
commit_tree: take a pointer/len pair rather than a const strbuf
Move "commit->buffer" out of the in-core commit object and keep
track of their lengths. Use this to optimize the code paths to
validate GPG signatures in commit objects.
* jk/commit-buffer-length:
reuse cached commit buffer when parsing signatures
commit: record buffer length in cache
commit: convert commit->buffer to a slab
commit-slab: provide a static initializer
use get_commit_buffer everywhere
convert logmsg_reencode to get_commit_buffer
use get_commit_buffer to avoid duplicate code
use get_cached_commit_buffer where appropriate
provide helpers to access the commit buffer
provide a helper to set the commit buffer
provide a helper to free commit buffer
sequencer: use logmsg_reencode in get_message
logmsg_reencode: return const buffer
do not create "struct commit" with xcalloc
commit: push commit_index update into alloc_commit_node
alloc: include any-object allocations in alloc_report
replace dangerous uses of strbuf_attach
commit_tree: take a pointer/len pair rather than a const strbuf
We also switch to using strbufs, which lets us avoid the
potentially dangerous combination of a manual malloc
followed by a strcpy.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
These two are almost the same function, with the exception
that has_extension only matches if there is content before
the suffix. So ends_with(".exe", ".exe") is true, but
has_extension would not be.
This distinction does not matter to any of the callers,
though, and we can just replace uses of has_extension with
ends_with. We prefer the "ends_with" name because it is more
generic, and there is nothing about the function that
requires it to be used for file extensions.
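For illustration, the pair can be implemented along these lines (a
self-contained sketch, not necessarily the exact code added by this
series):

    #include <string.h>

    static int strip_suffix(const char *str, const char *suffix, size_t *len)
    {
            size_t slen = strlen(str), suflen = strlen(suffix);

            if (slen < suflen || memcmp(str + slen - suflen, suffix, suflen))
                    return 0;
            *len = slen - suflen;       /* length with the suffix removed */
            return 1;
    }

    static int ends_with(const char *str, const char *suffix)
    {
            size_t len;
            return strip_suffix(str, suffix, &len);
    }

With this definition, ends_with(".exe", ".exe") is indeed true, matching
the description above.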
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git status", even though it is a read-only operation, tries to
update the index with refreshed lstat(2) info to optimize future
accesses to the working tree opportunistically, but this could
race with a "read-write" operation that modifies the index while it
is running. Detect such a race and avoid overwriting the index.
* ym/fix-opportunistic-index-update-race:
read-cache.c: verify index file before we opportunistically update it
wrapper.c: add xpread() similar to xread()
The error reporting from "git index-pack" has been improved to
distinguish missing objects from type errors.
* jk/index-pack-report-missing:
index-pack: distinguish missing objects from type errors
We used to disable threaded "git index-pack" on platforms without
thread-safe pread(); use a different workaround for such
platforms to allow threaded "git index-pack".
* nd/index-pack-one-fd-per-thread:
index-pack: work around thread-unsafe pread()
Most callsites which use the commit buffer try to use the
cached version attached to the commit, rather than
re-reading from disk. Unfortunately, that interface provides
only a pointer to the NUL-terminated buffer, with no
indication of the original length.
For the most part, this doesn't matter. People do not put
NULs in their commit messages, and the log code is happy to
treat it all as a NUL-terminated string. However, some code
paths do care. For example, when checking signatures, we
want to be very careful that we verify all the bytes to
avoid malicious trickery.
This patch just adds an optional "size" out-pointer to
get_commit_buffer and friends. The existing callers all pass
NULL (there did not seem to be any obvious sites where we
could avoid an immediate strlen() call, though perhaps with
some further refactoring we could).
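A sketch of a caller that does care about the exact length, using the
get_commit_buffer()/unuse_commit_buffer() helpers this series provides
(verify_all_bytes() is a hypothetical stand-in for the actual
signature check):

    int verify_all_bytes(const char *buf, unsigned long len);  /* hypothetical */

    static int check_signed_commit(struct commit *commit)
    {
            unsigned long size;
            const char *buf = get_commit_buffer(commit, &size);
            int ok = verify_all_bytes(buf, size);

            unuse_commit_buffer(commit, buf);
            return ok;
    }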
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This converts two lines into one at each caller. But more
importantly, it abstracts the concept of freeing the buffer,
which will make it easier to change later.
Note that we also need to provide a "detach" mechanism for a
tricky case in index-pack. We are passed a buffer for the
object generated by processing the incoming pack. If we are
not using --strict, we just calculate the sha1 on that
buffer and return, leaving the caller to free it. But if we
are using --strict, we actually attach that buffer to an
object, pass the object to the fsck functions, and then
detach the buffer from the object again (so that the caller
can free it as usual). In this case, we don't want to free
the buffer ourselves, but just make sure it is no longer
associated with the commit.
Note that we are making the assumption here that the
attach/detach process does not impact the buffer at all
(e.g., it is never reallocated or modified). That holds true
now, and we have no plans to change that. However, as we
abstract the commit_buffer code, this dependency becomes
less obvious. So when we detach, let's also make sure that
we get back the same buffer that we gave to the
commit_buffer code.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Enable threaded index-pack even on platforms whose pread() emulation
is not thread-safe.
* nd/index-pack-one-fd-per-thread:
index-pack: work around thread-unsafe pread()
Read-only operations such as "git status" that internally refresh
the index write the refreshed index out to disk to optimize future
accesses to the working tree, but this could race with a
"read-write" operation that modifies the index while it is running.
Detect such a race and avoid overwriting the index.
Duy raised a good point that we may need to do the same for the
normal writeout codepath, not just the "opportunistic" update
codepath. While that is true, nobody sane would run two clearly
write-oriented operations simultaneously against the same index
file, so in that sense this can be done as a less urgent follow-up
for this topic.
* ym/fix-opportunistic-index-update-race:
read-cache.c: verify index file before we opportunistically update it
wrapper.c: add xpread() similar to xread()
When we fetch a pack that does not contain an object we
expected to receive, we get an error like:
$ git init --bare tmp.git && cd tmp.git
$ git fetch ../parent.git
[...]
error: Could not read 964953ec7bcc0245cb1d0db4095455edd21a2f2e
fatal: Failed to traverse parents of commit b8247b40caf6704fe52736cdece6d6aae87471aa
error: ../parent.git did not send all necessary objects
This comes from the check_everything_connected rev-list. If
we try cloning the same repo (rather than a fetch), we end
up using index-pack's --check-self-contained-and-connected
option instead, which produces output like:
$ git clone --no-local --bare parent.git tmp.git
[...]
fatal: object of unexpected type
fatal: index-pack failed
Not only is the sha1 missing from the message, the message is also misleading.
There's no type problem, but rather a missing object
problem; we don't notice the difference because we simply
compare OBJ_BAD != OBJ_BLOB. Let's provide a different
message for this case:
$ git clone --no-local --bare parent.git tmp.git
fatal: did not receive expected object 6b00a8c61ed379d5f925a72c1987c9c52129d364
fatal: index-pack failed
While we're at it, let's also improve a true type mismatch
error to look like
fatal: object 6b00a8c61ed379d5f925a72c1987c9c52129d364: expected type blob, got tree
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Multi-threading of index-pack was disabled with c0f8654
(index-pack: Disable threading on cygwin - 2012-06-26), because
pread() implementations for Cygwin and MSYS were not thread
safe. Recent Cygwin does offer usable pread() and we enabled
multi-threading with 103d530f (Cygwin 1.7 has thread-safe pread,
2013-07-19).
Work around this problem on platforms with a thread-unsafe
pread() emulation by opening one file handle per thread, so that
parallel pread() calls, each using its own file handle, cannot
step on each other.
Also remove NO_THREAD_SAFE_PREAD that was introduced in c0f8654
because it's no longer used anywhere.
This workaround is unconditional, even for platforms with
thread-safe pread(), because the overhead is small (a few extra
file handles) and not worth fragmenting the code over.
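A sketch of the per-thread handle idea (the struct and field names are
hypothetical; the actual index-pack code wires this into its existing
per-thread data):

    #include <fcntl.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct worker {
            pthread_t thread;
            int pack_fd;        /* private descriptor for the same pack file */
    };

    static void open_private_pack_fd(struct worker *w, const char *pack_name)
    {
            w->pack_fd = open(pack_name, O_RDONLY);
            if (w->pack_fd < 0) {
                    perror(pack_name);
                    exit(1);
            }
            /*
             * Each thread now pread()s (or emulates pread()) on its own
             * descriptor, so the seek-read-seek emulation in one thread
             * cannot disturb the file offset seen by another.
             */
    }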
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Tested-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It is a common mistake to call read(2)/pread(2) and forget to
anticipate that they may fail with EAGAIN or EINTR when the
system call is interrupted.
We have the xread() helper to relieve callers of read(2) from having
to worry about it; add an xpread() helper to do the same for pread(2).
Update the caller in builtin/index-pack.c and the mmap emulation
in compat/.
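A minimal sketch of such a helper (git's actual xpread() may differ in
details, e.g. clamping very large requests):

    #include <errno.h>
    #include <unistd.h>

    static ssize_t xpread(int fd, void *buf, size_t len, off_t offset)
    {
            for (;;) {
                    ssize_t nr = pread(fd, buf, len, offset);
                    if (nr < 0 && (errno == EAGAIN || errno == EINTR))
                            continue;   /* interrupted; just retry */
                    return nr;          /* success, EOF, or a real error */
            }
    }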
Signed-off-by: Yiannis Marangos <yiannis.marangos@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We feed a string pointer that is potentially NULL to die() when
showing the message. Don't.
Noticed-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The semantics of this flag were changed in commit e1111cef23
(inline lookup_replace_object() calls), but the flag was not renamed
at the time to minimize code churn. Rename it now, and add a comment
explaining its use.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Leaving only the function definitions and declarations so that any
new topic in flight can still make use of the old functions, replace
existing uses of prefixcmp() and suffixcmp() with the new API
functions.
The change can be recreated by mechanically applying this:
$ git grep -l -e prefixcmp -e suffixcmp -- \*.c |
grep -v strbuf\\.c |
xargs perl -pi -e '
s|!prefixcmp\(|starts_with\(|g;
s|prefixcmp\(|!starts_with\(|g;
s|!suffixcmp\(|ends_with\(|g;
s|suffixcmp\(|!ends_with\(|g;
'
on the result of preparatory changes in this series.
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Many code paths will free a tree object's buffer and set it
to NULL after finishing with it in order to keep memory
usage down during a traversal. However, out of 8 sites that
do this, only one actually resets the "parsed" flag.
Those sites that don't are setting a trap for later users of
the tree object; even after calling parse_tree, the buffer
will remain NULL, causing potential segfaults.
It is not known whether this is triggerable in the current
code. Most commands do not do an in-memory traversal
followed by actually using the objects again. However, it
does not hurt to be safe for future callers.
In most cases, we can abstract this out to a
"free_tree_buffer" helper. However, there are two
exceptions:
1. The fsck code relies on the parsed flag to know that we
were able to parse the object at one point. We can
switch this to using a flag in the "flags" field.
2. The index-pack code sets the buffer to NULL but does
not free it (it is freed by a caller). We should still
unset the parsed flag here, but we cannot use our
helper, as we do not want to free the buffer.
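For reference, the helper itself is essentially this (following the
description above; buffer, size and object.parsed are git's existing
struct tree fields):

    void free_tree_buffer(struct tree *tree)
    {
            free(tree->buffer);
            tree->buffer = NULL;
            tree->size = 0;
            tree->object.parsed = 0;    /* force a later parse_tree() to re-read */
    }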
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In order to make sure the cloned repository is good, we run "rev-list
--objects --not --all $new_refs" on the repository. This is expensive
on large repositories. This patch attempts to mitigate the impact in
this special case.
In the "good" clone case, we only have one pack. If all of the
following are met, we can be sure that all objects reachable from the
new refs exist, which is the intention of running "rev-list ...":
- all refs point to an object in the pack
- there are no dangling pointers in any object in the pack
- no objects in the pack point to objects outside the pack
The second and third checks can be done with the help of index-pack
as a slight variation of the --strict check (which introduces a new
condition for the shortcut: pack transfer must be used and the number
of objects must be large enough to invoke index-pack). The first is
checked in check_everything_connected after we get an "ok" from
index-pack.
"index-pack + new checks" is still faster than the current "index-pack
+ rev-list", which is the whole point of this patch. If any of the
conditions fail, we fall back to the good old but expensive "rev-list
..". In that case it's even more expensive because we have to pay for
the new checks in index-pack. But that should only happen when the
other side is either buggy or malicious.
Cloning linux-2.6 over file://
        before          after
real    3m25.693s       2m53.050s
user    5m2.037s        4m42.396s
sys     0m13.750s       0m16.574s
A more realistic test with ssh:// over wireless
        before          after
real    11m26.629s      10m4.213s
user    5m43.196s       5m19.444s
sys     0m35.812s       0m37.630s
This shortcut is not applied to shallow clones, partly because shallow
clones should have no more objects than a usual fetch and the cost of
rev-list is acceptable, partly to avoid dealing with corner cases when
grafting is involved.
This shortcut does not apply to unpack-objects code path either
because the number of objects must be small in order to trigger that
code path.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"index-pack --verify-stat" used a few counters outside the protection
of a mutex, possibly showing incorrect numbers.
* nd/index-pack-threaded-fixes:
index-pack: guard nr_resolved_deltas reads by lock
index-pack: protect deepest_delta in multithread code
"index-pack --fix-thin" used an uninitialized value to compute the
delta depths of objects it appends to the resulting pack.
* jk/index-pack-correct-depth-fix:
index-pack: always zero-initialize object_entry list
Commit 38a4556 (index-pack: start learning to emulate
"verify-pack -v", 2011-06-03) added a "delta_depth" counter
to each "struct object_entry". Initially, all object entries
have their depth set to 0; in resolve_delta, we then set the
depth of each delta to "base + 1". Base entries never have
their depth touched, and remain at 0.
To ensure that all depths start at 0, that commit changed the
xmalloc() call that allocates the object_entry list into
xcalloc(). However, it forgot that we grow the list with
xrealloc() later. These extra entries are used when we add an
object from elsewhere to complete a thin pack. If we add a
non-delta object, its depth value will just be uninitialized
heap data.
This patch fixes it by zero-initializing the entries we add to
the objects list via xrealloc().
Signed-off-by: Jeff King <peff@peff.net>
Acked-by: Thomas Rast <trast@student.ethz.ch>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The threaded parts of index-pack increment the number of resolved
deltas in nr_resolved_deltas guarded by counter_mutex. However, the
per-thread outer loop accessed nr_resolved_deltas without any locks.
This is not wrong as such, since it doesn't matter all that much
whether we get an outdated value. However, unless someone proves that
this one lock makes all the performance difference, it would be much
cleaner to guard _all_ accesses to the variable with the lock.
The only such use is display_progress() in the threaded section (all
others are in the conclude_pack() callchain outside the threaded
part). To make it obvious that it cannot deadlock, move it out of
work_mutex.
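In sketch form, the guarded read looks like this (the counter and
mutex names follow the description above; the real code uses
index-pack's locking macros):

    #include <pthread.h>

    static pthread_mutex_t counter_mutex = PTHREAD_MUTEX_INITIALIZER;
    static unsigned nr_resolved_deltas;

    /*
     * Take the counter's own lock just long enough to read it, and do
     * so outside work_mutex so the two locks never nest.
     */
    static unsigned resolved_deltas_snapshot(void)
    {
            unsigned n;

            pthread_mutex_lock(&counter_mutex);
            n = nr_resolved_deltas;
            pthread_mutex_unlock(&counter_mutex);
            return n;
    }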
Signed-off-by: Thomas Rast <trast@student.ethz.ch>
Reviewed-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
deepest_delta is a global variable but is updated without protection
in resolve_delta(), a multithreaded function. Add a new mutex for it,
but only protect and update when it's actually used (i.e. show_stat is
non-zero).
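A sketch of the guarded update (names follow the description above;
the real code uses index-pack's mutex helpers):

    #include <pthread.h>

    static pthread_mutex_t deepest_delta_mutex = PTHREAD_MUTEX_INITIALIZER;
    static int deepest_delta;
    static int show_stat;

    static void note_delta_depth(int depth)
    {
            if (!show_stat)
                    return;             /* statistic not wanted: skip the lock */
            pthread_mutex_lock(&deepest_delta_mutex);
            if (deepest_delta < depth)
                    deepest_delta = depth;
            pthread_mutex_unlock(&deepest_delta_mutex);
    }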
Another variable that is now updated only when needed is delta_depth
in "struct object_entry", as it's only useful when show_stat is 1;
putting its update inside "if (show_stat)" makes this clearer.
The local variable "stat" is renamed to "show_stat" after being moved
to global scope, because the name "stat" conflicts with the stat(2)
syscall.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The translation of "completed with %d local objects" is put in a
48-byte buffer, which may be enough for English but not necessarily
for other translations. Convert it to use a strbuf (i.e. no hard
limit on translation length).
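In sketch form (nr_local stands in for the actual object counter;
the strbuf, gettext and progress calls are git's usual APIs):

    static void report_completion(struct progress *progress, int nr_local)
    {
            struct strbuf msg = STRBUF_INIT;

            strbuf_addf(&msg, _("completed with %d local objects"), nr_local);
            stop_progress_msg(&progress, msg.buf);
            strbuf_release(&msg);       /* no fixed-size buffer to overflow */
    }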
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The streaming index-pack introduced in 1.7.11 had a data corruption
bug, and this should fix it.
* jk/index-pack-streaming-fix:
index-pack: loop while inflating objects in unpack_data
When the unpack_data function is given a consume() callback,
it unpacks only 64K of the input at a time, feeding it to
git_inflate along with a 64K output buffer. However,
because we are inflating, there is a good chance that the
output buffer will fill before consuming all of the input.
In this case, we need to loop on git_inflate until we have
fed the whole input buffer, feeding each chunk of output to
the consume() function.
The current code does not do this, and as a result, will
fail the loop condition and trigger a fatal "serious inflate
inconsistency" error in this case.
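The required shape of the loop, sketched with plain zlib (the consume
callback is hypothetical and error handling is simplified; the real
code uses git_inflate and git's error helpers):

    #include <zlib.h>

    static int inflate_and_consume(z_stream *zs, unsigned char *in, unsigned inlen,
                                   int (*consume)(const unsigned char *, unsigned))
    {
            unsigned char out[64 * 1024];
            int status;

            zs->next_in = in;
            zs->avail_in = inlen;
            do {
                    zs->next_out = out;
                    zs->avail_out = sizeof(out);
                    status = inflate(zs, Z_NO_FLUSH);
                    if (status != Z_OK && status != Z_STREAM_END)
                            return -1;
                    /*
                     * Hand over whatever came out, then keep going: the
                     * output buffer may fill long before all input is used.
                     */
                    if (consume(out, sizeof(out) - zs->avail_out))
                            return -1;
            } while (zs->avail_in && status == Z_OK);
            return 0;
    }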
While we're rearranging the loop, let's get rid of the
extra last_out pointer. It is meant to point to the
beginning of the buffer that we feed to git_inflate, but in
practice this is always the beginning of our same 64K
buffer, because:
1. At the beginning of the loop, we are feeding the
buffer.
2. At the end of the loop, if we are using a consume()
function, we reset git_inflate's pointer to the
beginning of the buffer. If we are not using a
consume() function, then we do not care about the value
of last_out at all.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
On Cygwin, the platform pread(3) is not thread safe, just like our
own compat/ emulation, and cannot be used in the index-pack program.
* rj/platform-pread-may-be-thread-unsafe:
index-pack: Disable threading on cygwin
The Cygwin implementation of pread() is not thread-safe since, just
like the emulation provided by compat/pread.c, it uses a sequence of
seek-read-seek calls. In order to avoid failures due to thread-safety
issues, commit b038a61 disables threading when NO_PREAD is defined
(i.e. when using the emulation code in compat/pread.c).
We introduce a new build variable, NO_THREAD_SAFE_PREAD, which allows
us to disable the threaded index-pack code on Cygwin, in addition to
the above NO_PREAD case.
Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When putting whole objects in core is unavoidable, try to match the
object type and size first before actually inflating them.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This allows the caller to consume a large inflated object with a fixed
amount of memory.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
unpack_raw_entry() will not allocate and return decompressed blobs if
they are larger than core.bigFileThreshold, so sha1_object() may not
be called on those objects right away because there is no actual
content to check.
sha1_object() is called later on those objects, where we can safely
use get_data_from_pack() to retrieve blob content for checking.
However, we only do that when we definitely need the blob content,
and we often don't.
There are two cases when we may need object content. The first case is
when we find an in-repo blob with the same SHA-1: we need to do a
byte-by-byte collision test, and for that the blob must be loaded
into memory (i.e. no streaming). Normally (e.g. in fetch/pull/clone)
this does not happen because git avoids sending objects that the
client already has.
The other case is when --strict is specified and the object in
question is not a blob, which can't happen in reality because we deal
with large _blobs_ here.
Note: running --verify (or git verify-pack) on a pack from the
current repository will trigger the collision test on every object in
the pack, which effectively disables this optimization. This could
easily be worked around by
setting GIT_DIR to an imaginary place with no packs.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Enables threading in index-pack to resolve base data in parallel.
By Nguyễn Thái Ngọc Duy (3) and Ramsay Jones (1)
* nd/threaded-index-pack:
index-pack: disable threading if NO_PREAD is defined
index-pack: support multithreaded delta resolving
index-pack: restructure pack processing into three main functions
compat/win32/pthread.h: Add an pthread_key_delete() implementation
NO_PREAD simulates pread() as a sequence of seek, read, seek in
compat/pread.c. The simulation is not thread-safe because another
thread could move the file offset away in the middle of a pread()
operation. Do not allow threading in that case.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This puts delta resolving on each base on a separate thread, one base
cache per thread. Per-thread data is grouped in struct thread_local.
When running with nr_threads == 1, no pthreads calls are made. The
system essentially runs in non-thread mode.
An experiment on a 24-core Xeon machine with git.git shows that
performance does not increase proportionally to the number of cores,
so by default we use at most 3 cores. Some numbers with --threads
from 1 to 16:
1..4
real    0m8.003s     0m5.307s     0m4.321s     0m3.830s
user    0m7.720s     0m8.009s     0m8.133s     0m8.305s
sys     0m0.224s     0m0.372s     0m0.360s     0m0.360s
5..8
real    0m3.727s     0m3.604s     0m3.332s     0m3.369s
user    0m9.361s     0m9.817s     0m9.525s     0m9.769s
sys     0m0.584s     0m0.624s     0m0.540s     0m0.560s
9..12
real    0m3.036s     0m3.139s     0m3.177s     0m2.961s
user    0m8.977s     0m10.205s    0m9.737s     0m10.073s
sys     0m0.596s     0m0.680s     0m0.684s     0m0.680s
13..16
real    0m2.985s     0m2.894s     0m2.975s     0m2.971s
user    0m9.825s     0m10.573s    0m10.833s    0m11.361s
sys     0m0.788s     0m0.732s     0m0.904s     0m1.016s
On an Intel dual core and linux-2.6.git
1..4
real    2m37.789s    2m7.963s     2m0.920s     1m58.213s
user    2m28.415s    2m52.325s    2m50.176s    2m41.187s
sys     0m7.808s     0m11.181s    0m11.224s    0m10.731s
Thanks to Ramsay Jones for troubleshooting and support on the MinGW platform.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The second pass in parse_pack_objects() is split into
resolve_deltas(). The final phase, fixing a thin pack or just sealing
the pack, is now in the conclude_pack() function. Main pack processing
is now
a sequence of these functions:
- parse_pack_objects() reads through the input pack
- resolve_deltas() makes sure all deltas can be resolved
- conclude_pack() seals the output pack
- write_idx_file() writes companion index file
- final() moves the pack/index to proper place
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Reverse the order of delta application so that, by the time a delta
is applied, its base is either a non-delta or already inflated.
get_base_data() is still recursive, but because the base's data is
always ready, the inner get_base_data() call never has any chance to
call itself again.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The current find_unresolved_deltas() links all bases together in the
form of a tree, using struct base_data with a prev_base pointer to the
parent node. It then traverses down from parent to children in a
recursive manner, with all base_data allocated on the stack.
To eliminate the recursion, we simply need to put it all on the heap
(in parse_pack_objects and fix_unresolved_deltas). After that, it's a
simple non-recursive depth-first traversal loop. Each node also
maintains its own state (ofs and ref indices) to iterate over all of
its children.
So we process one node at a time:
- if it yields a new (child) node (a parent base), we link it into our
tree, then process the new node.
- if it yields nothing, the node is done; free it, go back to the
parent node, and resume whatever it was doing.
and repeat until we have no nodes left to process.
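The resulting loop has roughly this shape (hypothetical types and
helpers, not the actual index-pack code):

    #include <stdlib.h>

    struct node {
            struct node *parent;
            /* per-node iteration state (e.g. ofs/ref indices) lives here */
    };

    struct node *next_unresolved_child(struct node *cur);   /* hypothetical */

    static void traverse(struct node *root)
    {
            struct node *cur = root;

            while (cur) {
                    struct node *child = next_unresolved_child(cur);
                    if (child) {
                            child->parent = cur;    /* link into the tree */
                            cur = child;            /* descend, no recursion */
                    } else {
                            struct node *parent = cur->parent;
                            free(cur);              /* this node is done */
                            cur = parent;           /* resume in the parent */
                    }
            }
    }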
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
On Solaris the system headers define the "tmpfile" name, which'll
cause Git compiled with Sun Studio 12 Update 1 to whine about us
redefining the name:
"pack-write.c", line 76: warning: name redefined by pragma redefine_extname declared static: tmpfile (E_PRAGMA_REDEFINE_STATIC)
"sha1_file.c", line 2455: warning: name redefined by pragma redefine_extname declared static: tmpfile (E_PRAGMA_REDEFINE_STATIC)
"fast-import.c", line 858: warning: name redefined by pragma redefine_extname declared static: tmpfile (E_PRAGMA_REDEFINE_STATIC)
"builtin/index-pack.c", line 175: warning: name redefined by pragma redefine_extname declared static: tmpfile (E_PRAGMA_REDEFINE_STATIC)
Just renaming the "tmpfile" variable to "tmp_file" in the relevant
places is the easiest way to fix this.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When receive-pack & fetch-pack are run and store the pack obtained over
the wire to a local repository, they internally run the index-pack command
with the --strict option. Make sure that we reject an incoming packfile
that records objects twice, to avoid spreading such damage.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* jc/index-pack:
verify-pack: use index-pack --verify
index-pack: show histogram when emulating "verify-pack -v"
index-pack: start learning to emulate "verify-pack -v"
index-pack: a miniscule refactor
index-pack --verify: read anomalous offsets from v2 idx file
write_idx_file: need_large_offset() helper function
index-pack: --verify
write_idx_file: introduce a struct to hold idx customization options
index-pack: group the delta-base array entries also by type
Conflicts:
builtin/verify-pack.c
cache.h
sha1_file.c
* jc/zlib-wrap:
zlib: allow feeding more than 4GB in one go
zlib: zlib can only process 4GB at a time
zlib: wrap deflateBound() too
zlib: wrap deflate side of the API
zlib: wrap inflateInit2 used to accept only for gzip format
zlib: wrap remaining calls to direct inflate/inflateEnd
zlib wrapper: refactor error message formatter
Conflicts:
sha1_file.c
The size of objects we read from the repository and data we try to put
into the repository are represented in "unsigned long", so that on larger
architectures we can handle objects that weigh more than 4GB.
But the interface defined in zlib.h to communicate with inflate/deflate
limits avail_in (how many bytes of input are we calling zlib with) and
avail_out (how many bytes of output from zlib are we ready to accept)
fields effectively to 4GB by defining their type to be uInt.
In many places in our code, we allocate a large buffer (e.g. mmap'ing a
large loose object file) and tell zlib its size by assigning the size to
avail_in field of the stream, but that will truncate the high octets of
the real size. The worst part of this story is that we often pass around
z_stream (the state object used by zlib) to keep track of the number of
used bytes in input/output buffer by inspecting these two fields, which
practically limits our callchain to the same 4GB limit.
Wrap z_stream in another structure git_zstream that can express avail_in
and avail_out in unsigned long. For now, just die() when the caller gives
a size that cannot be given to a single zlib call. In later patches in the
series, we would make git_inflate() and git_deflate() internally loop to
give callers an illusion that our "improved" version of zlib interface can
operate on a buffer larger than 4GB in one go.
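A sketch of the wrapper idea (the real git_zstream also tracks totals
and a few other fields; zlib_buf_cap() here reflects the "just die()
for now" behaviour described above):

    #include <zlib.h>

    typedef struct {
            z_stream z;                 /* the underlying zlib state */
            unsigned long avail_in;     /* full-width counters, not uInt */
            unsigned long avail_out;
            unsigned char *next_in;
            unsigned char *next_out;
    } git_zstream_sketch;

    static uInt zlib_buf_cap(unsigned long len)
    {
            if (len > (uInt)-1)
                    die("cannot feed %lu bytes to zlib in one call", len);
            return (uInt)len;
    }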
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Wrap deflateInit, deflate, and deflateEnd for everybody, and the sole use
of deflateInit2 in remote-curl.c to tell the library to use gzip header
and trailer in git_deflate_init_gzip().
There is only one caller that cares about the status from deflateEnd().
Introduce git_deflate_end_gently() to let that sole caller retrieve the
status and act on it (i.e. die) for now, but we would probably want to
make inflate_end/deflate_end die when they ran out of memory and get
rid of the _gently() kind.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The histogram produced by "verify-pack -v" always had an artificial
limit of 50, but index-pack knows what the maximum delta depth is, so
we do not have to limit it.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The "index-pack" machinery already has almost enough knowledge to produce
the same output as "verify-pack -v". Fill small gaps in its bookkeeping,
and teach it to show what it knows.
Add a few more command line options that do not have to be advertised to
the end users. They will be used internally when verify-pack calls this.
The eventual goal is to remove verify-pack implementation and redo it as a
thin wrapper around the index-pack, so that we can remove the rather
expensive packed_object_info_detail() API.
This does not do the delta-chain-depth histogram yet, but that part
is easy.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Introduce a helper function that takes the type of an object and
tells whether it is a delta, as we seem to use this check in many places.
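The helper is essentially a one-liner along these lines (OBJ_REF_DELTA
and OBJ_OFS_DELTA are git's existing object-type constants):

    static inline int is_delta_type(enum object_type type)
    {
            return (type == OBJ_REF_DELTA || type == OBJ_OFS_DELTA);
    }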
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* load_file() returns a void pointer but is using 0 for the return
value
* builtin/receive-pack.c forgot to include builtin.h
* packet_trace_prefix can be marked static
* ll_merge takes a pointer for its last argument, not an int
* crc32 expects a pointer as the second argument but Z_NULL is defined
to be 0 (see 38f4d13 sparse fix: Using plain integer as NULL pointer,
2006-11-18 for more info)
Signed-off-by: Stephen Boyd <bebarino@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fix warnings from 'make check'.
- These files don't include 'builtin.h' causing sparse to complain that
cmd_* isn't declared:
builtin/clone.c:364, builtin/fetch-pack.c:797,
builtin/fmt-merge-msg.c:34, builtin/hash-object.c:78,
builtin/merge-index.c:69, builtin/merge-recursive.c:22
builtin/merge-tree.c:341, builtin/mktag.c:156, builtin/notes.c:426
builtin/notes.c:822, builtin/pack-redundant.c:596,
builtin/pack-refs.c:10, builtin/patch-id.c:60, builtin/patch-id.c:149,
builtin/remote.c:1512, builtin/remote-ext.c:240,
builtin/remote-fd.c:53, builtin/reset.c:236, builtin/send-pack.c:384,
builtin/unpack-file.c:25, builtin/var.c:75
- These files have symbols which should be marked static since they're
only file scope:
submodule.c:12, diff.c:631, replace_object.c:92, submodule.c:13,
submodule.c:14, trace.c:78, transport.c:195, transport-helper.c:79,
unpack-trees.c:19, url.c:3, url.c:18, url.c:104, url.c:117, url.c:123,
url.c:129, url.c:136, thread-utils.c:21, thread-utils.c:48
- These files redeclare symbols to be different types:
builtin/index-pack.c:210, parse-options.c:564, parse-options.c:571,
usage.c:49, usage.c:58, usage.c:63, usage.c:72
- These files use a literal integer 0 when they really should use a NULL
pointer:
daemon.c:663, fast-import.c:2942, imap-send.c:1072, notes-merge.c:362
While we're in the area, clean up some unused #includes in builtin files
(mostly exec_cmd.h).
Signed-off-by: Stephen Boyd <bebarino@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In a struct definitions, unlike functions, the prevailing style is for
the opening brace to go on the same line as the struct name, like so:
    struct foo {
            int bar;
            char *baz;
    };
Indeed, grepping for 'struct [a-z_]* {$' yields about 5 times as many
matches as 'struct [a-z_]*$'.
Linus sayeth:
Heretic people all over the world have claimed that this inconsistency
is ... well ... inconsistent, but all right-thinking people know that
(a) K&R are _right_ and (b) K&R are right.
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A pack v2 .idx file usually records an offset using the 64-bit
representation only when the offset does not fit within 31 bits, but
you can handcraft your .idx file to record a smaller offset using
64 bits, storing all zeros in the upper 4 bytes. By inspecting the
original idx file when running index-pack --verify, encode such low
offsets that do not need to be in 64-bit, but were encoded that way,
just like the original idx file, so that we can still validate the
pack/idx pair by comparing the recomputed idx file with the original.
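For reference, this is how a v2 .idx offset entry is interpreted (an
illustrative decoder, not git's code): an entry with the high bit
clear is the offset itself, while an entry with the high bit set
indexes the table of 8-byte offsets.

    #include <stdint.h>

    static uint64_t decode_offset(uint32_t entry, const uint64_t *large_offsets)
    {
            if (!(entry & 0x80000000u))
                    return entry;                       /* fits in 31 bits */
            return large_offsets[entry & 0x7fffffffu];  /* 64-bit table */
    }

A handcrafted .idx can therefore route an offset through the 64-bit
table even though it would fit in 31 bits, and --verify has to
reproduce that choice to match the original file byte-for-byte.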
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Given an existing .pack file and the .idx file that describes it,
this new mode of operation reads and re-indexes the packfile and makes
sure the existing .idx file matches the result byte-for-byte.
All the objects in the .pack file are validated during this operation as
well. Unlike verify-pack, which visits each object described in the .idx
file in the SHA-1 order, index-pack efficiently exploits the delta-chain
to avoid rebuilding the objects that are used as the base of deltified
objects over and over again while validating the objects, resulting in
much quicker verification of the .pack file and its .idx file.
This version however cannot verify a .pack/.idx pair with a handcrafted v2
index that uses 64-bit offset representation for offsets that would fit
within 31 bits. You can create such an .idx file by giving a custom
offset to the --index-version option of the command.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Remove two globals, pack_idx_default_version and pack_idx_off32_limit,
and place them in a pack_idx_option structure. Allow callers to pass
it to write_idx_file() as a parameter.
Adjust all callers to the API change.
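A sketch of the resulting option structure (the real struct in pack.h
may carry additional fields and flags):

    #include <stdint.h>

    struct pack_idx_option_sketch {
            uint32_t version;       /* replaces pack_idx_default_version */
            uint32_t off32_limit;   /* replaces pack_idx_off32_limit */
    };

Callers fill one of these in and hand it to write_idx_file() instead of
poking at the old globals.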
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Entries in the delta_base array are grouped only by the byte pattern
in the delta_base union; some of them have the 20-byte object name of
the base object (i.e. the base for REF_DELTA objects), while others
have sizeof(off_t) bytes followed by enough NULs to fill 20 bytes. The
loops that iterate through a range inside this array still need to
inspect the type of the delta and skip over false hits.
Group the entries also by type to eliminate the potential for false
hits.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Remove some stray usage of other bracket types and asterisks for the
same purpose.
Signed-off-by: Štěpán Němec <stepnem@gmail.com>
Acked-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed integer overflow is not defined in C, so do not depend on it.
This fixes a problem with GCC 4.4.0 and -O3 where the optimizer would
consider "consumed_bytes > consumed_bytes + bytes" as a constant
expression, and never execute the die()-call.
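One portable way to write such a check is to test before adding, so
that no overflow (signed or unsigned) ever happens; a generic sketch,
not necessarily the exact fix applied here:

    #include <stdio.h>
    #include <stdlib.h>

    static unsigned long checked_add(unsigned long a, unsigned long b)
    {
            if (b > (unsigned long)-1 - a) {    /* a + b would wrap */
                    fprintf(stderr, "fatal: pack offset counter overflow\n");
                    exit(128);
            }
            return a + b;
    }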
Signed-off-by: Erik Faye-Lund <kusmabite@gmail.com>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* jn/paginate-fix:
t7006 (pager): add missing TTY prerequisites
merge-file: run setup_git_directory_gently() sooner
var: run setup_git_directory_gently() sooner
ls-remote: run setup_git_directory_gently() sooner
index-pack: run setup_git_directory_gently() sooner
config: run setup_git_directory_gently() sooner
bundle: run setup_git_directory_gently() sooner
apply: run setup_git_directory_gently() sooner
grep: run setup_git_directory_gently() sooner
shortlog: run setup_git_directory_gently() sooner
git wrapper: allow setup_git_directory_gently() be called earlier
setup: remember whether repository was found
git wrapper: introduce startup_info struct
Conflicts:
builtin/index-pack.c
* jn/maint-setup-fix:
setup: split off a function to handle ordinary .git directories
Revert "rehabilitate 'git index-pack' inside the object store"
setup: do not forget working dir from subdir of gitdir
t4111 (apply): refresh index before applying patches to it
setup: split off get_device_or_die helper
setup: split off a function to handle hitting ceiling in repo search
setup: split off code to handle stumbling upon a repository
setup: split off a function to checks working dir for .git file
setup: split off $GIT_DIR-set case from setup_git_directory_gently
tests: try git apply from subdir of toplevel
t1501 (rev-parse): clarify
Conflicts:
builtin/index-pack.c
index-pack already runs a repository search unconditionally; running
such a search earlier is not risky and ensures GIT_DIR will be set
correctly if the configuration needs to be accessed from
run_builtin().
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Without this, attempting to index a pack containing objects that have been
replaced results in a fatal error that looks like:
fatal: SHA1 COLLISION FOUND WITH <replaced-object> !
Signed-off-by: Nelson Elhage <nelhage@ksplice.com>
Acked-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Now setup_git_directory_gently behaves sanely even from subdirs of
.git, so simplify index-pack by no longer protecting against that.
This reverts commit a672ea6ac5
(excluding tests).
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This shrinks the top-level directory a bit, and makes it much more
pleasant to use auto-completion on the thing. Instead of
[torvalds@nehalem git]$ em buil<tab>
Display all 180 possibilities? (y or n)
[torvalds@nehalem git]$ em builtin-sh
builtin-shortlog.c builtin-show-branch.c builtin-show-ref.c
builtin-shortlog.o builtin-show-branch.o builtin-show-ref.o
[torvalds@nehalem git]$ em builtin-shor<tab>
builtin-shortlog.c builtin-shortlog.o
[torvalds@nehalem git]$ em builtin-shortlog.c
you get
[torvalds@nehalem git]$ em buil<tab> [type]
builtin/ builtin.h
[torvalds@nehalem git]$ em builtin [auto-completes to]
[torvalds@nehalem git]$ em builtin/sh<tab> [type]
shortlog.c shortlog.o show-branch.c show-branch.o show-ref.c show-ref.o
[torvalds@nehalem git]$ em builtin/sho [auto-completes to]
[torvalds@nehalem git]$ em builtin/shor<tab> [type]
shortlog.c shortlog.o
[torvalds@nehalem git]$ em builtin/shortlog.c
which doesn't seem all that different, but not having that annoying
break in "Display all 180 possibilities?" is quite a relief.
NOTE! If you do this in a clean tree (no object files etc), or using an
editor that has auto-completion rules that ignore '*.o' files, you
won't see that annoying 'Display all 180 possibilities?' message - it
will just show the choices instead. I think bash has some cut-off
around 100 choices or something.
So the reason I see this is that I'm using an odd editor, and thus
don't have the rules to cut down on auto-completion. But you can
simulate that by using 'ls' instead, or something similar.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>