A comment on top of create_tmpfile() describes caveats ('can have
problems on various systems (FAT, NFS, Coda)') that should apply
in this situation as well. In the end this did not solve any of my
personal problems, but it might be a useful cleanup patch
nevertheless.
Signed-off-by: Petr Baudis <pasky@suse.cz>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
User notifications are presented as 'git cmd', and code comments
are presented as '"cmd"' or 'git's cmd', rather than 'git-cmd'.
Signed-off-by: Heikki Orsila <heikki.orsila@iki.fi>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* np/maint-safer-pack:
fixup_pack_header_footer(): use nicely aligned buffer sizes
index-pack: use fixup_pack_header_footer()'s validation mode
pack-objects: use fixup_pack_header_footer()'s validation mode
improve reliability of fixup_pack_header_footer()
pack-objects: improve returned information from write_one()
When limiting the pack size, a new header has to be written to the
pack and a new SHA1 computed. Make sure that the SHA1 of what is being
read back matches the SHA1 of what was written.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Currently, this function has the potential to read corrupted pack data
from disk and give it a valid SHA1 checksum. Let's add the ability to
validate the SHA1 checksum of existing data along the way, including before
and after any arbitrary point in the pack.
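The idea in a standalone sketch (using OpenSSL's SHA1 for
illustration; the helper and its name are hypothetical, not part of
the patch): feed existing data through a running SHA1 computation
while reading it back, and compare against the expected checksum.

    #include <openssl/sha.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical helper: hash 'len' bytes from 'fp' and compare
     * against 'expected'; returns 0 if intact, -1 on corruption. */
    static int validate_sha1(FILE *fp, size_t len,
                             const unsigned char expected[20])
    {
        unsigned char buf[8192], sum[20];
        SHA_CTX ctx;

        SHA1_Init(&ctx);
        while (len > 0) {
            size_t n = len < sizeof(buf) ? len : sizeof(buf);
            if (fread(buf, 1, n, fp) != n)
                return -1;      /* short read: treat as corrupt */
            SHA1_Update(&ctx, buf, n);
            len -= n;
        }
        SHA1_Final(sum, &ctx);
        return memcmp(sum, expected, 20) ? -1 : 0;
    }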
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This function returns 0 when the current object couldn't be written
due to the pack size limit, and otherwise the current offset in the pack.
There is a problem with this approach, however: the current object
could be a delta whose base has just been written in the same
write_one() call, yet those successfully written objects are not
accounted for in the offset variable tracked by the caller. Currently
this is not an issue, but a subsequent patch will need this.
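In outline, the follow-up gives write_one() an in/out offset
parameter so recursively written delta bases are accounted for; a
hedged sketch with simplified, hypothetical types, not the actual
implementation:

    struct entry {
        struct entry *delta_base;   /* NULL if not a delta */
        unsigned long offset;       /* 0 while unwritten */
    };

    /* Hypothetical: writes 'e' at 'off', returns bytes written,
     * or 0 when the pack size limit would be exceeded. */
    static unsigned long write_entry(struct entry *e, unsigned long off);

    /* Returns 1 if the entry (and any delta base) is in the pack,
     * 0 if the size limit was hit; *off tracks all bytes written,
     * including those of bases written recursively. */
    static int write_one(struct entry *e, unsigned long *off)
    {
        unsigned long written;

        if (e->offset)
            return 1;           /* already written earlier */
        if (e->delta_base && !write_one(e->delta_base, off))
            return 0;           /* base didn't fit; neither will we */
        written = write_entry(e, *off);
        if (!written)
            return 0;
        e->offset = *off;
        *off += written;
        return 1;
    }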
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The index-pack command, when processing a thin pack, fixes up the pack
after the fact. It forgot to fsync the result, because it only did that
in one path rather than in all cases of fixup.
This moves the fsync_or_die() into the fix-up routine itself, rather
than doing it in one of the callers, so that all cases are covered.
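A minimal sketch of the shape of the fix (not the verbatim patch;
die() is git's usual error helper):

    static void fsync_or_die(int fd, const char *msg)
    {
        if (fsync(fd) < 0)
            die("%s: fsync error (%s)", msg, strerror(errno));
    }

    /* ... and at the end of fixup_pack_header_footer(), so every
     * caller of the fixup gets the fsync for free: */
    fsync_or_die(pack_fd, pack_name);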
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If we are building a thin pack and one of the base objects we would
consider for deltification is missing, it's OK: the other side already
has that base object. We may be able to get a delta from another
object, or we can simply send the new object whole (no delta).
This change allows a shallow clone to store only the objects which
are unique to it, as well as the boundary commit and its trees, but
avoids storing the boundary blobs. This special form of a shallow
clone is able to represent just the difference between two trees.
Pack objects change suggested by Nicolas Pitre.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Acked-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When printing values of type uint32_t, we should use PRIu32, and should
not assume that it is unsigned int. On 32-bit platforms, it could be
defined as unsigned long. The same caution applies to ntohl().
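For example (an illustration, not a hunk from the patch):

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t nr_objects = 1234567890u;

        /* Portable: PRIu32 always matches uint32_t. */
        printf("%" PRIu32 " objects\n", nr_objects);

        /* Not portable: assumes uint32_t is unsigned int.
         * printf("%u objects\n", nr_objects);
         */
        return 0;
    }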
Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
To do so, check_pack_crc() is moved from builtin-pack-objects.c to
pack-check.c where it is more logical to share.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This makes life much easier for the next patch, as well as being more efficient
when the revindex is actually not used.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since the pack-files are now always created stably on disk, there is no
need to sync() before pruning loose objects or old stale pack-files.
[jc: with Nico's clean-up]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This means that we can depend on packs always being stable on disk,
simplifying a lot of the object serialization worries. And unlike loose
objects, serializing pack creation IO isn't going to be a performance
killer.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* bc/repack:
Documentation/git-repack.txt: document new -A behaviour
let pack-objects do the writing of unreachable objects as loose objects
add a force_object_loose() function
builtin-gc.c: deprecate --prune, it now really has no effect
git-gc: always use -A when manually repacking
repack: modify behavior of -A option to leave unreferenced objects unpacked
Conflicts:
builtin-pack-objects.c
git_config() only had a function parameter, but no callback data
parameter. This assumes that all callback functions only modify
global variables.
With this patch, every callback gets a void * parameter, and it is hoped
that this will help the libification effort.
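A hedged sketch of the resulting shape (the struct and callback are
hypothetical; git_config() and the three-argument callback signature
are the patched API): the void * travels from git_config() into the
callback, so state can live on the caller's stack instead of in
globals.

    #include <stdlib.h>
    #include <string.h>

    struct my_opts {
        int threads;
    };

    static int my_config_cb(const char *var, const char *value, void *cb)
    {
        struct my_opts *opts = cb;

        if (!strcmp(var, "pack.threads") && value)
            opts->threads = atoi(value);
        return 0;
    }

    static int my_threads_from_config(void)
    {
        struct my_opts opts = { 0 };
        git_config(my_config_cb, &opts);    /* post-patch signature */
        return opts.threads;
    }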
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit ccc1297226 changed the behavior
of 'git repack -A' so unreachable objects are stored as loose objects.
However it did so in a naive and inefficient way by making packs
about to be deleted inaccessible and feeding their content through
'git unpack-objects'. While this works, there are major flaws with
this approach:
- It is unacceptably slow.
In the Linux kernel repository with no actual unreachable objects,
doing 'git repack -A -d' before:
    real    2m33.220s
    user    2m21.675s
    sys     0m3.510s

And with this change:

    real    0m36.849s
    user    0m24.365s
    sys     0m1.950s

For reference, here's the timing for 'git repack -a -d':

    real    0m35.816s
    user    0m22.571s
    sys     0m2.011s
This is explained by the fact that 'git unpack-objects' was used to
unpack _every_ object even if (almost) 100% of them were thrown away.
- There is a blackout period.
Between the removal of the .idx file for the redundant pack and the
completion of its unpacking, the unreachable objects become completely
inaccessible. This is not a big issue as we're talking about unreachable
objects, but some consistency is always good.
- There is no way to easily set a sensible mtime for the newly created
unreachable loose objects.
So, while having a command called "pack-objects" to perform object
unpacking looks really odd, this is probably the best compromise to be
able to solve the above issues in an efficient way.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The 'depth' variable doesn't reflect the actual maximum depth used
when other objects already depend on the current one.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When the delta data is cached in memory until it is written to a pack
file on disk, it is best to compress it right away in find_deltas() for
the following reasons:
- we have to compress that data anyway;
- this allows for caching more deltas with the same cache size limit;
- compression is potentially threaded.
This last point is especially relevant for SMP run time. For example,
repacking the Linux repo on a quad core processor using 4 threads with
all default settings produces the following results before this change:
    real    2m27.929s
    user    4m36.492s
    sys     0m3.091s

And with this change applied:

    real    2m13.787s
    user    4m37.486s
    sys     0m3.159s
So the total CPU time stayed more or less the same, but the
wall clock time is shorter.
This is however not a good thing to do when generating a pack for
network transmission. In that case, the network is most likely to
throttle the data throughput, so it is best to make find_deltas()
faster in order to start writing data ASAP since we can afford
spending more time between writes to compress the data
at that point.
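The cached-compression step itself is plain zlib; a standalone sketch
(names hypothetical) of deflating a freshly computed delta before it
enters the cache:

    #include <stdlib.h>
    #include <zlib.h>

    /* Compress 'size' bytes of delta so the in-memory cache holds
     * deflated data; returns a malloc'd buffer and sets *out_size,
     * or NULL on failure. */
    static void *deflate_delta(const void *delta, uLong size,
                               uLongf *out_size)
    {
        uLongf bound = compressBound(size);
        void *out = malloc(bound);

        if (!out)
            return NULL;
        if (compress(out, &bound, delta, size) != Z_OK) {
            free(out);
            return NULL;
        }
        *out_size = bound;
        return out;
    }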
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Having to parse the double negative !no_reuse_delta everywhere makes
my brain spend extra cycles each time.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Runtime pack access is done in the pack file mtime order since recent
packs are more likely to contain frequently used objects than old packs.
However the --max-pack-size option can produce multiple packs with
mtimes in reverse order, as newer objects are always written first.
Let's modify the mtime of later pack files (if any) so they appear older
than preceding ones when a repack creates multiple packs.
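The adjustment itself only needs utime(2); a sketch (helper name
hypothetical) where the Nth pack written gets an mtime N seconds
older than the first:

    #include <time.h>
    #include <utime.h>

    /* Make later packs look older: the Nth pack written is stamped
     * N seconds before 'base', preserving the expected access order. */
    static int age_pack(const char *path, time_t base, int nth)
    {
        struct utimbuf ut;

        ut.actime = ut.modtime = base - nth;
        return utime(path, &ut);
    }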
Signed-off-by: Nicolas Pitre <nico@cam.org>
The new option "--include-tag" allows the caller to request that
any annotated tag be included into the packfile if the object the tag
references was also included as part of the packfile.
This option can be useful on the server side of a native git transport,
where the server knows what commits it is including into a packfile to
update the client. If new annotated tags have been introduced then we
can also include them in the packfile, saving the client from needing
to request them through a second connection.
This change only introduces the backend option and provides a test.
Protocol extensions to make this useful in fetch-pack/upload-pack
are still necessary to activate the logic during transport.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* np/verify-pack:
add storage size output to 'git verify-pack -v'
fix unimplemented packed_object_info_detail() features
make verify_one_pack() a bit less wrong wrt packed_git structure
factorize revindex code out of builtin-pack-objects.c
Conflicts:
Makefile
* mk/maint-parse-careful:
receive-pack: use strict mode for unpacking objects
index-pack: introduce checking mode
unpack-objects: prevent writing of inconsistent objects
unpack-object: cache for non written objects
add common fsck error printing function
builtin-fsck: move common object checking code to fsck.c
builtin-fsck: reports missing parent commits
Remove unused object-ref code
builtin-fsck: move away from object-refs to fsck_walk
add generic, type aware object chain walker
Conflicts:
Makefile
builtin-fsck.c
No functional change. This is needed to fix verify-pack in a later patch.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This reverts commit 6c723f5e6b.
The additional message may be interesting for git developers,
but not useful for the end users, and clutters the output.
Packing objects can be done in parallel nowadays, but it's
only done if the config option pack.threads is set to a value
above 1. Because of that, the code-path used is often not the
most optimal one.
This patch adds a routine to detect the number of online CPUs
at runtime (online_cpus()). When pack.threads (or --threads=) is
given a value of 0, the number of threads is set to the number of
online CPUs. This feature is also documented.
As per Nicolas Pitre's recommendations, the default is still to
run pack-objects single-threaded unless explicitly activated,
either by configuration or by command line parameter.
The routine online_cpus() is a rework of "numcpus.c", written by
one Philip Willoughby <pgw99@doc.ic.ac.uk>. numcpus.c is in the
public domain and can presently be downloaded from
http://csgsoft.doc.ic.ac.uk/numcpus/
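On many POSIX systems the detection boils down to a sysconf() call; a
simplified sketch (the real online_cpus() also covers other
platforms):

    #include <unistd.h>

    static int online_cpus(void)
    {
    #ifdef _SC_NPROCESSORS_ONLN
        long n = sysconf(_SC_NPROCESSORS_ONLN);
        if (n > 0)
            return (int)n;
    #endif
        return 1;   /* safe fallback: assume one CPU */
    }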
Signed-off-by: Andreas Ericsson <ae@op5.se>
Signed-off-by: Brandon Casey <casey@nrlssc.navy.mil>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This change removes all obvious useless if-before-free tests.
E.g., it replaces code like this:
    if (some_expression)
        free (some_expression);

with the now-equivalent:

    free (some_expression);
It is equivalent not just because POSIX has required free(NULL)
to work for a long time, but simply because it has worked for
so long that no reasonable porting target fails the test.
Here's some evidence from nearly 1.5 years ago:
http://www.winehq.org/pipermail/wine-patches/2006-October/031544.html
FYI, the change below was prepared by running the following:
    git ls-files -z | xargs -0 \
      perl -0x3b -pi -e \
      's/\bif\s*\(\s*(\S+?)(?:\s*!=\s*NULL)?\s*\)\s+(free\s*\(\s*\1\s*\))/$2/s'
Note however, that it doesn't handle brace-enclosed blocks like
"if (x) { free (x); }". But that's ok, since there were none like
that in git sources.
Beware: if you do use the above snippet, note that it can
produce syntactically invalid C code. That happens when the
affected "if"-statement has a matching "else".
E.g., it would transform this
if (x)
free (x);
else
foo ();
into this:
free (x);
else
foo ();
There were none of those here, either.
If you're interested in automating detection of the useless
tests, you might like the useless-if-before-free script in gnulib:
[it *does* detect brace-enclosed free statements, and has a --name=S
option to make it detect free-like functions with different names]
http://git.sv.gnu.org/gitweb/?p=gnulib.git;a=blob;f=build-aux/useless-if-before-free
Addendum:
Remove one more (in imap-send.c), spotted by Jean-Luc Herren <jlh@gmx.ch>.
Signed-off-by: Jim Meyering <meyering@redhat.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A failure in prepare_revision_walk() can be caused by
an object that cannot be parsed.
Signed-off-by: Martin Koegler <mkoegler@auto.tuwien.ac.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* maint:
config: add test cases for empty value and no value config variables.
cvsimport: have default merge regex also match beginning of commit message
git clone -s documentation: force a new paragraph for the NOTE
status: suggest "git rm --cached" to unstage for initial commit
Protect get_author_ident_from_commit() from filenames in work tree
upload-pack: Initialize the exec-path.
bisect: use verbatim commit subject in the bisect log
git-cvsimport.txt: fix '-M' description.
Revert "pack-objects: only throw away data during memory pressure"
This reverts commit 9c2174350c.
Nico analyzed this and found out that it does not really help, and
I agree with his analysis.
By the time this gets into action and data is actively thrown
away, performance simply goes down the drain due to the data
constantly being reloaded over and over and over and over and
over and over again, to the point of virtually making no
relative progress at all. The previous behavior of enforcing
the memory limit by dynamically shrinking the window size at
least had the effect of allowing some kind of progress, even if
the end result wouldn't be optimal.
And that's the whole point behind this memory limiting feature:
allowing some progress to be made when resources are too limited
to let the repack go unbounded.
If pack-objects hits the memory limit, it deletes objects from the delta
window.
This patch makes it delete only the data, which is recomputed if needed again.
Signed-off-by: Martin Koegler <mkoegler@auto.tuwien.ac.at>
Acked-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git pack-objects" has the option --max-pack-size to limit the file
size of the packs to a certain amount of bytes. On platforms where
the pack file size is limited by filesystem constraints, it is easy
to forget this option, and this option does not exist for "git gc"
to begin with.
So introduce a config variable to set the default maximum, but make
this overridable from the command line.
Suggested by Tor Arvid Lund.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When partitioning the work amongst threads, dividing the number of
objects by the number of threads may return 0 when there are fewer
objects than threads; this will cause the subsequent code to segfault
when accessing list[sub_size-1]. Allow some threads to have
zero objects to work on instead of barfing, while letting others
have more.
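A sketch of the corrected split (names hypothetical): distribute the
remainder over the first threads and let trailing threads
legitimately receive zero objects.

    /* Split 'nr' objects among 'threads' workers; sub_size[i] may be
     * 0 when nr < threads, which each worker must tolerate. */
    static void partition(unsigned nr, unsigned threads,
                          unsigned *sub_size)
    {
        unsigned i, base = nr / threads, rem = nr % threads;

        for (i = 0; i < threads; i++)
            sub_size[i] = base + (i < rem ? 1 : 0);
    }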
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We somehow called deflateEnd() on a stream that we had already
called deflateEnd() on.
In fact, the second deflateEnd() has always been returning
Z_STREAM_ERROR; we just never checked the error return from
that particular deflateEnd().
The first one returns 0 for success. We might want to tighten
the check even more to verify that.
Noticed by Marco.
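A sketch of the checked call (zlib's deflateEnd() returns Z_OK on
success and Z_STREAM_ERROR on an inconsistent, e.g. already-ended,
stream):

    #include <stdio.h>
    #include <zlib.h>

    /* Call deflateEnd() exactly once per deflateInit(), and don't
     * ignore its return value as the old code did. */
    static void finish_stream(z_stream *s)
    {
        int ret = deflateEnd(s);
        if (ret != Z_OK)
            fprintf(stderr, "deflateEnd failed: %d\n", ret);
    }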
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A NUL byte at beginning of file, or just after a newline
would provoke an invalid buf[-1] access in a few places.
* builtin-grep.c (cmd_grep): Don't access buf[-1].
* builtin-pack-objects.c (get_object_list): Likewise.
* builtin-rev-list.c (read_revisions_from_stdin): Likewise.
* bundle.c (read_bundle_header): Likewise.
* server-info.c (read_pack_info_file): Likewise.
* transport.c (insert_packed_refs): Likewise.
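The pattern being fixed looks roughly like this sketch (schematic,
not one of the listed hunks): only look backwards in the buffer
after checking that there is something to look back at.

    #include <stddef.h>

    /* Strip a trailing newline: with an empty line (NUL at start of
     * file, or right after a newline) len is 0, and an unguarded
     * version would read buf[-1]. */
    static void chomp(char *buf, size_t len)
    {
        if (len > 0 && buf[len - 1] == '\n')
            buf[len - 1] = '\0';
    }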
Signed-off-by: Jim Meyering <meyering@redhat.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A mutex and a condition variable are allocated for each thread and torn
down when the thread terminates. However, for certain workloads it can
happen that some threads are actually not started at all. In this case
we would leak the mutex and condition variable. Now we allocate them only
for those threads that are actually started.
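In outline (field and function names hypothetical), the
initialization moves next to the actual thread start:

    /* Before: every slot's mutex/cond initialized up front, leaking
     * when the thread never starts. After: init only what we start. */
    for (i = 0; i < threads_actually_started; i++) {
        pthread_mutex_init(&p[i].mutex, NULL);
        pthread_cond_init(&p[i].cond, NULL);
        pthread_create(&p[i].thread, NULL, worker_fn, &p[i]);
    }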
Signed-off-by: Johannes Sixt <johannes.sixt@telecom.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the threaded pack-objects code the main thread and the worker threads
must mutually signal that they have assigned a new pack of work or have
completed their work, respectively. Previously, the code used mutexes that
were locked in one thread and unlocked from a different thread, which is
bogus (and happens to work on Linux).
Here we rectify the implementation by using condition variables: There is
one condition variable on which the main thread waits until a thread
requests new work; and each worker thread has its own condition variable
on which it waits until it is assigned new work or signaled to terminate.
As a cleanup, the worker threads are spawned only after the initial work
packages have been assigned.
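A condensed sketch of the handoff (simplified from the real code):
the worker waits on its own condition variable for work, and the
main thread signals exactly that worker.

    #include <pthread.h>

    struct worker {
        pthread_mutex_t mutex;
        pthread_cond_t cond;
        int data_ready;             /* protected by mutex */
        /* ... the work package itself ... */
    };

    /* Worker side: sleep until the main thread hands over new work. */
    static void wait_for_work(struct worker *w)
    {
        pthread_mutex_lock(&w->mutex);
        while (!w->data_ready)
            pthread_cond_wait(&w->cond, &w->mutex);
        w->data_ready = 0;
        pthread_mutex_unlock(&w->mutex);
    }

    /* Main-thread side: publish the work, wake that worker only. */
    static void assign_work(struct worker *w)
    {
        pthread_mutex_lock(&w->mutex);
        /* ... store the new work package in w ... */
        w->data_ready = 1;
        pthread_cond_signal(&w->cond);
        pthread_mutex_unlock(&w->mutex);
    }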
Signed-off-by: Johannes Sixt <johannes.sixt@telecom.at>
Acked-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The code that splits the object list amongst work threads tries to do so
on "path" boundaries so as not to prevent good delta matches. However, in
some cases, a few paths may largely dominate the hash distribution and
it is not possible to have good load balancing without ignoring those
boundaries.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The current method consists of a master thread serving chunks of objects
to work threads when they're done with their previous chunk. The issue
is to determine the best chunk size: making it too large creates poor
load balancing, while making it too small has a negative effect on pack
size because of the increased number of chunk boundaries and poor delta
window utilization.
This patch implements a completely different approach by initially
splitting the work in large chunks uniformly amongst all threads, and
whenever a thread is done then it steals half of the remaining work from
another thread with the largest amount of unprocessed objects.
This has the advantage of greatly reducing the number of chunk boundaries
with almost perfect load balancing.
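A hedged sketch of the stealing step (locking elided; all names
hypothetical, with workloads tracked as index ranges): the idle
thread picks the peer with the most unprocessed objects and takes
the latter half of its range.

    struct worker {
        unsigned start;         /* index of first unprocessed object */
        unsigned remaining;     /* number of unprocessed objects */
    };

    /* Returns 1 if work was stolen into 'w', 0 if nothing is left. */
    static int steal_work(struct worker *w, struct worker *peers, int n)
    {
        struct worker *victim = NULL;
        int i;

        for (i = 0; i < n; i++)
            if (peers[i].remaining > 1 &&
                (!victim || peers[i].remaining > victim->remaining))
                victim = &peers[i];
        if (!victim)
            return 0;
        w->start = victim->start + victim->remaining / 2;
        w->remaining = victim->remaining - victim->remaining / 2;
        victim->remaining /= 2;
        return 1;
    }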
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It is currently sorted and then walked backward. Not only does this
not feel natural for my poor brain, but it would make the next patch
less obvious as well.
So reverse the sort order, and reverse the list walking direction,
which effectively produces the exact same end result as before.
Also bring the relevant comment nearer to the actual code and adjust it
accordingly, with minor additional clarifications.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>