Commit Graph

85 Commits

Junio C Hamano
9d24ed4f01 Merge branch 'ff/c99' into next
* ff/c99:
  Remove all void-pointer arithmetic.
2006-06-21 03:51:59 -07:00
Junio C Hamano
363b7817e0 upload-pack: prepare for sideband message support.
This does not implement sideband for propagating the status to
the downloader yet, but adds code to capture the standard error
output from the pack-objects process in preparation for sending
it off to the client when the protocol extension allows us to do
so.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-06-21 02:34:14 -07:00
Florian Forster
1d7f171c3a Remove all void-pointer arithmetic.
ANSI C99 doesn't allow void-pointer arithmetic. This patch fixes this in
various ways. Usually the strategy that required the fewest changes was used.

Signed-off-by: Florian Forster <octo@verplant.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-06-20 01:59:46 -07:00
Linus Torvalds
ce0bd64299 pack-objects: improve path grouping heuristics.
This trivial patch not only simplifies the name hashing, it actually
improves packing for both git and the kernel.

The git archive pack shrinks from 6824090->6622627 bytes (a 3%
improvement), and the kernel pack shrinks from 108756213 to 108219021 (a
mere 0.5% improvement, but still, it's an improvement from making the
hashing much simpler!)

We just create a 32-bit hash, where we "age" previous characters by two
bits, so the last characters in a filename count most. So when we then
compare the hashes in the sort routine, filenames that end the same way
sort the same way.

It takes the subdirectory into account (unless the filename is > 16
characters), but files with the same name within the same subdirectory
will obviously sort closer than files in different subdirectories.

And, incidentally (which is why I tried the hash change in the first
place, of course) builtin-rev-list.c will sort fairly close to rev-list.c.

And no, it's not a "good hash" in the sense of being secure or unique, but
that's not what we're looking for. The whole "hash" thing is misnamed
here. It's not so much a hash as a "sorting number".

[jc: rolled in a simplification of the sorting number computation
 for thin pack base objects]

Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-06-05 17:23:31 -07:00
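
A minimal sketch of the sorting number described in ce0bd64299 above (the
helper name is illustrative, not the actual pack-objects code):

	#include <stdint.h>

	/* "Age" earlier characters by two bits per step, so the last
	 * characters of the path dominate the value; with 32 bits this
	 * keeps roughly the last 16 characters, and paths that end the
	 * same way ("rev-list.c", "builtin-rev-list.c") sort close by. */
	static uint32_t name_sort_number(const char *path)
	{
		uint32_t hash = 0;
		while (*path)
			hash = (hash >> 2) + ((uint32_t)(unsigned char)*path++ << 24);
		return hash;
	}
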
Linus Torvalds
4c068a9831 tree_entry(): new tree-walking helper function
This adds a "tree_entry()" function that combines the common operation of
doing a "tree_entry_extract()" + "update_tree_entry()".

It also has a simplified calling convention, designed for simple loops
that traverse over a whole tree: the arguments are pointers to the tree
descriptor and a name_entry structure to fill in, and it returns a boolean
"true" if there was an entry left to be gotten in the tree.

This allows tree traversal with

	struct tree_desc desc;
	struct name_entry entry;

	desc.buf = tree->buffer;
	desc.size = tree->size;
	while (tree_entry(&desc, &entry)) {
		... use "entry.{path, sha1, mode, pathlen}" ...
	}

which is not only shorter than writing it out in full, it's hopefully less
error prone too.

[ It's actually a tad faster too - we don't need to recalculate the entry
  pathlength in both extract and update, but need to do it only once.
  Also, some callers can avoid doing a "strlen()" on the result, since
  it's returned as part of the name_entry structure.

  However, by now we're talking just 1% speedup on "git-rev-list --objects
  --all", and we're definitely at the point where tree walking is no
  longer the issue any more. ]

NOTE! Not everybody wants to use this new helper function, since some of
the tree walkers very much on purpose do the descriptor update separately
from the entry extraction. So the "extract + update" sequence still
remains as the core sequence; this is just a simplified interface.

We should probably add a silly two-line inline helper function for
initializing the descriptor from the "struct tree" too, just to cut down
on the noise from that common "desc" initializer.

Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-05-30 23:03:01 -07:00
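
A sketch of the two-line initialization helper suggested at the end of
4c068a9831 above (the name and struct fields are assumptions taken from
the example in that message):

	static inline void tree_desc_from_tree(struct tree_desc *desc,
	                                       struct tree *tree)
	{
		desc->buf = tree->buffer;
		desc->size = tree->size;
	}
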
Junio C Hamano
0fa6417c49 Merge branch 'np/pack'
* np/pack:
  improve depth heuristic for maximum delta size
  pack-object: slightly more efficient
  simple heuristic for further free packing improvements
2006-05-16 17:20:24 -07:00
Nicolas Pitre
c3b06a69ff improve depth heuristic for maximum delta size
This provides a linear decrement on the penalty related to delta depth
instead of being a 1/x function.  With this another 5% reduction is
observed on packs for both the GIT repo and the Linux kernel repo, as
well as fixing a pack size regression in another sample repo I have.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-05-16 13:35:46 -07:00
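
An illustrative sketch of the penalty change in c3b06a69ff above (names
and the exact budget are assumptions; only the shape of the formula is
the point):

	/* Allowed delta size against a base already sitting at depth
	 * "base_depth" in a chain limited to "max_depth" levels.
	 * Assumes 0 <= base_depth < max_depth and max_depth > 0. */
	static unsigned long max_delta_size(unsigned long budget,
	                                    unsigned int base_depth,
	                                    unsigned int max_depth)
	{
		/* old 1/x-style penalty:
		 *     return budget / (base_depth + 1);
		 * linear decrement instead: full budget at depth 0,
		 * shrinking to nothing as the chain nears max_depth. */
		return budget * (max_depth - base_depth) / max_depth;
	}
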
Junio C Hamano
d55aaefa3e Merge branch 'fix'
* fix:
  Fix pack-index issue on 64-bit platforms a bit more portably.
  Install git-send-email by default
  Fix compilation on newer NetBSD systems
  git config syntax updates
  Another config file parsing fix.
  checkout: use --aggressive when running a 3-way merge (-m).
2006-05-15 13:48:22 -07:00
Junio C Hamano
1b9bc5a7b7 Fix pack-index issue on 64-bit platforms a bit more portably.
Apparently <stdint.h> is not enough for uint32_t on OpenBSD; use
"unsigned int" -- hopefully that would stay 32-bit on every
platform we care about, at least until we update the pack-index
file format.

Our sha1 routines optimized for certain architectures use uint32_t
and expect '#include <stdint.h>' to be enough, so OpenBSD on arm or
ppc might have similar issues down the road, I dunno.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-05-15 13:01:37 -07:00
Nicolas Pitre
ff45715ce5 pack-object: slightly more efficient
Avoid creating a delta index for objects with maximum depth since they
are not going to be used as a delta base anyway.  This also reduces peak
memory usage slightly as the current object's delta index is not useful
until the next object in the loop is considered for deltification. This
saves a bit more than 1% on CPU usage.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-05-15 12:32:13 -07:00
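
A hypothetical fragment illustrating ff45715ce5 above, using the
create_delta_index() call from the diff-delta interface split in this
series (the surrounding names are made up):

	#include "delta.h"	/* create_delta_index() */

	static struct delta_index *index_if_usable_as_base(void *buf,
	                                                   unsigned long size,
	                                                   unsigned int depth,
	                                                   unsigned int max_depth)
	{
		if (depth >= max_depth)
			return NULL;	/* never used as a delta base; skip the index */
		return create_delta_index(buf, size);
	}
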
Nicolas Pitre
4e8da19581 simple heuristic for further free packing improvements
Given that the early eviction of objects with maximum delta depth
may exhibit bad packing on its own, why not consider a bias against
deep base objects in try_delta() to mitigate that bad behavior?

This patch adjusts the MAX_size allowed for a delta based on the depth of
the base object as well as enabling the early eviction of max depth
objects from the object window.  When used separately, those two things
produce slightly better and much worse results respectively.  But their
combined effect is a surprisingly significant packing improvement.

With this really simple patch the GIT repo gets nearly 15% smaller, and
the Linux kernel repo about 5% smaller, with no significantly measurable
CPU usage difference.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-05-15 12:31:21 -07:00
Junio C Hamano
975bf9cf5a Merge branch 'fix'
* fix:
  include header to define uint32_t, necessary on Mac OS X
2006-05-14 16:20:15 -07:00
Ben Clifford
d9635e9c53 include header to define uint32_t, necessary on Mac OS X
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-05-14 16:19:52 -07:00
Junio C Hamano
3a3e89b897 Merge branch 'fix'
* fix:
  Fix git-pack-objects for 64-bit platforms
2006-05-13 22:24:18 -07:00
Dennis Stosberg
66561f5a77 Fix git-pack-objects for 64-bit platforms
The offset of an object in the pack is recorded as a 4-byte integer
in the index file.  When reading the offset from the mmap'ed index
in prepare_pack_revindex(), the address is dereferenced as a long*.
This works fine as long as the long type is four bytes wide.  On
NetBSD/sparc64, however, a long is 8 bytes wide and so dereferencing
the offset produces garbage.

[jc: taking suggestion by Linus to use uint32_t]

Signed-off-by: Dennis Stosberg <dennis@stosberg.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-05-13 10:43:16 -07:00
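
An illustrative reader for the 4-byte offsets described in 66561f5a77
above (the function name is made up; the index stores offsets in network
byte order):

	#include <stdint.h>
	#include <string.h>
	#include <arpa/inet.h>

	static uint32_t read_index_offset(const void *entry)
	{
		uint32_t raw;
		memcpy(&raw, entry, sizeof(raw));	/* 4 bytes, whatever sizeof(long) is */
		return ntohl(raw);
	}
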
Junio C Hamano
4edd44725c Merge branch 'np/delta'
* np/delta:
  improve diff-delta with sparse and/or repetitive data
  tiny optimization to diff-delta
  replace adler32 with Rabin's polynomial in diff-delta
  use delta index data when finding best delta matches
  split the diff-delta interface
2006-05-09 16:40:28 -07:00
Junio C Hamano
86118bcb46 pack-object: squelch eye-candy on non-tty
One of my post-update scripts runs a git-fetch into a separate
repository and sends the results back to me (2>&1); I end up
getting this in the mail:

    Generating pack...
    Done counting 180 objects.
    Result has 131 objects.
    Deltifying 131 objects.
       0% (0/131) done^M   1% (2/131) done^M...

This makes the progress report off by default when not on a tty.

You could give --progress to force the progress report, but
let's not bother even documenting it nor mentioning it in the
usage string.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-05-05 15:24:12 -07:00
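
A sketch of the default described in 86118bcb46 above (the argument
handling is simplified and illustrative):

	#include <string.h>
	#include <unistd.h>

	static int want_progress(int argc, const char **argv)
	{
		int progress = isatty(2);	/* eye-candy only when stderr is a tty */
		int i;

		for (i = 1; i < argc; i++)
			if (!strcmp(argv[i], "--progress"))
				progress = 1;	/* undocumented override */
		return progress;
	}
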
Junio C Hamano
9a8b6a0a9d pack-objects: update size heuristics.
We used to omit delta base candidates that are much bigger than
the target, but delta size does not grow when we delete more, so
that was not a very good heuristic.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-04-27 19:31:46 -07:00
Nicolas Pitre
f6c7081aa9 use delta index data when finding best delta matches
This patch allows for computing the delta index for each base object
only once and reusing it when trying to find the best delta match.

This should set the mark and pave the way for possibly better delta
generator algorithms.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-04-26 21:23:03 -07:00
Nicolas Pitre
0dec30b978 fix pack-object buffer size
The input line has 40 _chars_ of sha1 and not 20 _bytes_. It should also
account for the space before the pathname, and the terminating \n and \0.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-04-21 00:45:10 -07:00
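
The sizing rule from 0dec30b978 above, written out as an illustrative
constant (PATH_MAX stands in for whatever pathname limit applies):

	#include <limits.h>

	/* 40 hex digits of sha1, one space, the pathname, the '\n' kept
	 * by fgets(), and the terminating '\0'. */
	#define OBJECT_LINE_MAX (40 + 1 + PATH_MAX + 1 + 1)
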
Junio C Hamano
f527cb8c38 pack-objects: do not stop at object that is "too small"
Because we sort the delta window by name-hash and then size,
hitting an object that is too small to consider as a delta base
for the current object does not mean we do not have a better
candidate in the window beyond it.

Noticed by Shawn Pearce, analyzed by Nico, Linus and me.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-04-20 23:36:22 -07:00
Junio C Hamano
5379a5c5ee Thin pack generation: optimization.
Jens Axboe noticed that recent "git push" has become very slow
since we made --thin transfer the default.

Thin pack generation to push a handful revisions that touch
relatively small number of paths out of huge tree was stupid; it
registered _everything_ from the excluded revisions.  As a
result, "Counting objects" phase was unnecessarily expensive.

This changes the logic to register the blobs and trees from
excluded revisions only for paths we are actually going to send
to the other end.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-04-07 02:08:38 -07:00
Junio C Hamano
810e152375 Merge branch 'pe/cleanup'
* pe/cleanup:
  Replace xmalloc+memset(0) with xcalloc.
  Use blob_, commit_, tag_, and tree_type throughout.
2006-04-04 13:43:00 -07:00
Junio C Hamano
4c61b7d15a Merge branch 'lt/fix-sol-pack'
* lt/fix-sol-pack:
  Use sigaction and SA_RESTART in read-tree.c; add option in Makefile.
  safe_fgets() - even more anal fgets()
  pack-objects: be incredibly anal about stdio semantics
  Fix Solaris stdio signal handling stupidities
2006-04-04 13:42:02 -07:00
Peter Eriksen
8e44025925 Use blob_, commit_, tag_, and tree_type throughout.
This replaces occurrences of "blob", "commit", "tag", and "tree",
where they're really used as type specifiers, which we already
have defined global constants for.

Signed-off-by: Peter Eriksen <s022018@student.dtu.dk>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-04-04 00:11:19 -07:00
Junio C Hamano
687dd75c95 safe_fgets() - even more anal fgets()
This is from Linus -- the previous round forgot to clear the error
after the EINTR case.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-04-03 23:42:25 -07:00
Linus Torvalds
da93d12b00 pack-objects: be incredibly anal about stdio semantics
This is the "letter of the law" version of using fgets() properly in the
face of incredibly broken stdio implementations.  We can work around the
Solaris breakage with SA_RESTART, but in case anybody else is ever that
stupid, here's the "safe" (read: "insanely anal") way to use fgets.

It probably goes without saying that I'm not terribly impressed by
Solaris libc.

Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-04-02 13:46:27 -07:00
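
A sketch of the fgets() discipline described in da93d12b00 above,
including the clearerr() fix mentioned in 687dd75c95 (the helper name
follows that commit; the body here is illustrative):

	#include <errno.h>
	#include <stdio.h>

	static char *safe_fgets(char *buf, int size, FILE *fp)
	{
		char *ret;

		for (;;) {
			errno = 0;
			ret = fgets(buf, size, fp);
			if (ret || feof(fp))
				return ret;
			if (ferror(fp) && errno == EINTR) {
				clearerr(fp);	/* forget the error and retry */
				continue;
			}
			return NULL;		/* real error */
		}
	}
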
Linus Torvalds
fb7a6531e6 Fix Solaris stdio signal handling stupidities
This uses sigaction() to install the SIGALRM handler with SA_RESTART, so
that Solaris stdio doesn't break completely when a signal interrupts a
read.

Thanks to Jason Riedy for confirming the silly Solaris signal behaviour.

Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-04-02 13:41:56 -07:00
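
A minimal sketch of installing the handler with SA_RESTART as described
in fb7a6531e6 above (handler and function names are illustrative):

	#include <signal.h>
	#include <string.h>

	static void progress_tick(int signum)
	{
		(void)signum;	/* just note that an update is due; keep it signal-safe */
	}

	static void setup_progress_signal(void)
	{
		struct sigaction sa;

		memset(&sa, 0, sizeof(sa));
		sa.sa_handler = progress_tick;
		sigemptyset(&sa.sa_mask);
		sa.sa_flags = SA_RESTART;	/* restart interrupted reads */
		sigaction(SIGALRM, &sa, NULL);
	}
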
Junio C Hamano
1b0c7174a1 tree/diff header cleanup.
Introduce tree-walk.[ch] and move "struct tree_desc" and
associated functions from various places.

Rename DIFF_FILE_CANON_MODE(mode) macro to canon_mode(mode) and
move it to cache.h.  This macro returns the canonicalized
st_mode value in the host byte order for files, symlinks and
directories -- to be compared with a tree_desc entry.
create_ce_mode(mode) in cache.h is similar but is intended to be
used for index entries (so it does not work for directories) and
returns the value in the network byte order.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-03-29 23:54:13 -08:00
Junio C Hamano
70ca1a3f85 pack-objects: simplify "thin" pack.
There was misguided logic to overly prefer using objects that
we are not going to pack as the base object.  This was
unnecessary.  It does not matter to the unpacking side where the
base object is -- it matters more to make the resulting delta
smaller.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-03-06 03:03:56 -08:00
Luck, Tony
2b74cffa91 Re-fix compilation warnings.
Commit 8fcf1ad9c6 has a
combination of a double cast and Andreas' switch to using
unsigned long ... just the latter is sufficient (and a lot less
ugly than using the double cast).

Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-03-01 17:42:00 -08:00
Timo Hirvonen
962554c616 Use setenv(), fix warnings
- Fix -Wundef -Wold-style-definition warnings
  - Make pll_free() static

[jc: original patch by Timo had another unrelated bits:

  - Use setenv() instead of putenv()

 I'm postponing that part for now.]

Signed-off-by: Timo Hirvonen <tihirvon@gmail.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-26 15:06:45 -08:00
Luck, Tony
8fcf1ad9c6 fix warning from pack-objects.c
When compiling on ia64 I get this warning (from gcc 3.4.3):

gcc -o pack-objects.o -c -g -O2 -Wall -DSHA1_HEADER='<openssl/sha.h>'  pack-objects.c
pack-objects.c: In function `pack_revindex_ix':
pack-objects.c:94: warning: cast from pointer to integer of different size

A double cast (first to long, then to int) shuts gcc up, but is there
a better way?

[jc: Andreas Ericsson suggests to use ulong instead. ]

Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-24 22:17:20 -08:00
Junio C Hamano
f0b0af1b04 Merge branches 'jc/rev-list' and 'jc/pack-thin'
* jc/rev-list:
  rev-list --objects: use full pathname to help hashing.
  rev-list --objects-edge: remove duplicated edge commit output.
  rev-list --objects-edge

* jc/pack-thin:
  pack-objects: hash basename and dirname a bit differently.
  pack-objects: allow "thin" packs to exceed depth limits
  pack-objects: use full pathname to help hashing with "thin" pack.
  pack-objects: thin pack micro-optimization.
  Use thin pack transfer in "git fetch".
  Add git-push --thin.
  send-pack --thin: use "thin pack" delta transfer.
  Thin pack - create packfile with missing delta base.

Conflicts:

	pack-objects.c (taking "next")
	send-pack.c (taking "next")
2006-02-24 21:55:23 -08:00
Junio C Hamano
eeef7135fe pack-objects: hash basename and dirname a bit differently.
...so that "Makefile"s from different revs are sorted together,
separate from "t/Makefile"s, but close enough.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-23 23:51:01 -08:00
Junio C Hamano
b76f6b6278 pack-objects: allow "thin" packs to exceed depth limits
When creating a new pack to be used in the .git/objects/pack/
directory, we carefully count the depth of deltified objects to
be reused, so that the generated pack does not exceed the
specified depth limit for runtime efficiency.  However, when we
are generating a thin pack that does not contain base objects,
such a pack can only be used during network transfer and is
expanded on the other end upon reception, so being careful and
artificially cutting the delta chain does not buy us anything
except an increased bandwidth requirement.  This patch disables the
delta chain depth limit check when reusing an existing delta.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-23 23:04:52 -08:00
Junio C Hamano
1d6b38cc76 pack-objects: use full pathname to help hashing with "thin" pack.
This applies the same hashing algorithm to the "preferred base
tree" objects and the incoming pathnames, to group the same
files from different revs together, while spreading files with
the same basename in different directories.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-22 23:07:20 -08:00
Junio C Hamano
b925410d10 pack-objects: thin pack micro-optimization.
Since we sort objects by type, hash, preferredness and then
size, after we have a delta against a preferred base, there is no
point trying a delta with a non-preferred base.  This seems to
save expensive calls to diff-delta and it also seems to save the
output space as well.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-22 21:45:45 -08:00
Junio C Hamano
183bdb2ccc pack-objects eye-candy: finishing touches.
This updates the progress output to match "every one second or
every percent whichever comes early" used by unpack-objects, as
discussed on the list.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-22 16:02:59 -08:00
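
An illustrative, poll-based version of the "every one second or every
percent" rule from 183bdb2ccc above (names and mechanism are made up,
not the actual implementation):

	#include <stdio.h>
	#include <time.h>

	static void display_progress(unsigned done, unsigned total)
	{
		static unsigned last_percent = -1;
		static time_t last_update;
		unsigned percent = total ? done * 100 / total : 100;
		time_t now = time(NULL);

		/* update on a new percent or after at least a second */
		if (percent != last_percent || now - last_update >= 1) {
			fprintf(stderr, "%4u%% (%u/%u) done\r", percent, done, total);
			last_percent = percent;
			last_update = now;
		}
	}
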
Nicolas Pitre
5e8dc750ee also adds progress when actually writing a pack
If that pack is big, it takes significant time to write and might
benefit from some more eye candy as well.  This is, however, disabled
when the pack is written to stdout, since in that case the output is
usually piped into unpack-objects, which already does its own progress
reporting.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-22 14:51:58 -08:00
Nicolas Pitre
b2504a0d2f nicer eye candies for pack-objects
This provides a stable and simpler progress reporting mechanism that
updates progress as often as possible, but no more than once a second.
The deltification phase is also made more
interesting to watch (since repacking a big repository and only seeing a
dot appear once every many seconds is rather boring and doesn't provide
much food for anticipation).

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-22 13:15:26 -08:00
Junio C Hamano
15b4d577ae pack-objects: avoid delta chains that are too long.
This tries to rework the solution for the excess delta chain
problem. An earlier commit worked around it ``cheaply'', but
repeated repacking risks unbounded growth of delta chains.

This version counts the length of delta chain we are reusing
from the existing pack, and makes sure a base object that has
sufficiently long delta chain does not get deltified.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-22 13:14:57 -08:00
Junio C Hamano
ab7cd7bb8c pack-objects: finishing touches.
This introduces --no-reuse-delta option to disable reusing of
existing delta, which is a large part of the optimization
introduced by this series.  This may become necessary if
repeated repacking makes the delta chain too long.  With this, the
output of the command becomes identical to that of the older
implementation.  But the performance suffers greatly.

It still allows reusing non-deltified representations; there is
no point uncompressing and recompressing the whole text.

It also adds a couple more statistics outputs, while squelching
them under the -q flag, which the last round forgot to do.

  $ time old-git-pack-objects --stdout >/dev/null <RL
  Generating pack...
  Done counting 184141 objects.
  Packing 184141 objects....................
  real    12m8.530s       user    11m1.450s       sys     0m57.920s
  $ time git-pack-objects --stdout >/dev/null <RL
  Generating pack...
  Done counting 184141 objects.
  Packing 184141 objects.....................
  Total 184141, written 184141 (delta 138297), reused 178833 (delta 134081)
  real    0m59.549s       user    0m56.670s       sys     0m2.400s
  $ time git-pack-objects --stdout --no-reuse-delta >/dev/null <RL
  Generating pack...
  Done counting 184141 objects.
  Packing 184141 objects.....................
  Total 184141, written 184141 (delta 134833), reused 47904 (delta 0)
  real    11m13.830s      user    9m45.240s       sys     0m44.330s

There is one remaining issue when --no-reuse-delta option is not
used.  It can create delta chains that are deeper than specified.

    A<--B<--C<--D   E   F   G

Suppose we have a delta chain A to D (A is stored in full either
in a pack or as a loose object. B is depth1 delta relative to A,
C is depth2 delta relative to B...) with loose objects E, F, G.
And we are going to pack all of them.

B, C and D are left as delta against A, B and C respectively.
So A, E, F, and G are examined for deltification, and let's say
we decided to keep E expanded, and store the rest as deltas like
this:

    E<--F<--G<--A

Oops.  We ended up making D a bit too deep, didn't we?  B, C and
D form a chain on top of A!

This is because we did not know what the final depth of A would
be, when we checked objects and decided to keep the existing
delta.  Unfortunately, deferring the decision until just before
the deltification is not an option.  To be able to make B, C,
and D candidates for deltification with the rest, we need to
know the type and final unexpanded size of them, but the major
part of the optimization comes from the fact that we do not read
the delta data to do so -- getting the final size is quite an
expensive operation.

To prevent this from happening, we should keep A from being
deltified.  But how would we tell that, cheaply?

To do this most precisely, after check_object() runs, each
object that is used as the base object of some existing delta
needs to be marked with the maximum depth of the objects we
decided to keep deltified (in this case, D is depth 3 relative
to A, so if no other delta chain that is longer than 3 based on
A exists, mark A with 3).  Then when attempting to deltify A, we
would take that number into account to see if the final delta
chain that leads to D becomes too deep.

However, this is a bit cumbersome to compute, so we would cheat
and reduce the maximum depth for A arbitrarily to depth/4 in
this implementation.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-22 13:14:57 -08:00
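
The "cheat" at the end of ab7cd7bb8c above, as an illustrative helper
(names are made up; the commit itself only says the allowed depth is
reduced arbitrarily to depth/4 for such bases):

	static unsigned int depth_budget(unsigned int max_depth,
	                                 int is_reused_delta_base)
	{
		/* exact bookkeeping of the deepest reused chain would be
		 * cumbersome, so arbitrarily allow only depth/4 for objects
		 * that already serve as a base of reused deltas */
		return is_reused_delta_base ? max_depth / 4 : max_depth;
	}
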
Junio C Hamano
3f9ac8d259 pack-objects: reuse data from existing packs.
When generating a new pack, notice if the objects we need are
already in existing packs.  If an object is stored deltified,
and its base object is also what we are going to pack, then
reuse the existing deltified representation unconditionally,
bypassing all the expensive find_deltas() and try_deltas()
calls.

Also, notice if what we are going to write out exactly matches
what is already in an existing pack (either deltified or just
compressed).  In such a case, we can just copy it instead of
going through the usual uncompressing & recompressing cycle.

Without this patch, in linux-2.6 repository with about 1500
loose objects and a single mega pack:

    $ git-rev-list --objects v2.6.16-rc3 >RL
    $ wc -l RL
    184141 RL
    $ time git-pack-objects p <RL
    Generating pack...
    Done counting 184141 objects.
    Packing 184141 objects....................
    a1fc7b3e537fcb9b3c46b7505df859f0a11e79d2

    real    12m4.323s
    user    11m2.560s
    sys     0m55.950s

With this patch, the same input:

    $ time ../git.junio/git-pack-objects q <RL
    Generating pack...
    Done counting 184141 objects.
    Packing 184141 objects.....................
    a1fc7b3e537fcb9b3c46b7505df859f0a11e79d2
    Total 184141, written 184141, reused 182441

    real    1m2.608s
    user    0m55.090s
    sys     0m1.830s

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-22 13:14:56 -08:00
Junio C Hamano
7a979d99ba Thin pack - create packfile with missing delta base.
This goes together with the "rev-list --objects-edge" change, to feed
pack-objects a list of edge commits in addition to the usual
object list.  Upon seeing such a list, pack-objects loosens the
usual "self contained delta" constraints, and can produce deltas
against blobs and trees contained in the edge commits without
storing the delta base objects themselves.

The resulting packfile is not usable in .git/objects/pack, but
is a good way to implement "delta-only" transfer.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-19 22:27:39 -08:00
Junio C Hamano
e4c9327a77 pack-objects: avoid delta chains that are too long.
This tries to rework the solution for the excess delta chain
problem. An earlier commit worked around it ``cheaply'', but
repeated repacking risks unbounded growth of delta chains.

This version counts the length of delta chain we are reusing
from the existing pack, and makes sure a base object that has
sufficiently long delta chain does not get deltified.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-17 21:48:48 -08:00
Junio C Hamano
ca5381d43e pack-objects: finishing touches.
This introduces --no-reuse-delta option to disable reusing of
existing delta, which is a large part of the optimization
introduced by this series.  This may become necessary if
repeated repacking makes the delta chain too long.  With this, the
output of the command becomes identical to that of the older
implementation.  But the performance suffers greatly.

It still allows reusing non-deltified representations; there is
no point uncompressing and recompressing the whole text.

It also adds a couple more statistics outputs, while squelching
them under the -q flag, which the last round forgot to do.

  $ time old-git-pack-objects --stdout >/dev/null <RL
  Generating pack...
  Done counting 184141 objects.
  Packing 184141 objects....................
  real    12m8.530s       user    11m1.450s       sys     0m57.920s
  $ time git-pack-objects --stdout >/dev/null <RL
  Generating pack...
  Done counting 184141 objects.
  Packing 184141 objects.....................
  Total 184141, written 184141 (delta 138297), reused 178833 (delta 134081)
  real    0m59.549s       user    0m56.670s       sys     0m2.400s
  $ time git-pack-objects --stdout --no-reuse-delta >/dev/null <RL
  Generating pack...
  Done counting 184141 objects.
  Packing 184141 objects.....................
  Total 184141, written 184141 (delta 134833), reused 47904 (delta 0)
  real    11m13.830s      user    9m45.240s       sys     0m44.330s

There is one remaining issue when --no-reuse-delta option is not
used.  It can create delta chains that are deeper than specified.

    A<--B<--C<--D   E   F   G

Suppose we have a delta chain A to D (A is stored in full either
in a pack or as a loose object. B is depth1 delta relative to A,
C is depth2 delta relative to B...) with loose objects E, F, G.
And we are going to pack all of them.

B, C and D are left as delta against A, B and C respectively.
So A, E, F, and G are examined for deltification, and let's say
we decided to keep E expanded, and store the rest as deltas like
this:

    E<--F<--G<--A

Oops.  We ended up making D a bit too deep, didn't we?  B, C and
D form a chain on top of A!

This is because we did not know what the final depth of A would
be, when we checked objects and decided to keep the existing
delta.  Unfortunately, deferring the decision until just before
the deltification is not an option.  To be able to make B, C,
and D candidates for deltification with the rest, we need to
know the type and final unexpanded size of them, but the major
part of the optimization comes from the fact that we do not read
the delta data to do so -- getting the final size is quite an
expensive operation.

To prevent this from happening, we should keep A from being
deltified.  But how would we tell that, cheaply?

To do this most precisely, after check_object() runs, each
object that is used as the base object of some existing delta
needs to be marked with the maximum depth of the objects we
decided to keep deltified (in this case, D is depth 3 relative
to A, so if no other delta chain that is longer than 3 based on
A exists, mark A with 3).  Then when attempting to deltify A, we
would take that number into account to see if the final delta
chain that leads to D becomes too deep.

However, this is a bit cumbersome to compute, so we would cheat
and reduce the maximum depth for A arbitrarily to depth/4 in
this implementation.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-17 02:11:38 -08:00
Junio C Hamano
a49dd05fd0 pack-objects: reuse data from existing packs.
When generating a new pack, notice if the objects we need are
already in existing packs.  If an object is stored deltified,
and its base object is also what we are going to pack, then
reuse the existing deltified representation unconditionally,
bypassing all the expensive find_deltas() and try_deltas()
calls.

Also, notice if what we are going to write out exactly matches
what is already in an existing pack (either deltified or just
compressed).  In such a case, we can just copy it instead of
going through the usual uncompressing & recompressing cycle.

Without this patch, in linux-2.6 repository with about 1500
loose objects and a single mega pack:

    $ git-rev-list --objects v2.6.16-rc3 >RL
    $ wc -l RL
    184141 RL
    $ time git-pack-objects p <RL
    Generating pack...
    Done counting 184141 objects.
    Packing 184141 objects....................
    a1fc7b3e537fcb9b3c46b7505df859f0a11e79d2

    real    12m4.323s
    user    11m2.560s
    sys     0m55.950s

With this patch, the same input:

    $ time ../git.junio/git-pack-objects q <RL
    Generating pack...
    Done counting 184141 objects.
    Packing 184141 objects.....................
    a1fc7b3e537fcb9b3c46b7505df859f0a11e79d2
    Total 184141, written 184141, reused 182441

    real    1m2.608s
    user    0m55.090s
    sys     0m1.830s

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-17 02:11:38 -08:00
Junio C Hamano
024701f1d8 Make pack-objects chattier.
You could give -q to squelch it, but currently no tool does it.
This would make 'git clone host:repo here' over ssh not silent
again.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-12 13:01:54 -08:00
Junio C Hamano
21fcd1bdea fetch-clone progress: finishing touches.
This makes fetch-pack also report the progress of the packing part.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-11 17:54:18 -08:00