With a large number of objects, check_object() really thrashes the pack
sliding map and the filesystem cache. It has a completely random access
pattern, especially with old objects where delta replay jumps back and
forth all over the pack.
This patch improves things by:
1) sorting objects by their offset in the pack before calling check_object(),
so the pack access pattern is linear;
2) recording the object type at add_object_entry() time, since it is
already known in most cases;
3) recording the pack offset even for preferred_base objects;
4) avoiding calls to sha1_object_info() whenever possible.
This limits pack accesses to the bare minimum and makes them perfectly
linear.
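A minimal sketch of the idea behind (1), assuming a simplified
object_entry layout (the real structure has many more fields):

    #include <stdlib.h>

    struct object_entry {
            unsigned long in_pack_offset;  /* offset of the object in its pack */
            /* ... many other fields elided ... */
    };

    /* qsort() comparator ordering entries by ascending pack offset,
     * so that check_object() then walks the pack linearly. */
    static int offset_cmp(const void *a, const void *b)
    {
            unsigned long oa = (*(struct object_entry * const *)a)->in_pack_offset;
            unsigned long ob = (*(struct object_entry * const *)b)->in_pack_offset;
            return (oa > ob) - (oa < ob);
    }

With a sorted_by_offset array of entry pointers, this is just
qsort(sorted_by_offset, nr_objects, sizeof(*sorted_by_offset), offset_cmp)
before the check_object() loop.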
In the process, check_object() was made clearer (to me at least).
Note: I thought about walking the sorted_by_offset list backward in
get_object_details(), so that if a pack happens to be larger than the
available file cache, the cache would already be populated with useful data
from the beginning of the pack by the time find_deltas() is called.
Strangely, testing (on Linux) showed absolutely no performance difference.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
... which consists of existing code split out of packed_delta_info()
so that other callers can use it as well.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
It currently aliases delta_size on the principle that reused deltas won't
go through the whole delta matching loop, hence delta_size would be unused.
That is not true when a given delta doesn't find its base in the pack,
though. And we need that information even for whole object data reuse.

In short, the current state looks awful and is prone to bugs. It happens
to work only because try_delta() tests trg_entry->delta before using
trg_entry->delta_size, but that is rather subtle, and I wondered for a
while why things worked at all... even though I'm guilty of having
introduced this abomination myself in the first place.

Let's do the sensible thing instead, with no ambiguity: give
in_pack_header_size a variable of its own. This might even help future
optimizations.
While at it, let's reorder some struct object_entry members so they all
align on their own width, regardless of the architecture or the size of
off_t. Some memory savings are to be expected from this alone.
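For illustration only (a sketch with assumed field names, not the actual
layout): ordering members by decreasing width lets each one sit at its
natural alignment with no compiler padding, whatever the size of off_t.

    #include <sys/types.h>

    struct object_entry {
            off_t in_pack_offset;               /* widest members first */
            unsigned long size;
            unsigned long delta_size;
            unsigned long in_pack_header_size;  /* its own field, no aliasing */
            unsigned int hash;                  /* 4-byte members next */
            unsigned char in_pack_type;         /* 1-byte members last */
            unsigned char preferred_base;
    };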
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Because we don't have to know the SHA1 (and hence the name) of the pack
up front anymore, let's get rid of yet another global sorted object list;
sort the objects only in write_index_file(), then compute the object list
SHA1 on the fly.
This has the advantage of saving another chunk of memory, and the sorted
list SHA1 won't be computed needlessly on servers during a fetch.
Of course the cunning plan is also to make write_index_file() much like
the function of the same name in index-pack.c, for eventual easy sharing.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This capability is practically never useful, and therefore never tested,
because it is fairly unlikely that the requested pack will already be
available. Furthermore, it is of little gain over the ability to reuse
existing pack data.

In fact, the ability to change delta type on the fly when reusing delta
data is a nice thing that has almost no cost and allows greater backward
compatibility with a client's capabilities than blindly sending a whole
pack without any discrimination.

And this "feature" is simply in the way of other cleanups.
Let's get rid of it.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Get rid of sort_comparator(), as it imposes a run-time double indirect
function call for little compile-time type-checking gain.

Also get rid of create_sorted_list(), as it has only one user, which would
be just as well off doing its sorting locally. Eventually the list of
deltifiable objects might be shorter than the whole object list.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Objects that have delta "children" from pack data reuse must consider the
depth of their deepest child when they try to deltify themselves, so that
those children don't become too deep.

However, in the context of a "thin" pack, the delta children's depth was
skipped entirely, on the presumption that the pack was always going to be
exploded on the receiving end, hence delta chain length wasn't an issue.

Now that we keep received packs as-is and reuse pack data when repacking,
those packs do contain delta chains that are longer than expected. Worse,
those delta chains may grow even longer when the pack is further repacked
into another thin pack for a subsequent transmission.

So this patch restores the strict delta depth limit even for thin packs,
and moves the check_delta_limit() usage directly into the delta loop where
it is needed. This way delta_limit can be removed from struct object_entry
as well. Oh, and the initial value was wrong too.

The progress_interval() function was moved to a more logical location in
the process.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Before finding the best delta combinations, we sort objects by name hash,
then by size, then by their position in memory. We then walk the list
backwards to test delta candidates.

We hope that a bigger size usually means a newer object. But a bigger
address in memory does not mean a newer object, so the last comparison
must be reversed.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Let's avoid some cycles when there is no base to test against, and avoid
unnecessary object lookups.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This function used to call locate_object_entry_hash() _twice_ per added
object, when calling it once suffices. Let's reorganize that code a bit.
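Roughly, the single-probe shape looks like this (a sketch following
pack-objects' conventions, where object_ix[] holds 0 for an empty slot
and entry index + 1 otherwise; the real code handles more cases):

        int ix = locate_object_entry_hash(sha1);     /* one probe only */
        if (object_ix[ix] > 0)
                return objects + object_ix[ix] - 1;  /* already added */
        /* miss: the same probe already located the free slot */
        struct object_entry *entry = objects + nr_objects++;
        /* ... initialize the new entry ... */
        object_ix[ix] = nr_objects;                  /* store index + 1 */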
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This is a fairly complete list of tests for various aspects of pack
index versions 1 and 2.

Tests on index v2 include 32-bit and 64-bit offsets, as well as a nice
demonstration of the flawed repacking integrity checks that index
version 2 intends to solve over index version 1 with its per-object CRC.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Reliance on /dev/urandom produces test vectors that are, well, random.
This can cause problems that are impossible to track down when the data
differs from one test invocation to the next.

The goal is not to test random data, but rather to have a convenient way
to create sets of large files with non-compressible and non-deltifiable
data in a reproducible way.
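Something along these lines does the trick (a sketch of the approach, not
necessarily the exact helper: a tiny seeded LCG whose byte stream is
reproducible yet compresses and deltifies poorly):

    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
            unsigned long next = 0;
            unsigned long count = (argc == 3) ? strtoul(argv[2], NULL, 0) : 1024;
            const unsigned char *seed =
                    (const unsigned char *)(argc >= 2 ? argv[1] : "seed");

            while (*seed)                  /* mix the seed string in */
                    next = next * 11 + *seed++;
            while (count--) {              /* classic rand()-style LCG step */
                    next = next * 1103515245 + 12345;
                    putchar((next >> 16) & 0xff);
            }
            return 0;
    }

The same seed string always yields the same byte stream, so a failing test
can be re-run on identical data.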
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This replaces the inflate validation with a CRC validation when reusing
data from a pack that uses index version 2. That makes repacking much
safer against corruption, and it should be a bit faster too.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This is necessary for testing the new capabilities in some automated
way without having an actual 4GB+ pack.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Initially the conversion was made using nth_packed_object_sha1(), which
made this file completely index-version agnostic. Unfortunately the
overhead was quite significant, so I went back to raw index walking, but
with selectable base and step values, which brought performance back to
roughly the original level.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
When index v2 is encountered, the CRC32 of each object is also displayed
in parentheses at the end of the line.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
With this patch, packs larger than 4GB are usable, even on a 32-bit machine
(at least on Linux). If off_t is not large enough to deal with a large
pack, then die() is called instead of attempting to use the pack and
producing garbage.

This was tested with an 8GB pack specially created for the occasion on
a 32-bit machine.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Like the previous patch, but for index-pack.
[ There is quite some code duplication between pack-objects and index-pack
for generating a pack index (and fast-import as well I suppose). This
should be reworked into a common function eventually. But not now. ]
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Pack index version 2 goes as follows:
- 8 bytes of header with signature and version.
- 256 entries of 4-byte first-level fan-out table.
- Table of sorted 20-byte SHA1 records, one for each object in the pack.
- Table of 4-byte CRC32 entries for raw pack object data.
- Table of 4-byte offset entries for objects in the pack, if the offset is
representable with 31 bits or less; otherwise it is an index into the next
table, with the top bit set.
- Table of 8-byte offset entries, indexed from the previous table, for
offsets which are 32 bits or more (optional).
- 20-byte SHA1 checksum of sorted object names.
- 20-byte SHA1 checksum of the above.
The object SHA1 table is all contiguous, so a future pack format that
contained this table directly wouldn't require big changes to the code. It
is also tighter, for slightly better cache locality when looking up entries.

Support for large packs exceeding 31 bits in size won't impose any index
size bloat on packs that fit within that range and don't need 64-bit
offsets. And because the newest objects, which are likely to be the most
frequently used, are located at the beginning of the pack, they won't pay
the 64-bit offset lookup cost at run time either, even if the pack is large.
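To make the offset encoding concrete, here is a sketch of how a reader
could resolve the n-th offset (names invented for illustration; both
tables are assumed already mapped, in network byte order):

    #include <stdint.h>
    #include <arpa/inet.h>

    static uint64_t nth_offset(const uint32_t *off32, const uint32_t *off64,
                               uint32_t n)
    {
            uint32_t entry = ntohl(off32[n]);
            if (!(entry & 0x80000000))
                    return entry;         /* offset fits in 31 bits */
            entry &= 0x7fffffff;          /* index into the 64-bit table */
            return ((uint64_t)ntohl(off64[2 * entry]) << 32)
                    | ntohl(off64[2 * entry + 1]);
    }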
Right now, an index version 2 is created only when the biggest offset in a
pack reaches 31 bits. It might be a good idea to eventually use index
version 2 unconditionally, to benefit from the CRC32 it contains when
reusing pack data while repacking.
[jc: with the "oops" fix to keep track of the last offset correctly]
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
The most important optimization for repacking performance is the ability
to reuse data from a previous pack as-is and bypass any delta or even
SHA1 computation, by simply copying the raw data from one pack to another
directly.

The problem with this is that any data corruption within a copied object
would go unnoticed, and the new (repacked) pack would be self-consistent
with its own checksum despite containing a corrupted object. This is a
real issue that has already happened at least once in the past.

In an attempt to prevent this, we validate the copied data by inflating
it and making sure no error is signaled by zlib. But this is still not
perfect: a significant portion of a pack's content consists of object
headers and references to delta base objects, which are not deflated and
therefore not validated when repacking. That makes pack data reuse less
safe than it could be.

Of course a full SHA1 validation could be performed, but that implies
full data inflating and delta replaying, which is extremely costly;
avoiding that cost is precisely what the data reuse optimization was
designed for in the first place.
So the best solution is simply to store a CRC32 of the raw pack data for
each object in the pack index. This way, any object in a pack can be
validated before being copied as-is into another pack, including its
header and any other non-deflated data.
Why CRC32 instead of a faster checksum like Adler32? Quoting Wikipedia:
Jonathan Stone discovered in 2001 that Adler-32 has a weakness for very
short messages. He wrote "Briefly, the problem is that, for very short
packets, Adler32 is guaranteed to give poor coverage of the available
bits. Don't take my word for it, ask Mark Adler. :-)" The problem is
that sum A does not wrap for short messages. The maximum value of A for
a 128-byte message is 32640, which is below the value 65521 used by the
modulo operation. An extended explanation can be found in RFC 3309,
which mandates the use of CRC32 instead of Adler-32 for SCTP, the
Stream Control Transmission Protocol.
In the context of a git pack, we have lots of small objects, especially
deltas, which are likely to be quite small and in a size range for which
Adler32 is deemed not to be sufficient. Another advantage of CRC32 is the
possibility of recovery from certain types of small corruption, such as
single-bit errors, which are the most probable kind.
OK, so what this patch does is compute the CRC32 of each object written to
a pack within pack-objects. The CRC is not written to the index yet, and it
is obviously not validated when reusing pack data yet either.
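The bookkeeping itself is straightforward with zlib (a sketch; the helper
name and where the result is stored are assumptions):

    #include <stddef.h>
    #include <zlib.h>

    /* Accumulate a CRC32 over one chunk of raw pack data. Start with
     * crc32(0L, Z_NULL, 0), zlib's initial value, and feed every chunk
     * written for the object through this; the final value is what
     * would eventually land in the object's index entry. */
    static uLong update_object_crc(uLong crc, const unsigned char *buf,
                                   size_t len)
    {
            return crc32(crc, buf, (uInt)len);
    }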
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Change a few size and offset variables to more appropriate types, then
add overflow tests on those offsets. This prevents bad data from being
generated or processed if off_t happens to not be large enough to handle
some big packs.

Better safe than sorry.
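The flavor of test involved, as a self-contained sketch (the function name
and error handling are invented for illustration):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>

    /* A 64-bit on-disk offset is only usable if it survives a round
     * trip through this platform's off_t. */
    static off_t check_pack_offset(uint64_t raw)
    {
            off_t off = (off_t)raw;
            if (off < 0 || (uint64_t)off != raw) {
                    fprintf(stderr, "fatal: pack too large for"
                            " current definition of off_t\n");
                    exit(128);
            }
            return off;
    }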
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This patch introduces the MSB() macro to obtain the desired number of
most significant bits from a given variable, independently of the
variable's type.

It is then used to better implement the overflow test on the OBJ_OFS_DELTA
base offset variable, with the property of always working correctly
regardless of the type/size of that variable.
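One plausible definition, shown here as a sketch (not necessarily the
exact macro introduced):

    #include <limits.h>

    /* Number of bits in the representation of x's type. */
    #define BITSIZEOF(x)  (CHAR_BIT * sizeof(x))

    /* Non-zero iff any of the `bits` most significant bits of x are
     * set, whatever the (unsigned) type of x happens to be. */
    #define MSB(x, bits)  ((x) & (~0ULL << (BITSIZEOF(x) - (bits))))

With this, the OBJ_OFS_DELTA decoding loop can test MSB(base_offset, 7)
before shifting another 7 bits in, and bail out on overflow whether
base_offset is 32 or 64 bits wide.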
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
The coming index format change doesn't allow the number of objects to be
determined from the size of the index file directly. Instead, let's
initialize a field in the packed_git structure with the object count when
the index is validated, since the count is always known at that point.

While at it, let's reorder some struct packed_git fields to avoid the
padding caused by the 64-bit alignment some of them need.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
As noted by Junio, --format=tar should be assumed if no format
was specified.
Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <junkio@cox.net>
* jc/merge-subtree:
A new merge strategy 'subtree'.
It is safe to merge this early, as this is a feature that a user
explicitly needs to ask for and that would not trigger otherwise. A
known issue with the current implementation is that the subtree
matching heuristic is very stupid. It could run ls-tree twice
and try to count the intersection.
Giving it a wider audience should help it get improved by
motivated volunteers.
Signed-off-by: Junio C Hamano <junkio@cox.net>
* maint:
Add Documentation/cmd-list.made to .gitignore
git-svn: fix log command to avoid infinite loop on long commit messages
git-svn: dcommit/rebase confused by patches with git-svn-id: lines
git-svn: bail out on incorrect command-line options
This bug has been around since the conversion to use the
Git.pm library back in October or November. Eventually I'd like
"git rev-list/log" to have an option to not truncate overly
long messages.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
When patches are merged from another git-svn managed branch,
they will have the git-svn-id: metadata line in them (generated
by git-format-patch).
When doing rebase or dcommit via git-svn, this would cause
git-svn to find the wrong upstream branch. We now verify
that the commit is consistent with the value in the .rev_db
file.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
"git svn log" is the only command that needs the pass-through
option in Getopt::Long; otherwise we will bail out and let the
user know something is wrong.
Also, avoid printing out unaccepted mixed-case options (that
are reserved for the command-line) such as --useSvmProps
in the usage() function.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
* 'jc/read-tree-df' (early part):
Fix switching to a branch with D/F when current branch has file D.
Fix twoway_merge that passed d/f conflict marker to merged_entry().
Fix read-tree --prefix=dir/.
unpack-trees: get rid of *indpos parameter.
unpack_trees.c: pass unpack_trees_options structure to keep_entry() as well.
add_cache_entry(): removal of file foo does not conflict with foo/bar
Fix a typo: s/Not/Note/

Some formatting fixes: use ` ` syntax for all filenames and
' ' syntax for all command-line switches.
Signed-off-by: Frank Lichtenheld <frank@lichtenheld.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This merge strategy largely piggy-backs on git-merge-recursive.
When merging trees A and B, if B corresponds to a subtree of A,
B is first adjusted to match the tree structure of A, instead of
reading the trees at the same level. This adjustment is also
done to the common ancestor tree.
If you are pulling updates from the git-gui repository into the git.git
repository, the root level of the former corresponds to the git-gui/
subdirectory of the latter. The tree object of git-gui's toplevel
is wrapped in a fake tree object whose sole entry has the name 'git-gui'
and records the object name of the true tree, before being used by
the 3-way merge code.
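As an illustration of that wrapping (a hypothetical sketch using git's
on-disk tree-entry layout, not the actual implementation):

    #include <stdio.h>
    #include <string.h>

    /* Build a one-entry tree buffer wrapping a subproject's root tree.
     * A tree entry is "<mode> <name>\0" followed by the 20-byte binary
     * object name; mode 40000 marks a subdirectory. */
    static unsigned long fake_tree(unsigned char *buf,
                                   const unsigned char *subtree_sha1)
    {
            unsigned long len = sprintf((char *)buf, "40000 git-gui");
            buf[len++] = '\0';
            memcpy(buf + len, subtree_sha1, 20);
            return len + 20;
    }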
If you are merging the other way, only the git-gui/ subtree of
git.git is extracted and merged into git-gui's toplevel.
The detection of the corresponding subtree is done by comparing the
pathnames and types at the toplevel of the tree.
Heuristics galore! That's the git way ;-).
Signed-off-by: Junio C Hamano <junkio@cox.net>
When pushing into multiple repositories with git push, via
multiple URLs in .git/remotes/$shorthand or multiple url
variables in the [remote "$shorthand"] section, we used to stop upon
the first failure. Continue the operation and report the
failures at the end.
Signed-off-by: Junio C Hamano <junkio@cox.net>
This came up on #git when somebody was getting 'unable to create
./objects/tmp_oXXXX' but swore he had write permission to that
directory. It turned out that the repository URL had been changed,
and he was accessing a repository he no longer had write
permission for.

I am not sure how much this would have helped somebody who
believed he was accessing one location while the permission of that
location was changed while he was looking the other way, though.
But giving more information on the error path is better regardless,
and it helps the next change as well.
Signed-off-by: Junio C Hamano <junkio@cox.net>
* jc/index-output:
git-read-tree --index-output=<file>
_GIT_INDEX_OUTPUT: allow plumbing to output to an alternative index file.
Conflicts:
builtin-apply.c
* cc/bisect:
git-bisect: allow bisecting with only one bad commit.
t6030: add a bit more tests to git-bisect
git-bisect: modernization
Documentation: bisect: "start" accepts one bad and many good commits
Bisect: teach "bisect start" to optionally use one bad and many good revs.
* maint:
Documentation: tighten dependency for git.{html,txt}
Makefile: iconv() on Darwin has the old interface
t5300-pack-object.sh: portability issue using /usr/bin/stat
t3200-branch.sh: small language nit
usermanual.txt: some capitalization nits
Make builtin-branch.c handle the git config file
rename_ref(): only print a warning when config-file update fails
Distinguish branches by more than case in tests.
Avoid composing too long "References" header.
cvsimport: Improve formatting consistency
cvsimport: Reorder options in documentation for better understanding
cvsimport: Improve usage error reporting
cvsimport: Improve documentation of CVSROOT and CVS module determination
cvsimport: sync usage lines with existing options
Conflicts:
Documentation/Makefile
Every time _any_ documentation page changed, cmds-*.txt files
were regenerated, which caused git.{html,txt} to be remade. Try
not to update cmds-*.txt files if their new contents match the
old ones.
Signed-off-by: Junio C Hamano <junkio@cox.net>
The libiconv on Darwin uses the old iconv() interface, whose second
argument is a const char ** instead of a char **. Add OLD_ICONV to the
Darwin variable definitions to handle this.
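A common way to paper over the difference is a conditional typedef at the
call site (a sketch of the general technique, not necessarily the exact
change made here):

    #ifdef OLD_ICONV
            typedef const char *iconv_ibp;  /* old interface: const char ** */
    #else
            typedef char *iconv_ibp;        /* modern interface: char ** */
    #endif

    /* ... so call sites can cast uniformly:
     *     iconv(cd, (iconv_ibp *)&inptr, &insz, &outptr, &outsz);
     */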
Signed-off-by: Arjen Laarhoven <arjen@yaph.org>
Acked-by: Brian Gernhardt <benji@silverinsanity.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
In the 'compare delta flavors' test, /usr/bin/stat is used to get the file
size. This isn't portable. There is already a dependency on Perl; use its
'-s' operator to get the file size.
Signed-off-by: Arjen Laarhoven <arjen@yaph.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>