#!/bin/sh
test_description='handling of duplicate objects in incoming packfiles'
TEST_PASSES_SANITIZE_LEAK=true
. ./test-lib.sh
. "$TEST_DIRECTORY"/lib-pack.sh
test_expect_success 'setup' '
	test_oid_cache <<-EOF
	lo_oid sha1:e68fe8129b546b101aee9510c5328e7f21ca1d18
	lo_oid sha256:471819e8c52bf11513f100b2810a8aa0622d5cd3d1c913758a071dd4b3bad8fe

	missing_oid sha1:e69d000000000000000000000000000000000000
	missing_oid sha256:4720000000000000000000000000000000000000000000000000000000000000
	EOF
'
# The sha1s we have in our pack. It's important that these have the same
# starting byte, so that they end up in the same fanout section of the index.
# That lets us make sure we are exercising the binary search with both sets.
LO_SHA1=$(test_oid lo_oid)
HI_SHA1=$EMPTY_BLOB
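
# Sanity-check sketch (an addition, not part of the original test): the two
# object names above share a leading byte -- "e6" for the SHA-1 pair and
# "47" for the SHA-256 pair -- which is what lands them in the same fanout
# bucket of the .idx. Uncomment to verify:
#	test "$(echo "$LO_SHA1" | cut -c1-2)" = "$(echo "$HI_SHA1" | cut -c1-2)"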
# And here's a "missing sha1" which will produce failed lookups. It must also
# be in the same fanout section, and should be between the two (so that during
# our binary search, we are sure to end up looking at one or the other of the
# duplicate runs).
MISSING_SHA1=$(test_oid missing_oid)
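
# Cross-check sketch (an addition, not in the original test): the three names
# must sort LO < MISSING < HI for the "between the two duplicate runs"
# property to hold. Uncomment to verify:
#	printf "%s\n" "$LO_SHA1" "$MISSING_SHA1" "$HI_SHA1" | sort -c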
# git will never intentionally create packfiles with
# duplicate objects, so we have to construct them by hand.
#
# $1 is the name of the packfile to create
#
# $2 is the number of times to duplicate each object
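#
# pack_header, pack_obj, and pack_trailer are helpers sourced from
# lib-pack.sh: as used here, pack_header emits a version-2 pack header with
# the given object count, pack_obj appends a canned deflated copy of a known
# object, and pack_trailer appends the pack checksum.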
create_pack () {
	pack_header "$((2 * $2))" >"$1" &&
	for i in $(test_seq 1 "$2"); do
		pack_obj $LO_SHA1 &&
		pack_obj $HI_SHA1
	done >>"$1" &&
	pack_trailer "$1"
}
# double-check that create_pack actually works
test_expect_success 'pack with no duplicates' '
	create_pack no-dups.pack 1 &&
	git index-pack --stdin <no-dups.pack
'
test_expect_success 'index-pack will allow duplicate objects by default' '
	clear_packs &&
	create_pack dups.pack 100 &&
	git index-pack --stdin <dups.pack
'
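
# A note on the expected sizes below (an addition, inferred from the vectors
# themselves): the LO object is a two-byte blob and HI is the empty blob, so
# --batch-check should report "blob 2" and "blob 0", and "missing" for the
# absent name.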
test_expect_success 'create batch-check test vectors' '
	cat >input <<-EOF &&
	$LO_SHA1
	$HI_SHA1
	$MISSING_SHA1
	EOF
	cat >expect <<-EOF
	$LO_SHA1 blob 2
	$HI_SHA1 blob 0
	$MISSING_SHA1 missing
	EOF
'
test_expect_success 'lookup in duplicated pack' '
	git cat-file --batch-check <input >actual &&
	test_cmp expect actual
'
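
# An explanatory note (ours): --strict tells index-pack to apply extra
# validity checks, under which a pack containing duplicate objects is
# rejected; the cat-file -e probe (exit code 1 means missing) confirms the
# objects from the rejected pack were never admitted to the repository.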
test_expect_success 'index-pack can reject packs with duplicates' '
	clear_packs &&
	create_pack dups.pack 2 &&
	test_must_fail git index-pack --strict --stdin <dups.pack &&
	test_expect_code 1 git cat-file -e $LO_SHA1
'
test_done