# Support routines for hand-crafting weird or malicious packs.
#
# You can make a complete pack like:
#
#	pack_header 2 >foo.pack &&
#	pack_obj e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 >>foo.pack &&
#	pack_obj e68fe8129b546b101aee9510c5328e7f21ca1d18 >>foo.pack &&
#	pack_trailer foo.pack

# Print the big-endian 4-byte octal representation of $1
uint32_octal () {
	n=$1
	printf '\\%o' $(($n / 16777216)); n=$((n % 16777216))
	printf '\\%o' $(($n / 65536)); n=$((n % 65536))
	printf '\\%o' $(($n / 256)); n=$((n % 256))
	printf '\\%o' $(($n ));
}
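# For example, "uint32_octal 260" emits the string "\0\0\1\4": 260 is
# 0x00000104, and each big-endian byte is printed as a backslash-octal escape.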

# Print the big-endian 4-byte binary representation of $1
uint32_binary () {
	printf "$(uint32_octal "$1")"
}
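# uint32_binary is what actually lands in the pack; e.g. "uint32_binary 2"
# writes the four raw bytes 00 00 00 02 (used below for the object count).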

# Print a pack header, version 2, for a pack with $1 objects
pack_header () {
	printf 'PACK' &&
	printf '\0\0\0\2' &&
	uint32_binary "$1"
}
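# "pack_header 2" therefore emits the 12-byte header: the literal magic
# "PACK", the version field 00 00 00 02, and a big-endian object count of 2.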

# Print the pack data for object $1, as a delta against object $2 (or as a full
# object if $2 is missing or empty). The output is suitable for including
# directly in the packfile, and represents the entirety of the object entry.
# Doing this on the fly (especially picking your deltas) is quite tricky, so we
# have hardcoded some well-known objects. See the case statements below for the
# complete list.
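#
# Roughly, each hardcoded entry below is: one type-and-size byte (e.g. \062 is
# 0x32, a type-3 blob of size 2; \165 is 0x75, a type-7 REF_DELTA carrying a
# 5-byte delta), then for deltas the 20- or 32-byte base object name, and
# finally the zlib-deflated payload (which starts with \170\234).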
pack_obj () {
	case "$1" in
	# empty blob
	$EMPTY_BLOB)
		case "$2" in
		'')
			printf '\060\170\234\003\0\0\0\0\1'
			return
			;;
		esac
		;;

	# blob containing "\7\76"
	$(test_oid packlib_7_76))
		case "$2" in
		'')
			printf '\062\170\234\143\267\3\0\0\116\0\106'
			return
			;;
		01d7713666f4de822776c7622c10f1b07de280dc)
			printf '\165\1\327\161\66\146\364\336\202\47\166' &&
			printf '\307\142\54\20\361\260\175\342\200\334\170' &&
			printf '\234\143\142\142\142\267\003\0\0\151\0\114'
			return
			;;
		37c8e2c15bb22b912e59b43fd51a4f7e9465ed0b5084c5a1411d991cbe630683)
			printf '\165\67\310\342\301\133\262\53\221\56\131' &&
			printf '\264\77\325\32\117\176\224\145\355\13\120' &&
			printf '\204\305\241\101\35\231\34\276\143\6\203\170' &&
			printf '\234\143\142\142\142\267\003\0\0\151\0\114'
			return
			;;
		esac
		;;

	# blob containing "\7\0"
	$(test_oid packlib_7_0))
		case "$2" in
		'')
			printf '\062\170\234\143\147\0\0\0\20\0\10'
			return
			;;
		e68fe8129b546b101aee9510c5328e7f21ca1d18)
			printf '\165\346\217\350\22\233\124\153\20\32\356' &&
			printf '\225\20\305\62\216\177\41\312\35\30\170\234' &&
			printf '\143\142\142\142\147\0\0\0\53\0\16'
			return
			;;
		5d8e6fc40f2dab00e6983a48523fe57e621f46434cb58dbd4422fba03380d886)
			printf '\165\135\216\157\304\17\55\253\0\346\230\72' &&
			printf '\110\122\77\345\176\142\37\106\103\114\265' &&
			printf '\215\275\104\42\373\240\63\200\330\206\170\234' &&
			printf '\143\142\142\142\147\0\0\0\53\0\16'
			return
			;;
		esac
		;;

	# blob containing "\3\326"
	471819e8c52bf11513f100b2810a8aa0622d5cd3d1c913758a071dd4b3bad8fe)
		case "$2" in
		'')
			printf '\062\170\234\143\276\006\000\000\336\000\332'
			return
			;;
		esac
		;;
	esac

	# If it's not a delta, we can convince pack-objects to generate a pack
	# with just our entry, and then strip off the header (12 bytes) and
	# trailer (20 or 32 bytes, depending on the hash).
	if test -z "$2"
	then
		echo "$1" | git pack-objects --stdout >pack_obj.tmp &&
		size=$(wc -c <pack_obj.tmp) &&
		dd if=pack_obj.tmp bs=1 count=$((size - $(test_oid rawsz) - 12)) skip=12 &&
		rm -f pack_obj.tmp
		return
	fi

	echo >&2 "BUG: don't know how to print $1${2:+ (from $2)}"
	return 1
}
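
# A delta entry takes its base as $2; for instance, a one-entry thin pack that
# stores packlib_7_0 as a delta against packlib_7_76 might be built like:
#
#	pack_header 1 >thin.pack &&
#	pack_obj $(test_oid packlib_7_0) $(test_oid packlib_7_76) >>thin.pack &&
#	pack_trailer thin.pack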

# Compute and append pack trailer to "$1"
pack_trailer () {
	test-tool $(test_oid algo) -b <"$1" >trailer.tmp &&
	cat trailer.tmp >>"$1" &&
	rm -f trailer.tmp
}
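# The appended trailer is the pack checksum: the SHA-1 or SHA-256 of all the
# bytes that precede it, which is what index-pack verifies on the receiving end.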

# Remove any existing packs to make sure that
# whatever we index next will be the pack that we
# actually use.
clear_packs () {
	rm -f .git/objects/pack/*
}

test_oid_cache <<-EOF
packlib_7_0 sha1:01d7713666f4de822776c7622c10f1b07de280dc
packlib_7_0 sha256:37c8e2c15bb22b912e59b43fd51a4f7e9465ed0b5084c5a1411d991cbe630683

packlib_7_76 sha1:e68fe8129b546b101aee9510c5328e7f21ca1d18
packlib_7_76 sha256:5d8e6fc40f2dab00e6983a48523fe57e621f46434cb58dbd4422fba03380d886
EOF
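
# A test sourcing this library might then feed a hand-built pack to index-pack
# along these lines (illustrative sketch, not an actual test):
#
#	clear_packs &&
#	{
#		pack_header 1 &&
#		pack_obj $EMPTY_BLOB
#	} >tiny.pack &&
#	pack_trailer tiny.pack &&
#	git index-pack --stdin <tiny.pack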