The pack revindex stores the offsets of the objects in the
pack in sorted order, allowing us to easily find the on-disk
size of each object. To compute it, we populate an array
with the offsets from the sha1-sorted idx file, and then use
qsort to order it by offsets.
That does O(n log n) offset comparisons, and profiling shows
that we spend most of our time in cmp_offset. However, since
we are sorting on a simple off_t, we can use numeric sorts
that perform better. A radix sort can run in O(k*n), where k
is the number of "digits" in our number. For a 64-bit off_t,
using 16-bit "digits" gives us k=4.
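Schematically, the sort looks like this (a minimal sketch with
invented names, not the code in the patch):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #define DIGIT_BITS 16
    #define BUCKETS (1 << DIGIT_BITS)

    /* LSD radix sort on 64-bit offsets: two passes per 16-bit digit */
    static void radix_sort_offsets(uint64_t *array, size_t n)
    {
        uint64_t *from = array, *to = malloc(n * sizeof(*array));
        uint64_t max = 0;
        size_t i;
        int shift;

        for (i = 0; i < n; i++)
            if (array[i] > max)
                max = array[i];

        /* quit early once all remaining digits of "max" are zero */
        for (shift = 0; shift < 64 && (max >> shift); shift += DIGIT_BITS) {
            size_t *count = calloc(BUCKETS, sizeof(*count));
            size_t pos = 0;
            uint64_t *swap;

            /* pass 1: count how many keys fall into each bucket */
            for (i = 0; i < n; i++)
                count[(from[i] >> shift) & (BUCKETS - 1)]++;

            /* turn the counts into bucket start positions */
            for (i = 0; i < BUCKETS; i++) {
                size_t c = count[i];
                count[i] = pos;
                pos += c;
            }

            /* pass 2: place each key in its bucket, keeping stability */
            for (i = 0; i < n; i++)
                to[count[(from[i] >> shift) & (BUCKETS - 1)]++] = from[i];

            free(count);
            swap = from; from = to; to = swap;
        }

        if (from != array) {
            memcpy(array, from, n * sizeof(*array));
            free(from);
        } else {
            free(to);
        }
    }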
On the linux.git repo, with about 3M objects to sort, this
yields a 4x speedup. Here are the best-of-five numbers for
running
echo HEAD | git cat-file --batch-check="%(objectsize:disk)"
on a fully packed repository, which is dominated by time
spent building the pack revindex:
         before     after
  real   0m0.834s   0m0.204s
  user   0m0.788s   0m0.164s
  sys    0m0.040s   0m0.036s
This matches our algorithmic expectations. log2(3M) is ~21.5,
so a traditional sort costs ~21.5n. Our radix sort runs in k*n,
where k is the number of radix digits. In the worst case,
this is k=4 for a 64-bit off_t, but we can quit early when
the largest value to be sorted is smaller. For any
repository under 4G, k=2. Our algorithm makes two passes
over the list per radix digit, so we end up with 4n. That
should yield ~5.3x speedup. We see 4x here; the difference
is probably due to the extra bucket book-keeping the radix
sort has to do.
On a smaller repo, the difference is less impressive, as
log(n) is smaller. For git.git, with 173K objects (but still
k=2), we see a 2.7x improvement:
         before     after
  real   0m0.046s   0m0.017s
  user   0m0.036s   0m0.012s
  sys    0m0.008s   0m0.000s
On even tinier repos (e.g., a few hundred objects), the
speedup goes away entirely, as the small advantage of the
radix sort gets erased by the book-keeping costs (and at
those sizes, the cost to generate the rev-index gets
lost in the noise anyway).
Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Brandon Casey <drafnel@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A packfile may have up to 2^32-1 objects in it, so the
"right" data type to use is uint32_t. We currently use a
signed int, which means that we may behave incorrectly for
packfiles with more than 2^31-1 objects on 32-bit systems.
Nobody has noticed because having 2^31 objects is pretty
insane. The linux.git repo has on the order of 2^22 objects,
which is hundreds of times smaller than necessary to trigger
the bug.
Let's bump this up to an "unsigned". On 32-bit systems, this
gives us the correct data-type, and on 64-bit systems, it is
probably more efficient to use the native "unsigned" than a
true uint32_t.
While we're at it, we can fix the binary search so that it does
not overflow in such a case when our unsigned is 32 bits wide.
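The overflow-safe midpoint computation looks like this (a minimal
sketch; the surrounding lookup function is invented for
illustration, not the revindex code):

    #include <stdint.h>

    /* binary search over sorted offsets; nr may approach 2^32 */
    static int find_offset(const uint64_t *offsets, unsigned nr,
                           uint64_t key)
    {
        unsigned lo = 0, hi = nr;

        while (lo < hi) {
            /* "(lo + hi) / 2" can wrap a 32-bit unsigned; this cannot */
            unsigned mi = lo + (hi - lo) / 2;

            if (offsets[mi] == key)
                return mi;
            if (offsets[mi] < key)
                lo = mi + 1;
            else
                hi = mi;
        }
        return -1; /* not found (a real version would avoid int here) */
    }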
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If we get an input line to --batch or --batch-check that
looks like "HEAD foo bar", we will currently feed the whole
thing to get_sha1(). This means that to use --batch-check
with `rev-list --objects`, one must pre-process the input,
like:
git rev-list --objects HEAD |
cut -d' ' -f1 |
git cat-file --batch-check
Besides being more typing and slightly less efficient to
invoke `cut`, the result loses information: we no longer
know which path each object was found at.
This patch teaches cat-file to split input lines at the
first whitespace. Everything to the left of the whitespace
is considered an object name, and everything to the right is
made available as the %(rest) atom. So you can now do:
git rev-list --objects HEAD |
git cat-file --batch-check='%(objectsize) %(rest)'
to collect object sizes at particular paths.
Even if %(rest) is not used, we always do the whitespace
split (which means you can simply eliminate the `cut`
command from the first example above).
This whitespace split is backwards compatible for any
reasonable input. Object names cannot contain spaces, so any
input with spaces would have resulted in a "missing" line.
The only input that is hurt is if somebody really expected input
of the form "HEAD is a fine-looking ref!" to fail; it will now
parse HEAD, and make "is a fine-looking ref!" available as
%(rest).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This atom is just like %(objectsize), except that it shows
the on-disk size of the object rather than the object's true
size. In other words, it makes the "disk_size" query of
sha1_object_info_extended available via the command-line.
This can be used for rough attribution of disk usage to
particular refs, though see the caveats in the
documentation.
This patch does not include any tests, as the exact numbers
returned are volatile and subject to zlib and packing
decisions. We cannot even reliably guarantee that the
on-disk size is smaller than the object content (though in
general this should be the case for non-trivial objects).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The `cat-file --batch-check` command can be used to quickly
get information about a large number of objects. However, it
provides a fixed set of information.
This patch adds an optional <format> option to --batch-check
to allow a caller to specify which items they are interested
in, and in which order to output them. This is not very
exciting for now, since we provide the same limited set that
you could already get. However, it opens the door to adding
new format items in the future without breaking backwards
compatibility (or forcing callers to pay the cost to
calculate uninteresting items).
Since the --batch option shares code with --batch-check, it
receives the same feature, though it is less likely to be of
interest there.
The format atom names are chosen to match their counterparts
in for-each-ref. Though we do not (yet) share any code with
for-each-ref's formatter, this keeps the interface as
consistent as possible, and may help later on if the
implementations are unified.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A test that should have failed but didn't revealed a bug that needs
to be corrected.
* jc/t1512-fix:
get_short_sha1(): correctly disambiguate type-limited abbreviation
t1512: correct leftover constants from earlier edition
"git stash save" is not just about "saving" the local changes, but
also is to restore the working tree state to that of HEAD. If you
changed a non-directory into a directory in the local change, you
may have untracked files in that directory, which have to be killed
while doing so, unless you run it with --include-untracked. Teach
the command to detect and error out before spreading the damage.
This needed a small fix to "ls-files --killed".
* pb/stash-refuse-to-kill:
git stash: avoid data loss when "git stash save" kills a directory
treat_directory(): do not declare submodules to be untracked
"git diff" refused to even show difference when core.safecrlf is
set to true (i.e. error out) and there are offending lines in the
working tree files.
* jc/maint-diff-core-safecrlf:
diff: demote core.safecrlf=true to core.safecrlf=warn
"git status" learned status.branch and status.short configuration
variables to use --branch and --short options by default (override
with --no-branch and --no-short options from the command line).
* jg/status-config:
status/commit: make sure --porcelain is not affected by user-facing config
commit: make it work with status.short
status: introduce status.branch to enable --branch by default
status: introduce status.short to enable --short by default
Invocations of "git checkout" used internally by "git rebase" were
counted as "checkout", and affected later "git checkout -" to the
the user to an unexpected place.
* rr/rebase-checkout-reflog:
checkout: respect GIT_REFLOG_ACTION
status: do not depend on rebase reflog messages
t/t2021-checkout-last: "checkout -" should work after a rebase finishes
wt-status: remove unused field in grab_1st_switch_cbdata
t7512: test "detached from" as well
Earlier, remote.pushdefault (and the per-branch
branch.*.pushremote) was introduced as an additional mechanism to
choose which repository to push into when "git push" did not say it
from the command line, to help people who push to a repository that
is different from where they fetch from. This attempts to finish
that topic by teaching the default mechanism to choose the branch
in the remote repository to be updated by such a push.
The 'current', 'matching' and 'nothing' modes (specified by the
push.default configuration variable) extend to such a "triangular"
workflow naturally, but 'upstream' and 'simple' have to be updated.
. 'upstream' is about pushing back to update the branch in the
remote repository that the current branch fetches from and
integrates with, so it errors out in a triangular workflow.
. 'simple' is meant to help new people by avoiding mistakes, and
will be the safe default in Git 2.0.
In a non-triangular workflow, it will continue to act as a cross
between 'upstream' and 'current' in that it pushes to the current
branch's @{upstream} only when it is set to the same name as the
current branch (e.g. your 'master' forks from the 'master' of
the central repository).
In a triangular workflow, this series tentatively defines it as
the same as 'current', but we may have to tighten it to avoid
surprises in some way.
* jc/triangle-push-fixup:
t/t5528-push-default: test pushdefault workflows
t/t5528-push-default: generalize test_push_*
push: change `simple` to accommodate triangular workflows
config doc: rewrite push.default section
t/t5528-push-default: remove redundant test_config lines
We currently use an int to tell us whether --batch parsing
is on, and if so, whether we should print the full object
contents. Let's instead factor this into a struct, filled in
by callback, which will make further batch-related options
easy to add.
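Schematically, the new struct might look like this (field names
are illustrative, not necessarily those in the patch):

    struct batch_options {
        int enabled;        /* was --batch or --batch-check given? */
        int print_contents; /* --batch: print the object body too */
    };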
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The regular "git cat-file -p" and "git cat-file blob" code
paths already learned to stream large blobs. Let's do the
same here.
Note that this means we look up the type and size before
making a decision of whether to load the object into memory
or stream (just like the "-p" code path does). That can lead
to extra work, but it should be dwarfed by the cost of
actually accessing the object itself. In my measurements,
there was a 1-2% slowdown when using "--batch" on a large
number of objects.
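A sketch of the choice being made (the enum and threshold are
simplified stand-ins for git's internals):

    enum object_type { OBJ_COMMIT, OBJ_TREE, OBJ_BLOB, OBJ_TAG };

    /* stream blobs above a size cutoff instead of loading them whole */
    static int should_stream(enum object_type type, unsigned long size,
                             unsigned long threshold)
    {
        /* only blobs grow large enough to be worth streaming */
        return type == OBJ_BLOB && size > threshold;
    }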
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In modern tests, we typically put output into a file and
compare it with test_cmp. This is nicer than just comparing
via "test", and much shorter than comparing via "test" and
printing a custom message.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
These functions could benefit from the added compile-time
safety of having the compiler check printf arguments.
Unfortunately, we also sometimes pass an empty format string,
which will cause false positives with -Wformat-zero-length.
In this case, that warning is wrong because our function is
not a no-op with an empty format: it may be printing
colorized output along with a trailing newline.
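The annotation takes this shape (a representative prototype,
assuming a status_printf_ln-style function):

    struct wt_status;

    /* argument 3 is the printf format, consumed by arguments 4 and up */
    __attribute__((format (printf, 3, 4)))
    void status_printf_ln(struct wt_status *s, const char *color,
                          const char *fmt, ...);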
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This attribute can help gcc notice when callers forget to
add a NULL sentinel to the end of the function. This is our
first use of the sentinel attribute, but we shouldn't need
to #ifdef for other compilers, as __attribute__ is already a
no-op on non-gcc-compatible compilers.
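For example (an invented variadic function, for illustration):

    /* gcc warns at any call site that lacks the trailing NULL */
    __attribute__((sentinel))
    void concat_strings(char *out, ...);

    /*
     * concat_strings(buf, "a", "b", NULL);   - fine
     * concat_strings(buf, "a", "b");         - warning: missing sentinel
     */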
Suggested-by: Bert Wesarg <bert.wesarg@googlemail.com>
More-Spots-Found-By: Matt Kraai <kraai@ftbfs.org>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
For most of our functions that take printf-like formats, we
use gcc's __attribute__((format)) to get compiler warnings
when the functions are misused. Let's give a few more
functions the same protection.
In most cases, the annotations do not uncover any actual
bugs; the only code change needed is that we passed a size_t
to transfer_debug, which expected an int. Since we expect
the passed-in value to be a relatively small buffer size
(and cast a similar value to int directly below), we can
just cast away the problem.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Instead of using a hand-managed argument array, use argv-array API
to manage dynamically formulated command line.
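The usage pattern is along these lines (a sketch; the command and
--window argument are invented for illustration):

    #include "argv-array.h"

    struct argv_array args = ARGV_ARRAY_INIT;

    argv_array_push(&args, "pack-objects");
    argv_array_pushf(&args, "--window=%d", window);  /* formatted push */
    /* ... hand args.argv to run_command(), then release the memory */
    argv_array_clear(&args);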
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When coalescing ranges, sort_and_merge_range_set() unconditionally
assumes that the end of a range being folded into a preceding range
should become the end of the coalesced range. This assumption, however,
is invalid when one range is a subset of another. For example, given
ranges 1-5 and 2-3 added via range_set_append_unsafe(),
sort_and_merge_range_set() incorrectly coalesces them to range 1-3
rather than the correct union range 1-5. Fix this bug.
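The fix amounts to keeping the farther of the two ends when
folding; a simplified sketch (not the actual line-log code):

    struct range { long start, end; };

    /* fold overlapping range "b" into the preceding range "a" */
    static void fold_range(struct range *a, const struct range *b)
    {
        /*
         * Keep whichever end reaches further: folding the subset
         * 2-3 into 1-5 must leave the end at 5, not move it to 3.
         */
        if (b->end > a->end)
            a->end = b->end;
    }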
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
t4211 attempts to test multiple git-log -L ranges where one range is a
superset of the other, and falsely succeeds because its "expected"
output is incorrect.
Overlapping -L ranges handed to git-log are coalesced by
line-log.c:sort_and_merge_range_set() into a set of non-overlapping,
disjoint ranges. When one range is a subset of another,
sort_and_merge_range_set() should coalesce both ranges to the superset
range, but instead the coalesced range often is incorrectly truncated to
the end of the subset range. For example, ranges 2-8 and 3-4 are
coalesced incorrectly to 2-4.
One can observe this incorrect behavior with git-log -L using the test
repository created by t4211. The superset/subset ranges t4211 employs
are 4-$ and 8-12 (where $ represents end-of-file). The coalesced range
should be 4-$. Manually invoking git-log with the same ranges the test
employs, we see:
% git log -L 4:a.c simple |
awk '/^commit [0-9a-f]{40}/ { print substr($2,1,7) }'
4659538
100b61a
39b6eb2
a6eb826
f04fb20
de4c48a
% git log -L 8,12:a.c simple | awk ...
f04fb20
de4c48a
% git log -L 4:a.c -L 8,12:a.c simple | awk ...
a6eb826
f04fb20
de4c48a
This last output is incorrect. 8-12 is a subset of 4-$, hence the output
of the coalesced range should be the same as the 4-$ output shown first.
In fact, the above incorrect output is the truncated bogus range 4-12:
% git log -L 4,12:a.c simple | awk ...
a6eb826
f04fb20
de4c48a
Fix the test to correctly fail in the presence of the
sort_and_merge_range_set() coalescing bug. Do so by changing the
"expected" output to the commits mentioned in the 4-$ output above.
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
I attempted to make index_state->cache[] a "const struct cache_entry **"
to find out how existing entries in the index are modified and where.
The question I have is what we do if we really need to keep track of
on-disk changes in the index. The result is
- diff-lib.c: setting CE_UPTODATE
- name-hash.c: setting CE_HASHED
- preload-index.c, read-cache.c, unpack-trees.c and
builtin/update-index: obvious
- entry.c: write_entry() may refresh the checked out entry via
fill_stat_cache_info(). This causes "non-const struct cache_entry
*" in builtin/apply.c, builtin/checkout-index.c and
builtin/checkout.c
- builtin/ls-files.c: --with-tree changes stagemask and may set
CE_UPDATE
Of these, write_entry() and its call sites are probably the most
interesting, because they modify on-disk info. But this is stat info
and can be retrieved via refresh, at least for porcelain
commands. The others just use ce_flags for local purposes.
So, keeping track of "dirty" entries is just a matter of setting a
flag in the index modification functions exposed by read-cache.c.
Except for unpack-trees, the rest of the code base does not do
anything funny behind read-cache's back.
The actual patch is less valuable than the summary above. But if
anyone wants to re-identify the above sites, applying this patch and
then this:
diff --git a/cache.h b/cache.h
index 430d021..1692891 100644
--- a/cache.h
+++ b/cache.h
@@ -267,7 +267,7 @@ static inline unsigned int canon_mode(unsigned int mode)
#define cache_entry_size(len) (offsetof(struct cache_entry,name) + (len) + 1)
struct index_state {
- struct cache_entry **cache;
+ const struct cache_entry **cache;
unsigned int version;
unsigned int cache_nr, cache_alloc, cache_changed;
struct string_list *resolve_undo;
will help quickly identify them without bogus warnings.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Unicode clause D14 defines all characters U+nFFFE and U+nFFFF (where
0 <= n <= 10h) as well as the range U+FDD0..U+FDEF as non-characters,
reserved for internal use only. Disallow these characters in commit
messages as they are normally not recommended for interchange.
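The check can be written compactly (a sketch, not necessarily the
patch's exact form):

    #include <stdint.h>

    static int is_unicode_noncharacter(uint32_t cp)
    {
        return (cp >= 0xFDD0 && cp <= 0xFDEF) ||  /* U+FDD0..U+FDEF */
               ((cp & 0xFFFE) == 0xFFFE &&        /* U+nFFFE, U+nFFFF */
                cp <= 0x10FFFF);                  /* 0 <= n <= 10h */
    }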
Signed-off-by: Peter Krefting <peter@softwolves.pp.se>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Instead of using a hand allocated args[] array, use argv-array API
to manage the dynamically created list of arguments when invoking
name-rev.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git name-rev" is supposed to convert given object names into
strings that name the same objects based on refs, that can be fed to
"git rev-parse" to get the same object names back, so the output for
the commit object v1.8.3^0 (i.e. the commit tagged as v1.8.3)
$ git rev-parse v1.8.3 v1.8.3^0 | git name-rev --stdin
8af06057d0c31a24e8737ae846ac2e116e8bafb9
edca415256 (tags/v1.8.3^0)
has to have "^0" at the end, as "edca41" is a commit, not the tag
that references it. But we do not get anything for the tag object
(8af0605) itself.
This is because the command did not bother to check whether the
object is at the tip of some ref, and failed to convert a tag
object.
Teach it to show this instead:
$ git rev-parse v1.8.3 v1.8.3^0 | git name-rev --stdin
8af06057d0c31a24e8737ae846ac2e116e8bafb9 (tags/v1.8.3)
edca415256 (tags/v1.8.3^0)
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since git-pull learned the --rebase option, it has not just been about
merging changes from a remote repository (where "merge" is in the sense
of "git merge"). Change the description to use "integrate" instead of
"merge" in order to reflect this.
Signed-off-by: John Keeping <john@keeping.me.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Some shells do not understand the one-line construct, and instead need
FOO=bar &&
export FOO
Detect this in the test-lint target.
Signed-off-by: Thomas Rast <trast@inf.ethz.ch>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The == operator, as an alias for =, is not POSIX. This doesn't actually
matter for the execution of the script, because it only runs when the
shell is bash. However, it trips up test-lint, so it's nicer to use
the standard form.
Signed-off-by: Thomas Rast <trast@inf.ethz.ch>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When pushing using a matching refspec or a pattern refspec, each ref
in the local repository must be paired with a ref advertised by the
remote server. This is accomplished by using the refspec to transform
the name of the local ref into the name it should have in the remote
repository, and then performing a linear search through the list of
remote refs to see if the remote ref was advertised by the remote
system.
Each of these lookups has O(n) complexity and makes match_push_refs()
be an O(m*n) operation, where m is the number of local refs and n is
the number of remote refs. If there are many refs (100,000+), then this
ref matching can take a significant amount of time. Let's prepare an
index of the remote refs to allow searching in O(log n) time and
reduce the complexity of match_push_refs() to O(m log n).
We prepare the index lazily so that it is only created when necessary.
So, there should be no impact when _not_ using a matching or pattern
refspec, i.e. when pushing using only explicit refspecs.
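The idea looks roughly like this with a sorted array and bsearch()
(names invented; not the patch's code):

    #include <stdlib.h>
    #include <string.h>

    struct ref_idx { const char *name; void *ref; };

    static int cmp_ref_idx(const void *a, const void *b)
    {
        return strcmp(((const struct ref_idx *)a)->name,
                      ((const struct ref_idx *)b)->name);
    }

    /* build (and qsort) the index once; each lookup is then O(log n) */
    static void *find_remote_ref(struct ref_idx *idx, size_t nr,
                                 const char *name)
    {
        struct ref_idx key = { name, NULL };
        struct ref_idx *hit = bsearch(&key, idx, nr, sizeof(*idx),
                                      cmp_ref_idx);
        return hit ? hit->ref : NULL;
    }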
Dry-run push of a repository with 121,913 local and remote refs:
         before      after
  real   1m40.582s   0m0.804s
  user   1m39.914s   0m0.515s
  sys    0m0.125s    0m0.106s
The creation of the index has overhead. So, if there are very few
local refs, then it could take longer to create the index than it
would have taken to just perform n linear lookups into the remote
ref space. Using the index should provide some improvement when
the number of local refs is roughly greater than the log of the
number of remote refs (i.e. m >= log n). The pathological case is
when there is a single local ref and very many remote refs.
Dry-run push of a repository with 121,913 remote refs and a single
local ref:
         before     after
  real   0m0.525s   0m0.566s
  user   0m0.243s   0m0.279s
  sys    0m0.075s   0m0.099s
Using an index takes 41 ms longer, or roughly 7.8% longer.
Jeff King measured a no-op push of a single ref into a remote repo
with 370,000 refs:
         before     after
  real   0m1.087s   0m1.156s
  user   0m1.344s   0m1.412s
  sys    0m0.288s   0m0.284s
Using an index takes 69 ms longer, or roughly 6.3% longer.
None of the measurements above required transferring any objects to
the remote repository. If the push required transferring objects and
updating the refs in the remote repository, the impact of preparing
the search index would be even smaller.
A similar operation is performed in the reverse direction when pruning
using a matching or pattern refspec. Let's avoid O(m*n) behavior in
the same way by lazily preparing an index on the local refs.
Signed-off-by: Brandon Casey <drafnel@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the current state, a user of git-remote-mediawiki can edit the
markup text locally, but has to push to the remote wiki to see how
the page is rendered.
Add a new 'git mw preview' command that allows rendering the markup
text on the remote wiki without actually pushing any change to the
wiki.
This uses Mediawiki's API to render the markup and inserts it in an actual
HTML page from the wiki so that CSS can be rendered properly. Most links
should work when the page exists on the remote.
Signed-off-by: Benoit Person <benoit.person@ensimag.fr>
Signed-off-by: Matthieu Moy <matthieu.moy@grenoble-inp.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
For now, git-remote-mediawiki is only a remote helper. This patch
adds a new toolset script in which we will be able to build new
tools for git-remote-mediawiki.
This toolset uses a subcommand mechanism to launch the proper
action. For now
only the 'help' subcommand is implemented. It also provides some generic code
for the verbose and help command line options.
Signed-off-by: Benoit Person <benoit.person@ensimag.fr>
Signed-off-by: Matthieu Moy <matthieu.moy@grenoble-inp.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
For now, Git::Mediawiki contains nothing.
This first patch moves some of git-remote-mediawiki.perl's code that
can be factored out into Git::Mediawiki. At the same time, it
removes the side effects of that code and renames the moved
functions and constants to expose a better API.
Signed-off-by: Benoit Person <benoit.person@ensimag.fr>
Signed-off-by: Matthieu Moy <matthieu.moy@grenoble-inp.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Until now, if git-remote-mediawiki was not installed, the test suite
copied it to the toplevel directory. This solution pollutes the
directory with untracked files. Plus, we would need to copy the new
git-mw.perl file to test it too.
Signed-off-by: Benoit Person <benoit.person@ensimag.fr>
Signed-off-by: Matthieu Moy <matthieu.moy@grenoble-inp.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The introduction of the Git::Mediawiki package makes it impossible
to test git-remote-mediawiki and git-mw without installing them.
Using a git bin-wrapper enables us to define a proper $GITPERLLIB to
force the use of the development version of the Git::Mediawiki
package, bypassing any installed version.
An alternate solution was to 'install' all the required files at
each build, but that pollutes the toplevel with untracked files.
Signed-off-by: Benoit Person <benoit.person@ensimag.fr>
Signed-off-by: Matthieu Moy <matthieu.moy@grenoble-inp.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
For now, the bin-wrappers overwrite GITPERLLIB. If we want to chain
to those scripts and define GITPERLLIB beforehand, our changes will
be discarded.
This patch makes the bin-wrappers prepend their modifications to
GITPERLLIB rather than redefining it. It also unsets GITPERLLIB in
the test suite to prevent a broken $GITPERLLIB in the user's
configuration from interfering with the tests.
The code using GIT_TEMPLATE_DIR and GIT_TEXTDOMAINDIR handles only
one path in each of these variables, so this new behavior would be
useless there.
Signed-off-by: Benoit Person <benoit.person@ensimag.fr>
Signed-off-by: Matthieu Moy <matthieu.moy@grenoble-inp.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We want to allow the user to preview what has been edited locally
before pushing it out (and thus creating a non-removable revision in
the mediawiki's history).
This patch introduces a new perl package in which we will be able to
share code between that new tool and the remote helper:
git-remote-mediawiki.perl.
A perl package offers the best way to handle such a case: each
script can select what should be imported into its namespace, and
the package namespacing limits the use of side effects in the
shared code.
An alternate solution would be to concatenate a 'toolset' file with
each *.perl when 'make'-ing the project. In that scheme, everything
is imported into the script's namespace, and files would have to be
renamed in order to chain to Git's toplevel makefile. Hence, that
solution is not acceptable.
Signed-off-by: Benoit Person <benoit.person@ensimag.fr>
Signed-off-by: Matthieu Moy <matthieu.moy@grenoble-inp.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The shell syntax "export X=Y A=B" is not understood by all shells.
Signed-off-by: Torsten Bögershausen <tboegi@web.de>
Acked-by: Thomas Rast <trast@inf.ethz.ch>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit 0433ad1 (clone: run check_everything_connected,
2013-03-25) added the same connectivity check to clone that
we use for fetching. The intent was to provide enough safety
checks that "git clone git://..." could be counted on to
detect bit errors and other repo corruption, and not
silently propagate them to the clone.
For local clones, this turns out to be a bad idea, for two
reasons:
1. Local clones use hard linking (or even shared object
stores), and so complete far more quickly. The time
spent on the connectivity check is therefore
proportionally much more painful.
2. Local clones do not actually meet our safety guarantee
anyway. The connectivity check makes sure we have all
of the objects we claim to, but it does not check for
bit errors. We will notice bit errors in commits and
trees, but we do not load blob objects at all. Over the
pack transport, by contrast, we actually recompute the sha1
of each object in the incoming packfile; bit errors
change the sha1 of the object, which is then caught by
the connectivity check.
This patch drops the connectivity check in the local case.
Note that we have to revert the changes from 0433ad1 to
t5710, as we no longer notice the corruption during clone.
We could go a step further and provide a "verify even local
clones" option, but it is probably not worthwhile. You can
already spell that as "cd foo.git && git fsck && git clone ."
or as "git clone --no-local foo.git".
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
With some workflows, it is more suitable to rebase on top of remote
changes when a push does not fast-forward. Change the advice messages
in git-push to suggest that a user "integrate the remote changes"
instead of "merge the remote changes" to make this slightly clearer.
Also change the suggested 'git pull' to 'git pull ...' to hint to users
that they may want to add other parameters.
Suggested-by: Philip Oakley <philipoakley@iee.org>
Signed-off-by: John Keeping <john@keeping.me.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In order to clarify which value is used when there are multiple values
defined for a key, re-order the list of file locations so that it runs
from least specific to most specific. Then add a paragraph which simply
says that the last value will be used.
Signed-off-by: John Keeping <john@keeping.me.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Using sha1_object_info_extended, a caller can find out the
type of an object, its size, and information about where it
is stored. In addition to the object's "true" size, it can
also be useful to know the size that the object takes on
disk (e.g., to generate statistics about which refs consume
space).
This patch adds a "disk_sizep" field to "struct object_info",
and fills it in during sha1_object_info_extended if it is
non-NULL.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The sha1_object_info_extended function expects the caller to
provide a "struct object_info" which contains pointers to
"query" items that will be filled in. The purpose of
providing pointers rather than storing the response directly
in the struct is so that callers can choose not to incur the
expense in finding particular fields that they do not care
about.
Right now the only query item is "sizep", and all callers
set it explicitly to choose whether or not to query it; they
can then leave the rest of the struct uninitialized.
However, as we add new query items, each caller will have to
be updated to explicitly turn off the new ones (by setting
them to NULL). Instead, let's teach each caller to
zero-initialize the struct, so that they do not have to
learn about each new query item added.
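In practice a call then looks like this (a sketch; "sha1" is
assumed to hold the object's binary name, disk_sizep is assumed to
take an off_t pointer, and error handling is minimal):

    struct object_info oi = {0};  /* all query pointers start out NULL */
    unsigned long size;
    off_t disk_size;

    oi.sizep = &size;             /* request only the fields we need */
    oi.disk_sizep = &disk_size;
    if (sha1_object_info_extended(sha1, &oi) < 0)
        die("unable to get object info");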
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The path of the file to be locked is held in lock_file::filename,
which is a fixed-length buffer of length PATH_MAX. This buffer is
also (temporarily) used to hold the path of the lock file, which is
the path of the file being locked plus ".lock". Because of this, the
path of the file being locked must be less than (PATH_MAX - 5)
characters long (5 chars are needed for ".lock" and one character for
the NUL terminator).
On entry into lock_file(), the path length was only verified to be
less than PATH_MAX characters, not less than (PATH_MAX - 5)
characters.
When and if resolve_symlink() is called, then that function is
correctly told to treat the buffer as (PATH_MAX - 5) characters long.
This part is correct. However:
* If LOCK_NODEREF was specified, then resolve_symlink() is never
called.
* If resolve_symlink() is called but the path is not a symlink, then
the length check is never applied.
So it is possible for a path with length (PATH_MAX - 5 <= len <
PATH_MAX) to make it through the checks. When ".lock" is strcat()ted
to such a path, the lock_file::filename buffer is overflowed.
Fix the problem by adding a check when entering lock_file() that the
original path is less than (PATH_MAX - 5) characters.
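A sketch of such an entry check (with the raw constant spelled out):

    #include <limits.h>
    #include <string.h>

    /* the buffer must hold path + ".lock" (5 bytes) + NUL terminator */
    static int path_short_enough_to_lock(const char *path)
    {
        return strlen(path) < PATH_MAX - 5;
    }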
[jc: with independent development by Peff]
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Halve the number of callsites of contains() to two using temporary
variables, simplifying the code. While at it, get rid of the
diff_options parameter, which became unused with 8fa4b09f.
Signed-off-by: René Scharfe <rene.scharfe@lsrfire.ath.cx>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>