When reading references via `files_read_raw_ref()` we always consult
the loose reference first and, if that wasn't found, fall back to the
packed-refs file. While this makes sense when reading a generic
reference, it is wasteful in the case where we only care about symbolic
references: the packed-refs backend does not support them and thus can
never return one for us.
Special-case reading of symbolic references for the files backend such
that we always skip asking the packed-refs backend.
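To illustrate the shape of that special case, here is a minimal sketch
of a backend callback that consults only the loose ref store. The
helper `read_loose_ref()` and the exact signature are illustrative
assumptions for this sketch, not the literal patch:

    /*
     * Sketch: resolve a symbolic reference by consulting the loose ref
     * store only.  Since packed-refs cannot contain symrefs, a miss here
     * means there is no symref to return at all, so we never fall back.
     */
    static int files_read_symbolic_ref(struct ref_store *ref_store,
                                       const char *refname,
                                       struct strbuf *referent)
    {
            struct object_id oid;
            unsigned int type = 0;
            int failure_errno = 0;

            /* read_loose_ref() is a hypothetical helper for this sketch. */
            if (read_loose_ref(ref_store, refname, &oid, referent,
                               &type, &failure_errno))
                    return -1;

            /* A loose ref that is not a symref counts as "not found" here. */
            return !(type & REF_ISSYMREF);
    }
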
We use `refs_read_symbolic_ref()` extensively to determine whether we
need to skip updating local symbolic references during a fetch, which is
why the change results in a significant speedup when doing fetches in
repositories with huge numbers of references. The following benchmark
executes a mirror-fetch in a repository with about 2 million references
via `git fetch --prune --no-write-fetch-head +refs/*:refs/*`:
  Benchmark 1: HEAD~
    Time (mean ± σ):     68.372 s ±  2.344 s    [User: 65.629 s, System: 8.786 s]
    Range (min … max):   65.745 s … 70.246 s    3 runs

  Benchmark 2: HEAD
    Time (mean ± σ):     60.259 s ±  0.343 s    [User: 61.019 s, System: 7.245 s]
    Range (min … max):   60.003 s … 60.649 s    3 runs

  Summary
    'HEAD' ran
      1.13 ± 0.04 times faster than 'HEAD~'
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We have two cases in the remote code where we check whether a reference
is symbolic or not, but don't care whether it is missing or exists as a
non-symbolic reference. Convert these two callsites to use the new
`refs_read_symbolic_ref()` function, whose intent is to implement
exactly that use case.
No change in behaviour is expected from this change.
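For illustration, a converted callsite boils down to something like the
following sketch; error handling is trimmed and `handle_symref()` is a
stand-in for whatever the caller does with the result:

    struct strbuf referent = STRBUF_INIT;

    /*
     * We only want to know whether the ref is a symref pointing
     * somewhere; a missing ref and a non-symbolic ref are treated alike.
     */
    if (!refs_read_symbolic_ref(get_main_ref_store(the_repository),
                                refname, &referent))
            handle_symref(refname, referent.buf);
    strbuf_release(&referent);
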
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Reading of symbolic and non-symbolic references is currently treated the
same in reference backends: we always call `refs_read_raw_ref()` and
then decide based on the returned flags what type it is. This has one
downside though: symbolic references may be treated differently from
normal references in a backend. The packed-refs backend for example
doesn't even know about symbolic references, and as a result it is
pointless to even ask it for one.
There are cases where we really only care about whether a reference is
symbolic or not, but don't care whether it exists at all or may be a
non-symbolic reference. But it is not possible to optimize for this
case right now, and as a consequence we will always first check whether
a loose reference exists, and if it doesn't, we'll query the packed-refs
backend for a known-to-not-be-symbolic reference. This is inefficient
and requires us to search all packed references even though we know we
don't care about the result at all.
Introduce a new function `refs_read_symbolic_ref()` which allows us to
fix this case. This function will only ever return symbolic references
and can thus optimize for the scenario laid out above. By default, if
the backend doesn't provide an implementation for it, we just use the
old code path and fall back to `read_raw_ref()`. But in case the backend
provides its own, more efficient implementation, we will use that one
instead.
Note that this function is explicitly designed to not distinguish
between missing references and non-symbolic references. If it did, we'd
be forced to always search the packed-refs backend to see whether the
symbolic reference the user asked for really doesn't exist, or if it
exists as a non-symbolic reference.
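Conceptually, the generic entry point with its fallback looks roughly
like this; the sketch is reconstructed from the description above, so
the actual vtable wiring and signatures may differ in detail:

    int refs_read_symbolic_ref(struct ref_store *refs, const char *refname,
                               struct strbuf *referent)
    {
            struct object_id oid;
            unsigned int type = 0;
            int failure_errno;

            /* Backends may provide an optimized implementation ... */
            if (refs->be->read_symbolic_ref)
                    return refs->be->read_symbolic_ref(refs, refname,
                                                       referent);

            /* ... otherwise fall back to the generic raw read. */
            if (refs_read_raw_ref(refs, refname, &oid, referent,
                                  &type, &failure_errno))
                    return -1;
            return !(type & REF_ISSYMREF);
    }
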
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When fetching from a remote repository we will by default write what has
been fetched into the special FETCH_HEAD reference. The order in which
references are written depends on whether the reference is for merge or
not, which, among other conditions, is also determined based on
whether the old object ID the reference is being updated from actually
exists in the repository.
To write FETCH_HEAD we thus loop through all references thrice: once for
the references that are about to be merged, once for the references that
are not for merge, and finally for all references that are ignored. For
every iteration, we then look up the old object ID to determine whether
the referenced object exists so that we can label it as "not-for-merge"
if it doesn't exist. It goes without saying that this can be expensive
in cases where we are fetching a lot of references.
While this is hard to avoid in the case where we're writing FETCH_HEAD,
users can in fact ask us to skip this work via `--no-write-fetch-head`.
In that case, we do not care for the result of those lookups at all
because we don't have to order writes to FETCH_HEAD in the first place.
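In other words, the lookup only needs to happen when we are actually
going to write FETCH_HEAD. A hedged sketch of the resulting guard, with
illustrative variable names:

    struct commit *commit = NULL;

    /*
     * The object lookup only serves to decide whether an entry gets the
     * "not-for-merge" label in FETCH_HEAD, so skip it entirely when the
     * user passed --no-write-fetch-head.
     */
    if (write_fetch_head)
            commit = lookup_commit_reference_gently(the_repository,
                                                    &rm->old_oid, 1);
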
Skip this busywork in case we're not writing to FETCH_HEAD. The
following benchmark performs a mirror-fetch in a repository with about
two million references via `git fetch --prune --no-write-fetch-head
+refs/*:refs/*`:
  Benchmark 1: HEAD~
    Time (mean ± σ):     75.388 s ±  1.942 s    [User: 71.103 s, System: 8.953 s]
    Range (min … max):   73.184 s … 76.845 s    3 runs

  Benchmark 2: HEAD
    Time (mean ± σ):     69.486 s ±  1.016 s    [User: 65.941 s, System: 8.806 s]
    Range (min … max):   68.864 s … 70.659 s    3 runs

  Summary
    'HEAD' ran
      1.08 ± 0.03 times faster than 'HEAD~'
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
During packfile negotiation the client will send "want" and "want-ref"
lines to the server to tell it which objects it is interested in. The
server side parses each of those and looks them up to see whether it
actually has the requested objects. This lookup is performed by calling
`parse_object()` directly, which thus hits the object database. In the
general case though, most of the objects the client requests will be
commits. We can thus try to look up the object via the commit-graph
opportunistically, which is much faster than doing the same via the
object database.
Refactor parsing of both "want" and "want-ref" lines to do so.
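Sketched out, the lookup becomes a two-step attempt along these lines.
The wrapper name is made up for this sketch, and it assumes git's
`lookup_commit_in_graph()` helper for the commit-graph lookup:

    static struct object *parse_want_object(struct repository *r,
                                            const struct object_id *oid)
    {
            /*
             * Most wanted objects are commits: try the commit-graph
             * first, which avoids inflating the object from the object
             * database.
             */
            struct commit *commit = lookup_commit_in_graph(r, oid);
            if (commit)
                    return &commit->object;

            /* Everything else (or a graph miss) takes the slow path. */
            return parse_object(r, oid);
    }
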
The following benchmark is executed in a repository with a huge number
of references. It uses a cached request from git-fetch(1) as input to
git-upload-pack(1) that contains about 876,000 "want" lines:
  Benchmark 1: HEAD~
    Time (mean ± σ):     7.113 s ±  0.028 s    [User: 6.900 s, System: 0.662 s]
    Range (min … max):   7.072 s …  7.168 s    10 runs

  Benchmark 2: HEAD
    Time (mean ± σ):     6.622 s ±  0.061 s    [User: 6.452 s, System: 0.650 s]
    Range (min … max):   6.535 s …  6.727 s    10 runs

  Summary
    'HEAD' ran
      1.07 ± 0.01 times faster than 'HEAD~'
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* ps/fetch-atomic:
fetch: make `--atomic` flag cover pruning of refs
fetch: make `--atomic` flag cover backfilling of tags
refs: add interface to iterate over queued transactional updates
fetch: report errors when backfilling tags fails
fetch: control lifecycle of FETCH_HEAD in a single place
fetch: backfill tags before setting upstream
fetch: increase test coverage of fetches
"working tree" and "per-worktree ref" were in glossary, but
"worktree" itself wasn't, which has been corrected.
* jc/glossary-worktree:
glossary: describe "worktree"
"git cmd -h" outside a repository should error out cleanly for many
commands, but instead it hit a BUG(), which has been corrected.
* js/short-help-outside-repo-fix:
t0012: verify that built-ins handle `-h` even without gitdir
checkout/fetch/pull/pack-objects: allow `-h` outside a repository
When there is no object to write a .bitmap file for, "git
multi-pack-index" triggered an error, instead of just skipping,
which has been corrected.
* tb/midx-no-bitmap-for-no-objects:
midx: prevent writing a .bitmap without any objects
"git branch" learned the "--recurse-submodules" option.
* gc/branch-recurse-submodules:
branch.c: use 'goto cleanup' in setup_tracking() to fix memory leaks
branch: add --recurse-submodules option for branch creation
builtin/branch: consolidate action-picking logic in cmd_branch()
branch: add a dry_run parameter to create_branch()
branch: make create_branch() always create a branch
branch: move --set-upstream-to behavior to dwim_and_setup_tracking()
Because the deletion of a ref needs to remove it from both the
loose ref store and the packed ref store, a delete-ref operation
that logically removes one ref may end up invoking the
ref-transaction hook twice, which has been corrected.
* ps/avoid-unnecessary-hook-invocation-with-packed-refs:
refs: skip hooks when deleting uncovered packed refs
refs: do not execute reference-transaction hook on packing refs
refs: demonstrate excessive execution of the reference-transaction hook
refs: allow skipping the reference-transaction hook
refs: allow passing flags when beginning transactions
refs: extract packed_refs_delete_refs() to allow control of transaction
Use an internal call to reset_head() helper function instead of
spawning "git checkout" in "rebase", and update code paths that are
involved in the change.
* pw/use-in-process-checkout-in-rebase:
rebase -m: don't fork git checkout
rebase --apply: set ORIG_HEAD correctly
rebase --apply: fix reflog
reset_head(): take struct rebase_head_opts
rebase: cleanup reset_head() calls
create_autostash(): remove unneeded parameter
reset_head(): make default_reflog_action optional
reset_head(): factor out ref updates
reset_head(): remove action parameter
rebase --apply: don't run post-checkout hook if there is an error
rebase: do not remove untracked files on checkout
rebase: pass correct arguments to post-checkout hook
t5403: refactor rebase post-checkout hook tests
rebase: factor out checkout for up to date branch
"receive-pack" checks if it will do any ref updates (various
conditions could reject a push) before received objects are taken
out of the temporary directory used for quarantine purposes, so
that a push that is known-to-fail will not leave crufts that a
future "gc" needs to clean up.
* cb/clear-quarantine-early-on-all-ref-update-errors:
receive-pack: purge temporary data if no command is ready to run
The command line completion script (in contrib/) learned to
complete all Git subcommands, including the ones that are normally
hidden, when GIT_COMPLETION_SHOW_ALL_COMMANDS is used.
* ab/complete-show-all-commands:
completion: add a GIT_COMPLETION_SHOW_ALL_COMMANDS
completion tests: re-source git-completion.bash in a subshell
Style updates on a test script helper.
* sy/modernize-t-lib-read-tree-m-3way:
t/lib-read-tree-m-3way: indent with tabs
t/lib-read-tree-m-3way: modernize style
"git update-index", "git checkout-index", and "git clean" are
taught to work better with the sparse checkout feature.
* vd/sparse-clean-etc:
update-index: reduce scope of index expansion in do_reupdate
update-index: integrate with sparse index
update-index: add tests for sparse-checkout compatibility
checkout-index: integrate with sparse index
checkout-index: add --ignore-skip-worktree-bits option
checkout-index: expand sparse checkout compatibility tests
clean: integrate with sparse index
reset: reorder wildcard pathspec conditions
reset: fix validation in sparse index test
"git log" and friends learned an option --exclude-first-parent-only
to propagate UNINTERESTING bit down only along the first-parent
chain, just like --first-parent option shows commits that lack the
UNINTERESTING bit only along the first-parent chain.
* jz/rev-list-exclude-first-parent-only:
git-rev-list: add --exclude-first-parent-only flag
Unlike "git apply", "git patch-id" did not handle patches with
hunks that have only 1 line in either preimage or postimage, which
has been corrected.
* jz/patch-id-hunk-header-parsing-fix:
patch-id: fix scan_hunk_header on diffs with 1 line of before/after
patch-id: fix antipatterns in tests
Prepare more test scripts for the introduction of reftable.
* hn/reftable-tests:
t5312: prepare for reftable
t1405: mark test that checks existence as REFFILES
t1405: explictly delete reflogs for reftable
When "git subtree" wants to create a merge, it used "git merge" and
let it be affected by end-user's "merge.ff" configuration, which
has been corrected.
* tk/subtree-merge-not-ff-only:
subtree: force merge commit
When fetching with the `--prune` flag we will delete any local
references matching the fetch refspec which have disappeared on the
remote. This step is not currently covered by the `--atomic` flag: we
delete branches even though updating of local references has failed,
which means that the fetch is not an all-or-nothing operation.
Fix this bug by passing in the global transaction into `prune_refs()`:
if one is given, then we'll only queue up deletions and not commit them
right away.
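Schematically, the pruning path changes along these lines (a sketch
only; the surrounding loop, error handling and exact arguments are
trimmed or approximated):

    struct strbuf err = STRBUF_INIT;

    if (transaction) {
            /* Atomic case: only queue the deletion; the caller commits. */
            if (ref_transaction_delete(transaction, ref->name,
                                       NULL, 0, NULL, &err))
                    return error("%s", err.buf);
    } else {
            /* Non-atomic case: keep deleting the refs right away. */
            if (delete_refs("fetch: prune", &refnames, 0))
                    return -1;
    }
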
This change also improves performance when pruning many branches in a
repository with a big packed-refs file: every reference is pruned in
its own transaction, which means that we potentially have to rewrite
the packed-refs file for every single reference we're about to prune.
The following benchmark demonstrates this: it performs a pruning fetch
from a repository with a single reference into a repository with 100k
references, which causes us to prune all but one reference. This is of
course a very artificial setup, but serves to demonstrate the impact of
only having to write the packed-refs file once:
  Benchmark 1: git fetch --prune --atomic +refs/*:refs/* (HEAD~)
    Time (mean ± σ):     2.366 s ±  0.021 s    [User: 0.858 s, System: 1.508 s]
    Range (min … max):   2.328 s …  2.407 s    10 runs

  Benchmark 2: git fetch --prune --atomic +refs/*:refs/* (HEAD)
    Time (mean ± σ):     1.369 s ±  0.017 s    [User: 0.715 s, System: 0.641 s]
    Range (min … max):   1.346 s …  1.400 s    10 runs

  Summary
    'git fetch --prune --atomic +refs/*:refs/* (HEAD)' ran
      1.73 ± 0.03 times faster than 'git fetch --prune --atomic +refs/*:refs/* (HEAD~)'
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When fetching references from a remote we by default also fetch all tags
which point into the history we have fetched. This is a separate step
performed after updating local references because it requires us to walk
over the history on the client-side to determine whether the remote has
announced any tags which point to one of the fetched commits.
This backfilling of tags isn't covered by the `--atomic` flag: right
now, it only applies to the step where we update our local references.
This was an oversight when the flag was introduced: its purpose is
to either update all references or none, but right now we happily update
local references even in the case where backfilling failed.
Fix this by pulling up creation of the reference transaction such that
we can pass the same transaction to both the code which updates local
references and to the code which backfills tags. This allows us to only
commit the transaction in case both actions succeed.
Note that we also have to start passing the transaction into
`find_non_local_tags()`: this function is responsible for finding all
tags which we need to backfill. Right now, it will happily return tags
which have already been updated with our local references. But when we
use a single transaction for both local references and backfilling then
it may happen that we try to queue the same reference update twice to
the transaction, which consequently triggers a bug. We thus have to skip
over any tags which have already been queued.
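Inside the loop over candidate tags this amounts to a check along the
following lines, assuming `existing_refs` is a string set that has been
pre-populated with every refname already queued in the transaction (see
the iteration interface described in the next message); the reflog
message and error handling are illustrative:

    /* Skip tags whose refs have already been queued for the local update. */
    if (strset_contains(&existing_refs, ref->name))
            continue;

    if (ref_transaction_update(transaction, ref->name, &ref->new_oid,
                               NULL, 0, "fetch: tag backfill", &err))
            return error("%s", err.buf);
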
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
There is no way for a caller to see whether a reference update has
already been queued up for a given reference transaction. There are
multiple alternatives to provide this functionality:
- We may add a function that simply tells us whether a specific
  reference has already been queued. If implemented naively then
  this would potentially be quadratic in runtime behaviour if this
  question is asked repeatedly, because we have to iterate over all
  references every time. The alternative would be to add a hashmap
  of all queued reference updates to speed up the lookup, but this
  adds overhead to all callers.

- We may add a flag to `ref_transaction_add_update()` that causes it
  to skip duplicates, but this has the same runtime concerns as the
  first alternative.

- We may add an interface which lets callers collect all updates
  which have already been queued such that they can avoid re-adding
  them. This is the most flexible approach and puts the burden on
  the caller, but also allows us to not impact any of the existing
  callsites which don't need this information.

This commit implements the last approach: it allows us to compute the
map of already-queued updates once up front such that we can then skip
all subsequent references which are already part of this map.
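The interface boils down to an iteration callback, roughly as sketched
below; the signature is reconstructed from the description, so treat it
as an approximation:

    typedef void (*ref_transaction_for_each_queued_update_fn)(
            const char *refname,
            const struct object_id *old_oid,
            const struct object_id *new_oid,
            void *cb_data);

    void ref_transaction_for_each_queued_update(
            struct ref_transaction *transaction,
            ref_transaction_for_each_queued_update_fn cb,
            void *cb_data);

    /* Example: collect all queued refnames into a set, once, up front. */
    static void record_queued_ref(const char *refname,
                                  const struct object_id *old_oid,
                                  const struct object_id *new_oid,
                                  void *cb_data)
    {
            strset_add(cb_data, refname);
    }
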
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When the backfilling of tags fails we do not report this error to the
caller, but only report it implicitly at a later point when reporting
updated references. This leaves callers unable to act upon the
information of whether the backfilling succeeded or not.
Refactor the function to return an error code and pass it up the
callstack. This causes us to correctly propagate the error back to the
user of git-fetch(1).
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
There are two different locations where we're appending to FETCH_HEAD:
first when storing updated references, and second when backfilling tags.
Both times we open the file, append to it and then commit it into place,
which is essentially duplicate work.
Improve the lifecycle of updating FETCH_HEAD by opening and committing
it once in `do_fetch()`, where we pass the structure down to the code
which wants to append to it.
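Schematically, the lifecycle then looks as follows; the struct and
helper names are hypothetical stand-ins for this sketch, not the
literal patch:

    /* Handle owned by do_fetch() and passed down to the appending code. */
    struct fetch_head {
            FILE *fp;          /* NULL when --no-write-fetch-head is given */
            struct strbuf buf; /* accumulated FETCH_HEAD contents */
    };

    /* Inside do_fetch(), roughly: */
    struct fetch_head fetch_head = { .buf = STRBUF_INIT };

    open_fetch_head(&fetch_head);    /* open the file once ...             */
    /* ... update local refs, appending entries via append_fetch_head() ...*/
    /* ... backfill tags, appending through the very same handle ...       */
    commit_fetch_head(&fetch_head);  /* ... and commit it into place once  */
    close_fetch_head(&fetch_head);
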
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The fetch code flow is a bit hard to understand right now:
1. We optionally prune all references which have vanished on the
   remote side.

2. We fetch and update all other references locally.

3. We update the upstream branch in the gitconfig.

4. We backfill tags pointing into the history we have just fetched.

It is quite confusing that we fetch objects and update references in
both (2) and (4), which is made even more confusing by the fact that we
use a `skip` goto label to jump from (3) to (4) in case we fail to
update the gitconfig as expected.
Reorder the code to first update all local references, and only after we
have done so update the upstream branch information. This improves the
code flow and furthermore makes it easier to refactor the way we
update references.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When using git-fetch(1) with the `--atomic` flag the expectation is that
either all of the references are updated, or alternatively none are in
case the fetch fails. While we already have tests for this, we do not
have any tests which exercise atomicity either when pruning deleted refs
or when backfilling tags. This gap in test coverage hides that we indeed
don't handle atomicity correctly for both of these cases.
Add test cases which cover these testing gaps to demonstrate the broken
behaviour. Note that tests are not marked as `test_expect_failure`: this
is done to explicitly demonstrate the current known-wrong behaviour, and
they will be fixed up as soon as we fix the underlying bugs.
While at it, this commit also adds another test case which demonstrates
that backfilling of tags does not return an error code in case the
backfill fails. This bug will also be fixed by a subsequent commit.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Removal of unused code and doc.
* js/no-more-legacy-stash:
stash: stop warning about the obsolete `stash.useBuiltin` config setting
stash: remove documentation for `stash.useBuiltin`
add: remove support for `git-legacy-stash`
git-sh-setup: remove remnant bits referring to `git-legacy-stash`
"git diff --diff-filter=aR" is now parsed correctly.
* js/diff-filter-negation-fix:
diff-filter: be more careful when looking for negative bits
diff.c: move the diff filter bits definitions up a bit
docs(diff): lose incorrect claim about `diff-files --diff-filter=A`
Interaction between fetch.negotiationAlgorithm and
feature.experimental configuration variables has been corrected.
* en/fetch-negotiation-default-fix:
repo-settings: rename the traditional default fetch.negotiationAlgorithm
repo-settings: fix error handling for unknown values
repo-settings: fix checking for fetch.negotiationAlgorithm=default
The build procedure has been taught to notice older versions of zlib
and enable our replacement uncompress2() automatically.
* ab/auto-detect-zlib-compress2:
compat: auto-detect if zlib has uncompress2()
A bug that made multi-pack bitmap and the object order out-of-sync,
making the .midx data corrupt, has been fixed.
* tb/midx-bitmap-corruption-fix:
pack-bitmap.c: gracefully fallback after opening pack/MIDX
midx: read `RIDX` chunk when present
t/lib-bitmap.sh: parameterize tests over reverse index source
t5326: move tests to t/lib-bitmap.sh
t5326: extract `test_rev_exists`
t5326: drop unnecessary setup
pack-revindex.c: instrument loading on-disk reverse index
midx.c: make changing the preferred pack safe
t5326: demonstrate bitmap corruption after permutation
"git log --remerge-diff" shows the difference from mechanical merge
result and the result that is actually recorded in a merge commit.
* en/remerge-diff:
diff-merges: avoid history simplifications when diffing merges
merge-ort: mark conflict/warning messages from inner merges as omittable
show, log: include conflict/warning messages in --remerge-diff headers
diff: add ability to insert additional headers for paths
merge-ort: format messages slightly different for use in headers
merge-ort: mark a few more conflict messages as omittable
merge-ort: capture and print ll-merge warnings in our preferred fashion
ll-merge: make callers responsible for showing warnings
log: clean unneeded objects during `log --remerge-diff`
show, log: provide a --remerge-diff capability
Problems identified by Coverity in the reftable code have been
corrected.
* hn/reftable-coverity-fixes:
reftable: add print functions to the record types
reftable: make reftable_record a tagged union
reftable: remove outdated file reftable.c
reftable: implement record equality generically
reftable: make reftable-record.h function signatures const correct
reftable: handle null refnames in reftable_ref_record_equal
reftable: drop stray printf in readwrite_test
reftable: order unittests by complexity
reftable: all xxx_free() functions accept NULL arguments
reftable: fix resource warning
reftable: ignore remove() return value in stack_test.c
reftable: check reftable_stack_auto_compact() return value
reftable: fix resource leak blocksource.c
reftable: fix resource leak in block.c error path
reftable: fix OOB stack write in print functions
The command line completion (in contrib/) learns to complete
arguments to give to the "git sparse-checkout" command.
* ld/sparse-index-bash-completion:
completion: handle unusual characters for sparse-checkout
completion: improve sparse-checkout cone mode directory completion
completion: address sparse-checkout issues
When "git fetch --prune" failed to prune the refs it wanted to
prune, the command issued error messages but exited with exit
status 0, which has been corrected.
* tg/fetch-prune-exit-code-fix:
fetch --prune: exit with error if pruning fails