While reviewing some dts diffs recently I noticed that the hunk header
logic was failing to find the containing node. This is because the regex
doesn't consider properties that may span multiple lines, i.e.
property = <something>,
<something_else>;
and it got hung up on comments inside nodes that look like the root node
because they start with '/*'. Add tests for these cases and update the
regex to find them. Maybe detecting the root node is too complicated but
forcing it to be a forward slash with any amount of whitespace up to an
open brace seemed OK. I tried to detect that a comment is in-between the
two parts but I wasn't happy with the result, so I just dropped it.
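For illustration only (hedged; not the exact test input added by this
change), the constructs in question look like:

    / {
            node@0 {
                    /* a comment that starts like the root node */
                    property = <something>,
                               <something_else>;
            };
    };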
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Frank Rowand <frowand.list@gmail.com>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
Reviewed-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
While parsing the parents of a commit, if we are able to parse an actual
oid but lookup_commit() fails on it (because we previously saw it in
this process as a different object type), we silently omit the parent
and do not report any error to the caller.
The caller has no way of knowing this happened, because even an empty
parent list is a valid parse result. As a result, it's possible to fool
our "rev-list" connectivity check into accepting a corrupted set of
objects.
There's a test for this case already in t6102, but unfortunately it has
a slight error. It creates a broken commit with a parent line pointing
to a blob, and then checks that rev-list notices the problem in two
cases:
1. the "lone" case: we traverse the broken commit by itself (here we
try to actually load the blob from disk and find out that it's not
a commit)
2. the "seen" case: we parse the blob earlier in the process, and then
when calling lookup_commit() we realize immediately that it's not a
commit
The "seen" variant for this test mistakenly parsed another commit
instead of the blob, meaning that we were actually just testing the
"lone" case again. Changing that reveals the breakage (and shows that
this fixes it).
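For illustration, a commit whose parent line points at a blob can be
manufactured roughly like this in a test (hedged sketch, not the exact
t6102 code; it assumes HEAD already has a parent line):

    blob=$(echo broken | git hash-object -w --stdin) &&
    git cat-file commit HEAD |
    sed "s/^parent .*/parent $blob/" |
    git hash-object -t commit -w --stdin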
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In 't0500-progress-display.sh' all tests running 'test-tool progress
--total=<N>' fail on big-endian systems, e.g. like this:
+ test-tool progress --total=3 Working hard
[...]
+ test_i18ncmp expect out
--- expect 2019-10-18 23:07:54.765523916 +0000
+++ out 2019-10-18 23:07:54.773523916 +0000
@@ -1,4 +1,2 @@
-Working hard: 33% (1/3)<CR>
-Working hard: 66% (2/3)<CR>
-Working hard: 100% (3/3)<CR>
-Working hard: 100% (3/3), done.
+Working hard: 0% (1/12884901888)<CR>
+Working hard: 0% (3/12884901888), done.
The reason for that bogus value is that '--total's parameter is parsed
via parse-options's OPT_INTEGER, which stores an 'int', into a uint64_t
variable [1], so on big-endian systems the bits of 3 end up in the
"wrong" (upper) bytes (12884901888 = 0x300000000).
Change the type of that variable from uint64_t to int, to match what
parse-options expects; in the tests of the progress output we won't
use values that don't fit into an int anyway.
[1] start_progress() expects the total number as an uint64_t, that's
why I chose the same type when declaring the variable holding the
value given on the command line.
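A standalone sketch (not Git code) of the mechanism: the bytes of an
'int' 3 written at the start of an eight-byte slot read back as
0x300000000 on big-endian machines.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            uint64_t total = 0;
            int parsed = 3;        /* what OPT_INTEGER actually stores */

            /* parse-options writes only sizeof(int) bytes */
            memcpy(&total, &parsed, sizeof(parsed));
            printf("%llu\n", (unsigned long long)total);
            /* little-endian: 3; big-endian: 12884901888 */
            return 0;
    }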
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
[jpag: Debian unstable/ppc64 (big-endian)]
Tested-By: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
[tz: Fedora s390x (big-endian)]
Tested-By: Todd Zullinger <tmz@pobox.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git stash save" lost local changes to submodules, which has been
corrected.
* jj/stash-reset-only-toplevel:
stash: avoid recursive hard reset on submodules
"git format-patch -o <outdir>" did an equivalent of "mkdir <outdir>"
not "mkdir -p <outdir>", which is being corrected.
* bw/format-patch-o-create-leading-dirs:
format-patch: create leading components of output directory
The builtin/notes.c::copy() function is prepared to handle either
one or two arguments given from the command line; when one argument
is given, to-obj defaults to HEAD.
bbb1b8a3 ("notes: check number of parameters to "git notes copy"",
2010-06-28) tried to make sure "git notes copy" (with *no* other
argument) does not dereference NULL by checking the number of
parameters, but it incorrectly insisted that we need two arguments,
instead of either one or two. This disabled the defaulting of to-obj
to HEAD.
Correct it.
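For reference, the two accepted forms are:

    git notes copy <from-object> <to-object>
    git notes copy <from-object>        # <to-object> defaults to HEAD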
Signed-off-by: Doan Tran Cong Danh <congdanhqx@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit bbb1b8a35a ("notes: check number of parameters to "git notes
copy"", 2010-06-28) added a test for too many or too few parameters
being provided to `git notes copy'.
However, the test only ensures that the command fails, but it doesn't
really check whether it fails because of the number of parameters.
If we accidentally lifted the check from our code base, the command
would still fail because the provided parameter is not a valid ref,
and the test would not catch that regression.
Correct it.
Signed-off-by: Doan Tran Cong Danh <congdanhqx@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When pushing more than one reference with the --atomic option, the
server is supposed to perform a single atomic transaction to update the
references, leaving them either all to succeed or all to fail. This
works fine when pushing locally or over SSH, but when pushing over HTTP,
we fail to pass the atomic capability to the remote side. In fact, we
have not reported this capability to any remote helpers during the life
of the feature.
Now normally, things happen to work nevertheless, since we actually
check for most types of failures, such as non-fast-forward updates, on
the client side, and just abort the entire attempt. However, if the
server side reports a problem, such as the inability to lock a ref, the
transaction isn't atomic, because we haven't passed the appropriate
capability over and the remote side has no way of knowing that we wanted
atomic behavior.
Fix this by passing the option from the transport code through to remote
helpers, and from the HTTP remote helper down to send-pack. With this
change, we can detect if the server side rejects the push and report
back appropriately. Note the difference in the messages: the remote
side reports "atomic transaction failed", while our own checking rejects
pushes with the message "atomic push failed".
Document the atomic option in the remote helper documentation, so other
implementers can implement it if they like.
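For example (URL and branch names illustrative), an atomic multi-ref
push over HTTP is now honored by the server:

    git push --atomic https://example.com/repo.git topic-1 topic-2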
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In 04005834ed ("log: fix coloring of certain octopus merge shapes",
2018-09-01) there is a fix for the coloring of dashes following an
octopus merge. It makes a distinction between the case where all parents
introduce a new column, versus the case where the first parent collapses
into an existing column:
| *-.          | *-.
| |\ \         | |\ \
| | | |        |/ / /
The latter case means that the columns for the merge parents begin one
place to the left in the `new_columns` array compared to the former
case.
However, the implementation only works if the commit's parents are kept
in order as they map onto the visual columns, as we get the colors by
iterating over `new_columns` as we print the dashes. In general, the
commit's parents can arbitrarily merge with existing columns, and change
their ordering in the process.
For example, in the following diagram, the number of each column
indicates which commit parent appears in each column.
| | *---.
| | |\ \ \
| | |/ / /
| |/| | /
| |_|_|/
|/| | |
3 1 0 2
If the columns are colored (red, green, yellow, blue), then the dashes
will currently be colored yellow and blue, whereas they should be blue
and red.
To fix this, we need to look up each column in the `mapping` array,
which before the `GRAPH_COLLAPSING` state indicates which logical column
is displayed in each visual column. This implementation is simpler as it
doesn't have any edge cases, and it also handles how left-skewed first
parents are now displayed:
| *-.
|/|\ \
| | | |
0 1 2 3
The color of the first dashes is always the color found in `mapping` two
columns to the right of the commit symbol. Because commits are displayed
after all edges have been collapsed together and the visual columns
match the logical ones, we can find the visual offset of the commit
symbol using `commit_index`.
Signed-off-by: James Coglan <jcoglan@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When a merge commit is printed and its final parent is the same commit
that occupies the column to the right of the merge, this results in a
kink in the displayed edges:
* |
|\ \
| |/
| *
Graphs containing these shapes can be hard to read, as the expansion to
the right followed immediately by collapsing back to the left creates a
lot of zig-zagging edges, especially when many columns are present.
We can improve this by eliminating the zig-zag and having the merge's
final parent edge fuse immediately with its neighbor:
* |
|\|
| *
This reduces the horizontal width for the current commit by 2, and
requires one less row, making the graph display more compact. Taken in
combination with other graph-smoothing enhancements, it greatly
compresses the space needed to display certain histories:
*
|\
| * *
| |\ |\
| | * | *
| | | | |\
| | \ | | *
| *-. \ | * |
| |\ \ \ => |/|\|
|/ / / / | | *
| | | / | * |
| | |/ | |/
| | * * /
| * | |/
| |/ *
* |
|/
*
One of the test cases here cannot be correctly rendered in Git v2.23.0;
it produces this output following commit E:
| | *-. \ 5_E
| | |\ \ \
| |/ / / /
| | | / _
| |_|/
|/| |
The new implementation makes sure that the rightmost edge in this
history is not left dangling as above.
Signed-off-by: James Coglan <jcoglan@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When a graph contains edges that are in the process of collapsing to the
left, but those edges cross a commit line, the effect is that the edges
have a jagged appearance:
*
|\
| *
| \
*-. \
|\ \ \
| | * |
| * | |
| |/ /
* | |
|/ /
* |
|/
*
We already take steps to smooth edges like this when they're expanding;
when an edge appears to the right of a merge commit marker on a
GRAPH_COMMIT line immediately following a GRAPH_POST_MERGE line, we
render it as a `\`:
* \
|\ \
| * \
| |\ \
We can make a similar improvement to collapsing edges, making them
easier to follow and giving the overall graph a feeling of increased
symmetry:
*
|\
| *
| \
*-. \
|\ \ \
| | * |
| * | |
| |/ /
* / /
|/ /
* /
|/
*
To do this, we introduce a new special case for edges on GRAPH_COMMIT
lines that immediately follow a GRAPH_COLLAPSING line. By retaining a
copy of the `mapping` array used to render the GRAPH_COLLAPSING line in
the `old_mapping` array, we can determine that an edge is collapsing
through the GRAPH_COMMIT line and should be smoothed.
Signed-off-by: James Coglan <jcoglan@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Following the introduction of "left-skewed" merges, which are merges
whose first parent fuses with another edge to its left, we have some
more edge cases to deal with in the display of commit and post-merge
lines.
The current graph code handles the following cases for edges appearing
to the right of the commit (*) on commit lines. A 2-way merge is usually
followed by vertical lines:
| | |
| * |
| |\ \
An octopus merge (more than two parents) is always followed by edges
sloping to the right:
| | \          | | \
| *-. \        | *---. \
| |\ \ \       | |\ \ \ \
A 2-way merge is followed by a right-sloping edge if the commit line
immediately follows a post-merge line for a commit that appears in the
same column as the current commit, or any column to the left of that:
| *            | * |
| |\           | |\ \
| * \          | | * \
| |\ \         | | |\ \
This commit introduces the following new cases for commit lines. If a
2-way merge skews to the left, then the edges to its right are always
vertical lines, even if the commit follows a post-merge line:
| | |          | |\
| * |          | * |
|/| |          |/| |
A commit with 3 parents that skews left is followed by vertical edges:
| | |
| * |
|/|\ \
If a 3-way left-skewed merge commit appears immediately after a
post-merge line, then it may be followed by right-sloping edges, just
like a 2-way merge that is not skewed.
| |\
| * \
|/|\ \
Octopus merges with 4 or more parents that skew to the left will always
be followed by right-sloping edges, because the existing columns need to
expand around the merge.
| | \
| *-. \
|/|\ \ \
On post-merge lines, usually all edges following the current commit
slope to the right:
| * | |
| |\ \ \
However, if the commit is a left-skewed 2-way merge, the edges to its
right remain vertical. We also need to display a space after the
vertical line descending from the commit marker, whereas this line would
normally be followed by a backslash.
| * | |
|/| | |
If a left-skewed merge has more than 2 parents, then the edges to its
right are still sloped as they bend around the edges introduced by the
merge.
| * | |
|/|\ \ \
To handle these new cases, we need to know not just how many parents
each commit has, but how many new columns it adds to the display; this
quantity is recorded in the `edges_added` field for the current commit,
and `prev_edges_added` field for the previous commit.
Here, "column" refers to visual columns, not the logical columns of the
`columns` array. This is because even if all the commit's parents end up
fusing with existing edges, they initially introduce distinct edges in
the commit and post-merge lines before those edges collapse. For
example, a 3-way merge whose 2nd and 3rd parents fuse with existing
edges still introduces 2 visual columns that affect the display of edges
to their right.
| | | \
| | *-. \
| | |\ \ \
| |_|/ / /
|/| | / /
| | |/ /
| |/| |
| | | |
This merge does not introduce any *logical* columns; there are 4 edges
before and after this commit once all edges have collapsed. But it does
initially introduce 2 new edges that need to be accommodated by the
edges to their right.
Signed-off-by: James Coglan <jcoglan@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Currently, when we display a merge whose first parent is already present
in a column to the left of the merge commit, we display the first parent
as a vertical pipe `|` in the GRAPH_POST_MERGE line and then immediately
enter the GRAPH_COLLAPSING state. The first-parent line tracks to the
left and all the other parent lines follow it; this creates a "kink" in
those lines:
| *---.
| |\ \ \
|/ / / /
| | | *
This change tidies the display of such commits such that if the first
parent appears to the left of the merge, we render it as a `/` and the
second parent as a `|`. This reduces the horizontal and vertical space
needed to render the merge, and makes the resulting lines easier to
read.
| *-.
|/|\ \
| | | *
If the first parent is separated from the merge by several columns, a
horizontal line is drawn in a similar manner to how the GRAPH_COLLAPSING
state displays the line.
| | | *-.
| |_|/|\ \
|/| | | | *
This effect is applied to both "normal" two-parent merges, and to
octopus merges. It also reduces the vertical space needed for pre-commit
lines, as the merge occupies one less column than usual.
Before:          After:
| *              | *
| |\             | |\
| | \            | * \
| |  \           |/|\ \
| *-. \
| |\ \ \
Signed-off-by: James Coglan <jcoglan@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The commits following this one introduce a series of improvements to the
layout of graphs, tidying up a few edge cases, namely:
- merge whose first parent fuses with an existing column to the left
- merge whose last parent fuses with its immediate neighbor on the right
- edges that collapse to the left above and below a commit line
This test case exemplifies these cases and provides a motivating example
of the kind of history I'm aiming to clear up.
The first parent of merge E is the same as the parent of H, so those
edges fuse together.
* H
|
| *-. E
| |\ \
|/ / /
|
* B
We can "skew" the display of this merge so that it doesn't introduce
additional columns that immediately collapse:
* H
|
| * E
|/|\
|
* B
The last parent of E is D, the same as the parent of F which is the edge
to the right of the merge.
* F
|
\
*-. \ E
|\ \ \
/ / / /
| /
|/
* D
The two edges leading to D could be fused sooner: rather than expanding
the F edge around the merge and then letting the edges collapse, the F
edge could fuse with the E edge in the post-merge line:
* F
|
\
*-. | E
|\ \|
/ / /
|
* D
If this is combined with the "skew" effect above, we get a much cleaner
graph display for these edges:
* F
|
* | E
/|\|
|
* D
Finally, the edge leading from C to A appears jagged as it passes
through the commit line for B:
| * | C
| |/
* | B
|/
* A
This can be smoothed out so that such edges are easier to read:
| * | C
| |/
* / B
|/
* A
Signed-off-by: James Coglan <jcoglan@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The specification of promisor packfiles (in partial-clone.txt) states
that the .promisor files that accompany packfiles do not matter (just
like .keep files), so whenever a packfile is fetched from the promisor
remote, Git has been writing empty .promisor files. But these files
could contain more useful information.
So instead of writing empty files, write the refs fetched to these
files. This makes it easier to debug issues with partial clones, as we
can identify what refs (and their associated hashes) were fetched at the
time the packfile was downloaded, and if necessary, compare those hashes
against what the promisor remote reports now.
This is implemented by teaching fetch-pack to write its own non-empty
.promisor file whenever it knows the name of the pack's lockfile. This
covers the case wherein the user runs "git fetch" with an internal
protocol or HTTP protocol v2 (fetch_refs_via_pack() in transport.c sets
lock_pack) and with HTTP protocol v0/v1 (fetch_git() in remote-curl.c
passes "--lock-pack" to "fetch-pack").
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Acked-by: Josh Steadmon <steadmon@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Prior to commit 356ee4659b ("sequencer: try to commit without forking
'git commit'", 2017-11-24) the sequencer would always run the
post-commit hook after each pick or revert as it forked `git commit` to
create the commit. The conversion to committing without forking `git
commit` omitted to call the post-commit hook after creating the commit.
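For illustration only (not part of this change), a minimal post-commit
hook that the sequencer is now expected to run again after each pick or
revert:

    #!/bin/sh
    # .git/hooks/post-commit (example; must be executable)
    echo "post-commit ran for $(git rev-parse --short HEAD)" >>hook.log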
Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Some tests were calling set_fake_editor to ensure they had a sane no-op
editor set. Now that all the editor setting is done in subshells these
tests can rely on EDITOR=: and so do not need to call set_fake_editor.
Also add a test at the end to detect any future additions messing with
the exported value of $EDITOR.
Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
As $EDITOR is exported, setting it in one test affects all subsequent
tests. Avoid this by always setting it in a subshell. This commit leaves
20 calls to set_fake_editor that are not in subshells as they can
safely be removed in the next commit once all the other editor setting
is done inside subshells.
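The resulting pattern looks roughly like this (illustrative sketch, not
a verbatim hunk from this patch):

    test_expect_success 'example' '
            (
                    set_fake_editor &&
                    FAKE_LINES="2 1" git rebase -i HEAD~2
            )
    '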
I have moved the call to set_fake_editor in some tests so it comes
immediately before the call to 'git rebase' to avoid moving unrelated
commands into the subshell. In one case ('rebase -ix with
--autosquash') the call to set_fake_editor is moved past an invocation
of 'git rebase'. This is safe as that invocation of 'git rebase'
requires EDITOR=: or EDITOR=fake-editor.sh without FAKE_LINES being
set which will be the case as the preceding tests either set their
editor in a subshell or call set_fake_editor without setting FAKE_LINES.
In one test ('auto-amend only edited commits after "edit"') a call
to test_tick is now in a subshell. I think this is OK as it is there
to set the date for the next commit which is executed in the same
subshell rather than updating GIT_COMMITTER_DATE for later tests (the
next test calls test_tick before doing anything else).
Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Neither of the commands executed in the subshell change any shell
variables or the current directory so there is no need for them to be
executed in a subshell.
Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Before, when format-patch generated a cover letter, only the body would
be populated with a branch's description while the subject would be
populated with placeholder text. However, users may want to have the
subject of their cover letter automatically populated in the same way.
Teach format-patch to accept the `--cover-from-description` option and
corresponding `format.coverFromDescription` config, allowing users to
populate different parts of the cover letter (including the subject
now).
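For example (the mode shown is one of the supported values as I
understand them):

    git format-patch --cover-letter --cover-from-description=subject -3
    # or via configuration:
    git config format.coverFromDescription subject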
Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git range-diff" failed to handle mode-only change, which has been
corrected.
* tg/range-diff-output-update:
range-diff: don't segfault with mode-only changes
Pretty-printed command line formatter (used in e.g. reporting the
command being run by the tracing API) had a bug that lost an
argument that is an empty string, which has been corrected.
* gs/sq-quote-buf-pretty:
sq_quote_buf_pretty: don't drop empty arguments
The trace2 output, when sent to files in a designated
directory, can populate the directory with too many files; a
mechanism is introduced to set the maximum number of files and
discard further logs when the maximum is reached.
* js/trace2-cap-max-output-files:
trace2: write discard message to sentinel files
trace2: discard new traces if target directory has too many files
docs: clarify trace2 version invariants
docs: mention trace2 target-dir mode in git-config
"git log --graph" for an octopus merge is sometimes colored
incorrectly, which is demonstrated and documented but not yet
fixed.
* dl/octopus-graph-bug:
t4214: demonstrate octopus graph coloring failure
t4214: explicitly list tags in log
t4214: generate expect in their own test cases
t4214: use test_merge
test-lib: let test_merge() perform octopus merges
Updates to fast-import/export.
* en/fast-imexport-nested-tags:
fast-export: handle nested tags
t9350: add tests for tags of things other than a commit
fast-export: allow user to request tags be marked with --mark-tags
fast-export: add support for --import-marks-if-exists
fast-import: add support for new 'alias' command
fast-import: allow tags to be identified by mark labels
fast-import: fix handling of deleted tags
fast-export: fix exporting a tag and nothing else
CI updates.
* js/azure-pipelines-msvc:
ci: also build and test with MS Visual Studio on Azure Pipelines
ci: really use shallow clones on Azure Pipelines
tests: let --immediate and --write-junit-xml play well together
test-tool run-command: learn to run (parts of) the testsuite
vcxproj: include more generated files
vcxproj: only copy `git-remote-http.exe` once it was built
msvc: work around a bug in GetEnvironmentVariable()
msvc: handle DEVELOPER=1
msvc: ignore some libraries when linking
compat/win32/path-utils.h: add #include guards
winansi: use FLEX_ARRAY to avoid compiler warning
msvc: avoid using minus operator on unsigned types
push: do not pretend to return `int` from `die_push_simple()`
"git fetch --jobs=<n>" allowed <n> parallel jobs when fetching
submodules, but this did not apply to "git fetch --multiple" that
fetches from multiple remote repositories. It now does.
* js/fetch-jobs:
fetch: let --jobs=<n> parallelize --multiple, too
The merge-recursive machinery is one of the most complex parts of
the system that accumulated cruft over time. This large series
cleans up the implementation quite a bit.
* en/merge-recursive-cleanup: (26 commits)
merge-recursive: fix the fix to the diff3 common ancestor label
merge-recursive: fix the diff3 common ancestor label for virtual commits
merge-recursive: alphabetize include list
merge-recursive: add sanity checks for relevant merge_options
merge-recursive: rename MERGE_RECURSIVE_* to MERGE_VARIANT_*
merge-recursive: split internal fields into a separate struct
merge-recursive: avoid losing output and leaking memory holding that output
merge-recursive: comment and reorder the merge_options fields
merge-recursive: consolidate unnecessary fields in merge_options
merge-recursive: move some definitions around to clean up the header
merge-recursive: rename merge_options argument to opt in header
merge-recursive: rename 'mrtree' to 'result_tree', for clarity
merge-recursive: use common name for ancestors/common/base_list
merge-recursive: fix some overly long lines
cache-tree: share code between functions writing an index as a tree
merge-recursive: don't force external callers to do our logging
merge-recursive: remove useless parameter in merge_trees()
merge-recursive: exit early if index != head
Ensure index matches head before invoking merge machinery, round N
merge-recursive: remove another implicit dependency on the_repository
...
git stash push does not recursively stash submodules, but if
submodule.recurse is set, it may recursively reset --hard them. Having
only the destructive action recurse is likely to be surprising
behaviour, and unlikely to be desirable, so the easiest fix should be to
ensure that the call to git reset --hard never recurses into submodules.
This matches the behavior of check_changes_tracked_files, which ignores
submodules.
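Illustrative sketch of the intended invocation (not the verbatim
patch):

    git reset --hard -q --no-recurse-submodules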
Signed-off-by: Jakob Jarmar <jakob@jarmar.se>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
'git format-patch -o <outdir>' did an equivalent of 'mkdir <outdir>'
not 'mkdir -p <outdir>', which is being corrected.
Avoid the usage of 'adjust_shared_perm' on the leading directories, which
may have security implications. This is achieved by temporarily disabling
'core.sharedRepository', like 'git init' does.
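For example (paths illustrative), the following now succeeds even if
'patches/' does not exist yet:

    git format-patch -o patches/v2 origin/master..topic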
Signed-off-by: Bert Wesarg <bert.wesarg@googlemail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
While doing some testing with fsmonitor enabled I found
that git commands would segfault after staging and
unstaging an untracked file. Looking at the crash it
appeared that fsmonitor_ewah_callback was attempting to
adjust bits beyond the bounds of the index cache.
Digging into how this could happen it became clear that
the fsmonitor extension must have been written with
more bits than there were entries in the index. The
root cause ended up being that fill_fsmonitor_bitmap was
populating fsmonitor_dirty with bits for all entries in
the index, even those that had been marked for removal.
To solve this problem fill_fsmonitor_bitmap has been
updated to skip entries with the CE_REMOVE flag set.
With this change the bits written for the fsmonitor
extension will be consistent with the index entries
written by do_write_index. Additionally, BUG checks
have been added to detect if the number of bits in
fsmonitor_dirty should ever exceed the number of
entries in the index again.
Another option that was considered was moving the call
to fill_fsmonitor_bitmap closer to where the index is
written (and where the fsmonitor extension itself is
written). However, that did not work as the
fsmonitor_dirty bitmap must be filled before the index
is split during writing.
Signed-off-by: William Baker <William.Baker@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Change test 'find value_list for a key from a configset' to redirect the
result to 'expect' instead of 'except' which was a typo.
With this change, the test case actually fails because it uses
`configset_get_value`. Clearly, this was intended to be
`configset_get_value_multi` since the test expects a list of values
instead of a single value, so let's fix that, too.
Signed-off-by: Tanay Abhra <tanayabh@gmail.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git add -i" has been taught to show the total number of hunks and
the hunks that have been processed so far when showing prompts.
* kt/add-i-progress:
add -i: show progress counter in the prompt
"git stash apply" in a subdirectory of a secondary worktree failed
to access the worktree correctly, which has been corrected.
* js/stash-apply-in-secondary-worktree:
stash apply: report status correctly even in a worktree's subdirectory
"git range-diff" segfaulted when diff.noprefix configuration was
used, as it blindly expected the patch it internally generates to
have the standard a/ and b/ prefixes. The command now forces the
internal patch to be built without any prefix, not to be affected
by any end-user configuration.
* js/range-diff-noprefix:
range-diff: internally force `diff.noprefix=true`
A few simplifications and bugfixes to the PCRE interface.
* ab/pcre-jit-fixes:
grep: under --debug, show whether PCRE JIT is enabled
grep: do not enter PCRE2_UTF mode on fixed matching
grep: stress test PCRE v2 on invalid UTF-8 data
grep: create a "is_fixed" member in "grep_pat"
grep: consistently use "p->fixed" in compile_regexp()
grep: stop using a custom JIT stack with PCRE v1
grep: stop "using" a custom JIT stack with PCRE v2
grep: remove overly paranoid BUG(...) code
grep: use PCRE v2 for optimized fixed-string search
grep: remove the kwset optimization
grep: drop support for \0 in --fixed-strings <pattern>
grep: make the behavior for NUL-byte in patterns sane
grep tests: move binary pattern tests into their own file
grep tests: move "grep binary" alongside the rest
grep: inline the return value of a function call used only once
t4210: skip more command-line encoding tests on MinGW
grep: don't use PCRE2?_UTF8 with "log --encoding=<non-utf8>"
log tests: test regex backends in "--encode=<enc>" tests
"git rebase -i" showed a wrong HEAD while "reword" opened the editor.
* pw/rebase-i-show-HEAD-to-reword:
sequencer: simplify root commit creation
rebase -i: check for updated todo after squash and reword
rebase -i: always update HEAD before rewording
"git clean" fixes.
* en/clean-nested-with-ignored:
dir: special case check for the possibility that pathspec is NULL
clean: fix theoretical path corruption
clean: rewrap overly long line
clean: avoid removing untracked files in a nested git repository
clean: disambiguate the definition of -d
git-clean.txt: do not claim we will delete files with -n/--dry-run
dir: add commentary explaining match_pathspec_item's return value
dir: if our pathspec might match files under a dir, recurse into it
dir: make the DO_MATCH_SUBMODULE code reusable for a non-submodule case
dir: also check directories for matching pathspecs
dir: fix off-by-one error in match_pathspec_item
dir: fix typo in comment
t7300: add testcases showing failure to clean specified pathspecs
Currently, the tests for GIT_SKIP_TESTS do not cover the situation where
we skip an entire test suite. The tests also do not cover the situation
where we have GIT_SKIP_TESTS defined but the test suite does not match.
Add two test cases so we cover this blind spot.
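The two situations look roughly like this (illustrative):

    GIT_SKIP_TESTS=t0000 ./t0000-basic.sh    # skips the entire suite
    GIT_SKIP_TESTS=t9999 ./t0000-basic.sh    # no match, nothing skipped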
Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In 6bd26f58ea (t4014: use test_line_count() where possible, 2019-08-27),
we converted many test cases to take advantage of the test_line_count()
function. In one conversion, we inverted the expected and actual value
as tested by test_line_count(). Although functionally correct, if
format-patch ever produced incorrect output, the debugging output would
be a bunch of hashes which would be difficult to debug.
Invert the expected and actual values provided to test_line_count() so
that if format-patch produces incorrect output, the debugging output
will be a list of human-readable files instead.
Helped-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In ef283b3699 ("apply: make parse_git_diff_header public", 2019-07-11)
the 'parse_git_diff_header' function was made public and useable by
callers outside of apply.c.
However it was missed that its (then) only caller, 'find_header', did
some error handling and completed 'struct patch' appropriately.
range-diff then started using this function, and tried to handle this
appropriately itself, but fell short in some cases. This in turn
would lead to range-diff segfaulting when there are mode-only changes
in a range.
Move the error handling and completing of the struct into the
'parse_git_diff_header' function, so other callers can take advantage
of it. This fixes the segfault in 'git range-diff'.
Reported-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Thomas Gummerer <t.gummerer@gmail.com>
Acked-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Empty arguments passed on the command line can be represented by '';
however, sq_quote_buf_pretty was incorrectly dropping these
arguments altogether. Fix this problem by ensuring that such
arguments are emitted as '' instead.
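For example, pretty-printing the command `git foo '' bar` (arguments
illustrative) should preserve the empty argument:

    before the fix: git foo bar
    after the fix:  git foo '' bar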
Signed-off-by: Garima Singh <garima.singh@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A bug in merge-recursive code that triggers when a branch with a
symbolic link is merged with a branch that replaces it with a
directory has been fixed.
* jt/merge-recursive-symlink-is-not-a-dir-in-way:
merge-recursive: symlink's descendants not in way
The code to parse and use the commit-graph file has been made more
robust against corrupted input.
* tb/commit-graph-harden:
commit-graph.c: handle corrupt/missing trees
commit-graph.c: handle commit parsing errors
t/t5318: introduce failing 'git commit-graph write' tests
The cache-tree code has been taught to be less aggressive in
attempting to see if a tree object it computed already exists in
the repository.
* jt/cache-tree-avoid-lazy-fetch-during-merge:
cache-tree: do not lazy-fetch tentative tree
The object name parser for "Nth parent" syntax has been made more
robust against integer overflows.
* rs/nth-parent-parse:
sha1-name: check for overflow of N in "foo^N" and "foo~N"
rev-parse: demonstrate overflow of N for "foo^N" and "foo~N"
The "upload-pack" (the counterpart of "git fetch") needs to disable
commit-graph when responding to a shallow clone/fetch request, but
the way this was done made Git panic, which has been corrected.
* jk/disable-commit-graph-during-upload-pack:
upload-pack: disable commit graph more gently for shallow traversal
commit-graph: bump DIE_ON_LOAD check to actual load-time
"git log --decorate-refs-exclude=<pattern>" was incorrectly
overruled when the "--simplify-by-decoration" option is used, which
has been corrected.
* rs/simplify-by-deco-with-deco-refs-exclude:
log-tree: call load_ref_decorations() in get_name_decoration()
log: test --decorate-refs-exclude with --simplify-by-decoration
The name of the blob object that stores the filter specification
for sparse cloning/fetching was interpreted in a wrong place in the
code, causing Git to abort.
* jk/partial-clone-sparse-blob:
list-objects-filter: use empty string instead of NULL for sparse "base"
list-objects-filter: give a more specific error sparse parsing error
list-objects-filter: delay parsing of sparse oid
t5616: test cloning/fetching with sparse:oid=<oid> filter
"git stash" learned to write refreshed index back to disk.
* tg/stash-refresh-index:
stash: make sure to write refreshed cache
merge: use refresh_and_write_cache
factor out refresh_and_write_cache function
Comments stating that "struct hashmap_entry" must be the first
member in a struct are no longer valid.
Suggested-by: Phillip Wood <phillip.wood123@gmail.com>
Signed-off-by: Eric Wong <e@80x24.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since these macros already take a `keyvar' pointer of a known type,
we can rely on OFFSETOF_VAR to get the correct offset without
relying on non-portable `__typeof__' and `offsetof'.
Argument order is also rearranged, so `keyvar' and `member' are
sequential as they are used as: `keyvar->member'
Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
While we cannot rely on a `__typeof__' operator being portable
enough to use with `offsetof', we can calculate the pointer offset
using an existing pointer and the address of a member via
pointer arithmetic for compilers without `__typeof__'.
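A hedged sketch of the idea (illustrative; the real macro lives in
Git's compat header and may differ in detail):

    #include <stddef.h>
    #include <stdint.h>

    #ifdef __GNUC__        /* __typeof__ available */
    #define OFFSETOF_VAR(ptr, member) offsetof(__typeof__(*ptr), member)
    #else                  /* fallback: derive the offset from the pointer */
    #define OFFSETOF_VAR(ptr, member) \
            ((uintptr_t)&(ptr)->member - (uintptr_t)(ptr))
    #endif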
This allows us to simplify usage of hashmap iterator macros
by not having to specify a type when a pointer of that type
is already given.
In the future, list iterator macros (e.g. list_for_each_entry)
may also be implemented using OFFSETOF_VAR to save hackers the
trouble of using container_of/list_entry macros and without
relying on non-portable `__typeof__'.
v3: use `__typeof__' to avoid clang warnings
Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
`hashmap_free_entries' behaves like `container_of' and passes
the offset of the hashmap_entry struct to the internal
`hashmap_free_' function, allowing the function to free any
struct pointer regardless of where the hashmap_entry field
is located.
`hashmap_free' no longer takes any arguments aside from
the hashmap itself.
Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
And add *_entry variants to perform container_of as necessary
to simplify most callers.
Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Inspired by list_for_each_entry in the Linux kernel.
Once again, these are somewhat compromised usability-wise
by compilers lacking __typeof__ support.
Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Another step in eliminating the requirement of hashmap_entry
being the first member of a struct.
Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Using `container_of' can be verbose and choosing names for
intermediate "struct hashmap_entry" pointers is a hard problem.
So introduce "*_entry" APIs inspired by similar linked-list
APIs in the Linux kernel.
Unfortunately, `__typeof__' is not portable C, so we need an
extra parameter to specify the type.
Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is a step towards removing the requirement for
hashmap_entry being the first field of a struct.
Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is less error-prone than "void *" as the compiler now
detects invalid types being passed.
Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is less error-prone than "void *" as the compiler now
detects invalid types being passed.
Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is less error-prone than "const void *" as the compiler
now detects invalid types being passed.
Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
C compilers do type checking to make life easier for us. So
rely on that and update all hashmap_entry_init callers to take
"struct hashmap_entry *" to avoid future bugs while improving
safety and readability.
Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Some tests print a file before searching for a pattern using
test_i18ngrep. This is useful when debugging tests with --verbose when
the pattern is not found as expected.
Since 63b1a175ee (t: make 'test_i18ngrep' more informative on failure,
2018-02-08) test_i18ngrep already shows the contents of a file that
doesn't match the expected pattern, though.
So don't bother doing the same unconditionally up-front. The contents
are not interesting if the expected pattern is found, and showing it
twice if it doesn't match is of no use.
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The testsuite will eventually learn how to run using an algorithm other
than SHA-1. In preparation for this, teach the test_oid family of
functions how to look up the empty blob and empty tree values so they
can be used.
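Hedged usage example (the key names reflect my understanding of the
new entries):

    empty_blob_oid=$(test_oid empty_blob) &&
    empty_tree_oid=$(test_oid empty_tree)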
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The test_oid function provides a mechanism for looking up hash algorithm
information, but it doesn't specify a way to discover the hash algorithm
name. Knowing this information is useful if one wants to invoke the
test-tool helper for the algorithm in use, such as in our pack
generation library.
While it's currently possible to inspect the global variable holding
this value, in the future we'll allow specifying an algorithm for
storage and an algorithm for display, so it's better to abstract this
value away. To assist with this, provide a named entry in the
algorithm-specific lookup table that prints the algorithm in use.
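Hedged usage example (assuming the new entry is named 'algo'):

    algo=$(test_oid algo)          # e.g. "sha1" or "sha256"
    test-tool "$algo" </dev/null   # invoke the matching helper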
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When the `--immediate` option is in effect, any test failure will
immediately exit the test script. Together with `--write-junit-xml`, we
will want the JUnit-style `.xml` file to be finalized (and not leave the
XML incomplete). Let's make it so.
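For example:

    ./t0000-basic.sh --immediate --write-junit-xml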
This comes in particularly handy when trying to debug via Azure
Pipelines, where the JUnit-style XML is consumed to present the test
results in an informative and helpful way.
While at it, also handle the `error()` code path.
The only remaining code path that sets `GIT_EXIT_OK` happens whenever
the trash directory could not be set up, i.e. long before the JUnit XML
was written, therefore we should _not_ try to finalize that XML in that
case.
It is tempting to change the `immediate` code path to just hand off to
`error`, simplifying the code in the process. That would, however,
result in a change of behavior (an additional error message) in the test
suite, which is outside of the purview of the current patch series: its
goal is to allow building Git with Visual Studio and testing it with a
portable version of Git for Windows.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Git for Windows jumps through hoops to provide a development environment
that allows building Git and running its test suite. To that end, an
entire MSYS2 system, including GNU make and GCC, is offered as "the Git
for Windows SDK". It does come at a price: an initial download of said
SDK weighs in with several hundreds of megabytes, and the unpacked SDK
occupies ~2GB of disk space.
A much more native development environment on Windows is Visual Studio.
To help contributors use that environment, we already have a Makefile
target `vcxproj` that generates a commit with project files (and other
generated files), and Git for Windows' `vs/master` branch is
continuously re-generated using that target.
The idea is to allow building Git in Visual Studio, and to run
individual tests using a Portable Git.
The one missing thing is a way to run the entire test suite: neither
`make` nor `prove` are required to run Git, therefore Git for Windows
does not support those commands in the Portable Git.
To help with that, add a simple test helper that exercises the
`run_processes_parallel()` function to allow for running test scripts in
parallel (which is really necessary, especially on Windows, as Git's
test suite takes such a long time to run).
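Hedged usage sketch (the exact subcommand and arguments may differ):

    test-tool run-command testsuite t[0-9]*.sh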
This will also come in handy for the upcoming change to our Azure
Pipeline: we will use this helper in a Portable Git to test the Visual
Studio build of Git.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When Git wants to spawn a child Git process inside a worktree's
subdirectory while `GIT_DIR` is set, we need to take care of specifying
the work tree's top-level directory explicitly because it cannot be
discovered: the current directory is _not_ the top-level directory of
the work tree, and neither is it inside the parent directory of
`GIT_DIR`.
This fixes the problem where `git stash apply` would report pretty much
everything deleted or untracked when run inside a worktree's
subdirectory.
To make sure that we do not introduce the "reverse problem", i.e. when
`GIT_WORK_TREE` is defined but `GIT_DIR` is not, we simply make sure
that both are set.
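Conceptually (paths purely illustrative), the spawned child now sees
both variables:

    GIT_DIR=/repos/main/.git/worktrees/wt \
    GIT_WORK_TREE=/repos/wt \
    git status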
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
So far, `--jobs=<n>` only parallelizes submodule fetches/clones, not
`--multiple` fetches, which is unintuitive, given that the option's name
does not say anything about submodules in particular.
Let's change that. With this patch, also fetches from multiple remotes
are parallelized.
For backwards-compatibility (and to prepare for a use case where
submodule and multiple-remote fetches may need different parallelization
limits), the config setting `submodule.fetchJobs` still only controls
the submodule part of `git fetch`, while the newly-introduced setting
`fetch.parallel` controls both (but can be overridden for submodules
with `submodule.fetchJobs`).
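For example (remote names illustrative):

    git fetch --multiple --jobs=4 origin fork1 fork2
    # or via configuration:
    git config fetch.parallel 4
    git config submodule.fetchJobs 8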
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a new "discard" event type for trace2 event destinations. When the
trace2 file count check creates a sentinel file, it will include the
normal trace2 output in the sentinel, along with this new discard
event.
Writing this message into the sentinel file is useful for tracking how
often the file count check triggers in practice.
Bump up the event format version since we've added a new event type.
Signed-off-by: Josh Steadmon <steadmon@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
trace2 can write files into a target directory. With heavy usage, this
directory can fill up with files, causing difficulty for
trace-processing systems.
This patch adds a config option (trace2.maxFiles) to set a maximum
number of files that trace2 will write to a target directory. The
following behavior is enabled when the maxFiles is set to a positive
integer:
When trace2 would write a file to a target directory, first check
whether or not the traces should be discarded. Traces should be
discarded if:
* there is a sentinel file declaring that there are too many files
* OR, the number of files exceeds trace2.maxFiles.
In the latter case, we create a sentinel file named git-trace2-discard
to speed up future checks.
The assumption is that a separate trace-processing system is dealing
with the generated traces; once it processes and removes the sentinel
file, it should be safe to generate new trace files again.
The default value for trace2.maxFiles is zero, which disables the file
count check.
The config can also be overridden with a new environment variable:
GIT_TRACE2_MAX_FILES.
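For example (target directory path illustrative):

    git config trace2.maxFiles 100
    # or, for a single run:
    GIT_TRACE2_MAX_FILES=100 GIT_TRACE2_EVENT=/tmp/traces/ git status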
Signed-off-by: Josh Steadmon <steadmon@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The graph coloring logic for octopus merges currently has a bug. This
can be seen in git.git with 74c7cfa875 (Merge of
http://members.cox.net/junkio/git-jc.git, 2005-05-05), whose second
parent is 211232bae6 (Octopus merge of the following five patches.,
2005-05-05).
If one runs
git log --graph 74c7cfa875
one can see that the octopus merge is colored incorrectly. In
particular, the horizontal dashes are off by one color. Each horizontal
dash should be the color of the line to their bottom-right. Instead, they
are currently the color of the line to their bottom.
Demonstrate this breakage with a few sets of test cases. These test
cases should show not only simple cases of the bug occurring but trickier
situations that may not be handled properly in any attempt to fix the
bug.
While we're at it, include a passing test case as a canary in case an
attempt to fix the bug breaks existing operation.
Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In a future test case, we will be extending the commit graph. As a
result, explicitly list the tags that will generate the graph so that
when future additions are made, the current graph illustrations won't be
affected.
Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Before, the expect files of the test case were being generated in the
setup method. However, it would make more sense to generate these files
within the test cases that actually use them so that it's obvious to
future readers where the expected values are coming from.
Move the generation of the expect files in their own respective test
cases.
While we're at it, we want to establish a pattern in this test suite
where, first, a non-colored test case is given and then, immediately
after, the colored version is given.
Switch test cases "log --graph with tricky octopus merge, no color" and
"log --graph with tricky octopus merge with colors" so that the "no
color" version appears first.
This patch is best viewed with `--color-moved`.
Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the previous commit, we extended test_merge() so that it could
perform octopus merges. Now that the restriction is lifted, use
test_merge() to perform the octopus merge instead of manually
duplicating test_merge() functionality.
Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Currently test_merge() only allows developers to merge in one branch.
However, this restriction is artificial and there is no reason why it
needs to be this way.
Extend test_merge() to allow the specification of multiple branches so
that octopus merges can be performed.
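Hedged example of the extended helper (tag and branch names
illustrative):

    test_merge octopus-tag branch-a branch-b branch-c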
Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Multiple changes here:
* add a test for a tag of a blob
* add a test for a tag of a tag of a commit
* add a comment to the tests for (possibly nested) tags of trees,
making it clear that these tests are doing much less than you might
expect
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a new option, --mark-tags, which will output mark identifiers with
each tag object. This improves the incremental export story with
--export-marks since it will allow us to record that annotated tags have
been exported, and it is also needed as a step towards supporting nested
tags.
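For example (paths illustrative):

    git fast-export --mark-tags --export-marks=/tmp/marks --all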
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
fast-import has support for both an --import-marks flag and an
--import-marks-if-exists flag, the latter of which will not die() if the
file does not exist. fast-export only had support for an --import-marks
flag; add an --import-marks-if-exists flag for consistency.
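For example (file name illustrative), matching fast-import's behavior:

    git fast-export --import-marks-if-exists=marks.dat \
            --export-marks=marks.dat --all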
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
fast-export and fast-import have nice --import-marks flags which allow
for incremental migrations. However, if there is a mark in
fast-export's file of marks without a corresponding mark in the one for
fast-import, then we run the risk that fast-export tries to send new
objects relative to a mark it knows about but fast-import does not,
causing fast-import to fail.
This arises in practice when there is a filter of some sort running
between the fast-export and fast-import processes which prunes some
commits programmatically. Provide such a filter with the ability to
alias pruned commits to their most recent non-pruned ancestor.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Mark identifiers are used in fast-export and fast-import to provide a
label to refer to earlier content. Blobs are given labels because they
need to be referenced in the commits where they first appear with a
given filename, and commits are given labels because they can be the
parents of other commits. Tags were never given labels, probably
because they were viewed as unnecessary, but that presents two problems:
1. It leaves us without a way of referring to previous tags if we
want to create a tag of a tag (or higher nestings).
2. It leaves us with no way of recording that a tag has already been
imported when using --export-marks and --import-marks.
Fix these problems by allowing an optional mark label for tags.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If our input stream includes a tag which is later deleted, we were not
properly deleting it. We did have a step which would delete it, but we
left a tag in the tag list noting that it needed to be updated, and the
updating of annotated tags occurred AFTER ref deletion. So, when we
record that a tag needs to be deleted, also remove it from the list of
annotated tags to update.
While this has probably not happened in practice, it will come up
more often in order to support nested tags.
we either need to give temporary names to the intermediate tags and then
delete them, or else we need to use the final name for the intermediate
tags. If we use the final name for the intermediate tags, then in order
to keep the sanity check that someone doesn't try to update the same tag
twice, we need to delete the ref after creating the intermediate tag.
So, either way nested tags imply the need to delete temporary inner tag
references.
Helped-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Report the current hunk number and the total number of hunks for the
current file in the prompt. Also adjust the expected output in
some tests to match.
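Hedged illustration of the new prompt (exact format may differ):

    (2/5) Stage this hunk [y,n,q,a,d,j,J,g,/,e,?]?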
Signed-off-by: Kunal Tyagi <tyagi.kunal@live.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When parsing the diffs, `range-diff` expects to see the prefixes `a/`
and `b/` in the diff headers.
These prefixes can be forced off via the config setting
`diff.noprefix=true`. As `range-diff` is not prepared for that
situation, this will cause a segmentation fault.
Let's avoid that by passing the `--no-prefix` option to the `git log`
process that generates the diffs that `range-diff` wants to parse.
And of course expect the output to have no prefixes, then.
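For example, this used to segfault and should now work:

    git -c diff.noprefix=true range-diff HEAD~2..HEAD~1 HEAD~1..HEAD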
Reported-by: Michal Suchánek <msuchanek@suse.de>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The test was originally designed for the case where a user reported
that setting GIT_SSH to a .bat file with spaces in its path fails on
Windows: https://github.com/git-for-windows/git/issues/692
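That setup looks roughly like this (path purely illustrative):

    GIT_SSH='C:/path with spaces/ssh.bat' git fetch origin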
The test has two different problems:
1. It succeeds with AND without fix eb7c7863 that addressed the user's
problem. This happens because the core problem was misunderstood,
leading to the conclusion that git is unable to start any program with
spaces in its path on Win7. But in fact
a) The bug only affected cmd.exe scripts, such as .bat scripts
b) The bug only happened when cmd.exe received at least two quoted args
c) The bug happened on any Windows version (verified on Win10).
Therefore, a correct test must involve a .bat script and two quoted args.
2. In the Visual Studio build, it fails to run, because 'test-fake-ssh.exe'
is copied away from its dependencies 'libiconv.dll' and 'zlib1.dll'.
Fix both problems by using a .bat script instead of 'test-fake-ssh.exe'.
NOTE: With this change, the test now correctly fails without eb7c7863.
Signed-off-by: Alexandr Miloslavskiy <alexandr.miloslavskiy@syntevo.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In commit 743474cbfa ("merge-recursive: provide a better label for
diff3 common ancestor", 2019-08-17), the label for the common ancestor
was changed from always being
"merged common ancestors"
to instead be based on the number of merge bases:
>=2: "merged common ancestors"
1: <abbreviated commit hash>
0: "<empty tree>"
Unfortunately, this did not take into account that when we have a single
merge base, that merge base could be fake or constructed. In such
cases, this resulted in a label of "00000000". Of course, the previous
label of "merged common ancestors" was also misleading for this case.
Since we have an API that is explicitly about creating fake merge base
commits in merge_recursive_generic(), we should provide a better label
when using that API with one merge base. So, when
merge_recursive_generic() is called with one merge base, set the label
to:
"constructed merge base"
Note that callers of merge_recursive_generic() include the builtin
commands git-am (in combination with git apply --build-fake-ancestor),
git-merge-recursive, and git-stash.
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Previously, when promisor_remote_move_to_tail() is called for a
promisor_remote which is currently the final element in promisors, a
cycle is created in the promisors linked list. This cycle leads to a
double free later on in promisor_remote_clear() when the final element
of the promisors list is removed: promisors is set to promisors->next (a
no-op, as promisors->next == promisors); the previous value of promisors
is free()'d; then the new value of promisors (which is equal to the
previous value of promisors) is also free()'d. This double-free error
was unrecoverable for the user without removing the filter or re-cloning
the repo and hoping to miss this edge case.
Now, when promisor_remote_move_to_tail() would be a no-op, just do a
no-op. In cases of promisor_remote_move_to_tail() where r is not already
at the tail of the list, it works as before.
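To illustrate the shape of the fix (using a simplified singly linked
list rather than the actual promisor_remote structures; all names below
are made up), a move-to-tail including such a guard might look like this:

    #include <stddef.h>

    struct node {
        struct node *next;
    };

    /*
     * Move "r" to the tail of the list.  "prev" is the element right
     * before "r" (NULL if "r" is the head); "head" and "tail" point at
     * the list's head and tail pointers.  Without the first check,
     * moving the element that is already the tail would set
     * r->next == r via *tail, corrupting the list.
     */
    static void move_to_tail(struct node **head, struct node **tail,
                             struct node *r, struct node *prev)
    {
        if (r == *tail)
            return; /* already the tail: make the no-op an actual no-op */
        if (prev)
            prev->next = r->next;
        else
            *head = r->next;
        r->next = NULL;
        (*tail)->next = r;
        *tail = r;
    }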
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Emily Shaffer <emilyshaffer@google.com>
Acked-by: Christian Couder <christian.couder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commits 404ebceda0 ("dir: also check directories for matching
pathspecs", 2019-09-17) and 89a1f4aaf7 ("dir: if our pathspec might
match files under a dir, recurse into it", 2019-09-17) added calls to
match_pathspec() and do_match_pathspec() passing along their pathspec
parameter. Both match_pathspec() and do_match_pathspec() assume the
pathspec argument they are given is non-NULL. It turns out that
unpack-trees.c's verify_clean_subdirectory() calls read_directory() with
pathspec == NULL, and it is possible on case insensitive filesystems for
that NULL to make it to these new calls to match_pathspec() and
do_match_pathspec(). Add appropriate checks on the NULLness of pathspec
to avoid a segfault.
In case the negation throws anyone off (one of the calls was to
do_match_pathspec() while the other was to !match_pathspec(), yet no
negation of the NULLness of pathspec is used), there are two ways to
understand the differences:
* The code already handled the pathspec == NULL cases before this
series, and this series only tried to change behavior when there was
a pathspec, thus we only want to go into the if-block if pathspec is
non-NULL.
* One of the calls is for whether to recurse into a subdirectory, the
other is for after we've recursed into it for whether we want to
remove the subdirectory itself (i.e. the subdirectory didn't match
but something under it could have). That difference in situation
leads to the slight differences in logic used (well, that and the
slightly unusual fact that we don't want empty pathspecs to remove
untracked directories by default).
Denton found and analyzed one issue and provided the patch for the
match_pathspec() call, SZEDER figured out why the issue only reproduced
for some folks and not others and provided the testcase, and I looked
through the remainder of the series and noted the do_match_pathspec()
call that should have the same check.
Co-authored-by: Denton Liu <liu.denton@gmail.com>
Co-authored-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A configuration variable tells "git fetch" to write the commit
graph after finishing.
* ds/commit-graph-on-fetch:
fetch: add fetch.writeCommitGraph config setting
"git rebase --autostash <upstream> <branch>", when <branch> is
different from the current branch, incorrectly moved the tip of the
current branch, which has been corrected.
* bw/rebase-autostash-keep-current-branch:
builtin/rebase.c: Remove pointless message
builtin/rebase.c: make sure the active branch isn't moved when autostashing
Output from trace2 subsystem is formatted more prettily now.
* jh/trace2-pretty-output:
trace2: cleanup whitespace in perf format
trace2: cleanup whitespace in normal format
quote: add sq_append_quote_argv_pretty()
trace2: trim trailing whitespace in normal format error message
trace2: remove dead code in maybe_add_string_va()
trace2: trim whitespace in region messages in perf target format
trace2: cleanup column alignment in perf target format
"git rebase --keep-base <upstream>" tries to find the original base
of the topic being rebased and rebase on top of that same base,
which is useful when running "git rebase -i" (and its limited
variant "git rebase -x").
The command also has learned to fast-forward in more cases where it
can instead of replaying to recreate identical commits.
* dl/rebase-i-keep-base:
rebase: teach rebase --keep-base
rebase tests: test linear branch topology
rebase: fast-forward --fork-point in more cases
rebase: fast-forward --onto in more cases
rebase: refactor can_fast_forward into goto tower
t3432: test for --no-ff's interaction with fast-forward
t3432: distinguish "noop-same" v.s. "work-same" in "same head" tests
t3432: test rebase fast-forward behavior
t3431: add rebase --fork-point tests
The command line completion support (in contrib/) learned about the
"--skip" option of "git revert" and "git cherry-pick".
* dl/complete-cherry-pick-revert-skip:
status: mention --skip for revert and cherry-pick
completion: add --skip for cherry-pick and revert
completion: merge options for cherry-pick and revert
Various fixes to code paths in which gcc 9 had trouble following the dataflow.
* jk/misc-uninitialized-fixes:
pack-objects: drop packlist index_pos optimization
test-read-cache: drop namelen variable
diff-delta: set size out-parameter to 0 for NULL delta
bulk-checkin: zero-initialize hashfile_checkpoint
pack-objects: use object_id in packlist_alloc()
git-am: handle missing "author" when parsing commit
Fix an earlier regression in the test suite, which mistakenly
stopped running HTTPD tests.
* sg/git-test-boolean:
ci: restore running httpd tests
t/lib-git-svn.sh: check GIT_TEST_SVN_HTTPD when running SVN HTTP tests
Start discouraging the use of "git filter-branch".
* en/filter-branch-deprecation:
t9902: use a non-deprecated command for testing
Recommend git-filter-repo instead of git-filter-branch
t6006: simplify, fix, and optimize empty message test
Fix an earlier regression to "git push --all" which should have
been forbidden when the target remote repository is set to be a
mirror.
* tg/push-all-in-mirror-forbidden:
push: disallow --all and refspecs when remote.<name>.mirror is set
The documentation and tests for "git format-patch" have been
cleaned up.
* dl/format-patch-doc-test-cleanup:
config/format.txt: specify default value of format.coverLetter
Doc: add more detail for git-format-patch
t4014: stop losing return codes of git commands
t4014: remove confusing pipe in check_threading()
t4014: use test_line_count() where possible
t4014: let sed open its own files
t4014: drop redirections to /dev/null
t4014: use indentable here-docs
t4014: remove spaces after redirect operators
t4014: use sq for test case names
t4014: move closing sq onto its own line
t4014: s/expected/expect/
t4014: drop unnecessary blank lines from test cases
fast-export allows specifying revision ranges, which can be used to
export a tag without exporting the commit it tags. fast-export handled
this rather poorly: it would emit a "from :0" directive. Since marks
start at 1 and increase, this means it refers to an unknown commit and
fast-import will choke on the input.
When we are unable to look up a mark for the object being tagged, use a
"from $HASH" directive instead to fix this problem.
Note that this is quite similar to the behavior fast-export exhibits
with commits and parents when --reference-excluded-parents is passed
along with an excluded commit range. For tags of excluded commits we do
not require the --reference-excluded-parents flag because we always have
to tag something. By contrast, when dealing with commits, pruning a
parent is always a viable option, so we need the flag to specify that
parent pruning is not wanted. (It is slightly weird that
--reference-excluded-parents isn't the default with a separate
--prune-excluded-parents flag, but backward compatibility concerns
resulted in the current defaults.)
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
After I discovered that the UTF-16-LE-BOM test was buggy, I decided that
better tests are required. Possibly the best option here is to compare
git's results against a hardcoded ground truth.
The new tests also cover more interesting characters where (ANSI != UTF-8).
Signed-off-by: Alexandr Miloslavskiy <alexandr.miloslavskiy@syntevo.com>
Reviewed-by: Torsten Bögershausen <tboegi@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
According to its name, the test is designed for UTF-16-LE-BOM.
However, possibly due to a copy&paste oversight, it was using UTF-32.
While the test succeeds (extra \000\000 are interpreted as NUL),
I myself had an unrelated problem which caused the test to fail.
When analyzing the failure I was quite puzzled by the fact that the
test is obviously buggy. And it seems that I'm not alone:
https://public-inbox.org/git/CAH8yC8kSakS807d4jc_BtcUJOrcVT4No37AXSz=jePxhw-o9Dg@mail.gmail.com/T/#u
Fix the test to follow its original intention.
Signed-off-by: Alexandr Miloslavskiy <alexandr.miloslavskiy@syntevo.com>
Reviewed-by: Torsten Bögershausen <tboegi@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When 'git name-rev' is invoked with commit-ish parameters, it tries to
save some work, and doesn't visit commits older than the committer
date of the oldest given commit minus one day's worth of slop. Since
our 'timestamp_t' is an unsigned type, this leads to a timestamp
underflow when the committer date of the oldest given commit is within
a day of the UNIX epoch. As a result the cutoff timestamp ends up
far-far in the future, and 'git name-rev' doesn't visit any commits,
and names each given commit as 'undefined'.
Check whether subtracting the slop from the oldest committer date
would lead to an underflow, and use no cutoff in that case. We don't
have a TIME_MIN constant, dddbad728c (timestamp_t: a new data type for
timestamps, 2017-04-26) didn't add one, so do it now.
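A rough, self-contained sketch of that check (assuming an unsigned
timestamp type and a TIME_MIN of zero, as described above; the helper
name is illustrative, not the actual name-rev code):

    #include <stdint.h>

    typedef uintmax_t timestamp_t;            /* unsigned, as in git */
    #define TIME_MIN ((timestamp_t)0)
    #define CUTOFF_DATE_SLOP ((timestamp_t)86400)  /* one day of slop */

    /* Subtract the slop only when it cannot underflow; otherwise fall
     * back to "no cutoff at all" instead of a bogus far-future value. */
    static timestamp_t cutoff_from_oldest(timestamp_t oldest_committer_date)
    {
        if (oldest_committer_date > TIME_MIN + CUTOFF_DATE_SLOP)
            return oldest_committer_date - CUTOFF_DATE_SLOP;
        return TIME_MIN;
    }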
Note that the type of the cutoff timestamp variable used to be signed
before 5589e87fd8 (name-rev: change a "long" variable to timestamp_t,
2017-05-20). The behavior was still the same even back then, but the
underflow didn't happen when subtracting the slop from the oldest
committer date, but when comparing the signed cutoff timestamp with
unsigned committer dates in name_rev(). IOW, this underflow bug is as
old as 'git name-rev' itself.
Helped-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This patch conceptually reverts 44103f4197 (t/helper: ignore
everything but sources, 2017-12-12). Back in those days we did have a
lot of separate test helper executables under 't/helper', and its
'.gitignore' did get out of sync every once in a while.
Since then, however, most of those separate executables were
integrated into a single 'test-tool' command [1], and new test helpers
are added as new subcommands, so the chances of that '.gitignore'
getting out of sync again are much lower. And even if a contributor
is not careful enough and submits a patch that adds a new executable
under 't/helper' but forgets to update '.gitignore' accordingly, our
CI builds will catch it in a timely manner [2].
Ignoring everything but sources has the drawback that building an
older version of Git (e.g. during bisecting) creates all those
executables, and after going back to e.g. current 'master' the usual
cleanup commands like 'make clean' or 'git clean -fd' don't remove
them (the former doesn't know about them, and the latter doesn't
remove ignored files).
So let's ignore only the executable files under 't/helper/', i.e.
'test-tool' and the three other remaining executables that could not
be integrated into 'test-tool' (no need to ignore object files, as
they are already ignored by our toplevel '.gitignore').
[1] The topic starting with efd71f8913 (t/helper: add an empty
test-tool program, 2018-03-24), and leading up to the merge commit
27f25845cf (Merge branch 'nd/combined-test-helper', 2018-04-11).
[2] b92cb86ea1 (travis-ci: check that all build artifacts are
.gitignore-d, 2017-12-31)
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When the working tree has:
- bar (directory)
- bar/file (file)
- foo (symlink to .)
(note that lstat() for "foo/bar" would tell us that it is a directory)
and the user merges a commit that deletes the foo symlink and instead
contains:
- bar (directory, as above)
- bar/file (file, as above)
- foo (directory)
- foo/bar (file)
the merge should happen without requiring user intervention. However,
this does not happen.
This is because dir_in_way(), when checking the working tree, thinks
that "foo/bar" is a directory. But a symlink should be treated much the
same as a file: since dir_in_way() is only checking to see if there is a
directory in the way, we don't want symlinks in leading paths to
sometimes cause dir_in_way() to return true.
Teach dir_in_way() to also check for symlinks in leading paths before
reporting whether a directory is in the way.
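A simplified, standalone sketch of such a leading-path check (not the
actual dir_in_way() code; git has caching helpers along these lines in
symlinks.c):

    #include <limits.h>
    #include <string.h>
    #include <sys/stat.h>

    /*
     * Return 1 if any leading directory component of "path" is a
     * symlink.  A "directory" that is only reachable through a symlink
     * (like "foo/bar" when "foo" is a symlink) should not be reported
     * as a directory in the way.
     */
    static int leading_component_is_symlink(const char *path)
    {
        char buf[PATH_MAX];
        const char *slash = path;
        struct stat st;

        while ((slash = strchr(slash, '/')) != NULL) {
            size_t len = (size_t)(slash - path);

            slash++;                 /* move past this component */
            if (!len || len >= sizeof(buf))
                continue;            /* skip "//" or overlong prefixes */
            memcpy(buf, path, len);
            buf[len] = '\0';
            if (!lstat(buf, &st) && S_ISLNK(st.st_mode))
                return 1;
        }
        return 0;
    }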
Helped-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When converting stash into C, calls to 'git update-index --refresh'
were replaced with the 'refresh_cache()' function. That is fine as
long as the index is only needed in-core, and not re-read from disk.
However in many cases we do actually need the refreshed index to be
written to disk, for example 'merge_recursive_generic()' discards the
in-core index before re-reading it from disk, and in the case of 'apply
--quiet', the 'refresh_cache()' we currently have is pointless without
writing the index to disk.
Always write the index after refreshing it to ensure there are no
regressions in this compared to the scripted stash. In the future we
can consider avoiding the write where possible, after making sure that
none of the subsequent calls actually need the refreshed cache, that it
is not expected to be refreshed after stash exits, or that it is
already written somewhere else.
Reported-by: Jeff King <peff@peff.net>
Signed-off-by: Thomas Gummerer <t.gummerer@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add --[no-]progress to git commit-graph write and verify.
The progress feature was introduced in 7b0f229
("commit-graph write: add progress output", 2018-09-17) but
the ability to opt-out was overlooked.
Signed-off-by: Garima Singh <garima.singh@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Remove the reference to GIT_TEST_DATE_NOW, which is now handled in
date.c. We can't get rid of the "x" variable, since it serves as a
generic scratch variable for parsing later in the function.
Signed-off-by: Stephen P. Smith <ischis2@cox.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The http transport lacked an optimization the native transports use
to avoid unnecessary ref advertisement, which has been corrected.
* jt/avoid-ls-refs-with-http:
transport: teach all vtables to allow fetch first
transport-helper: skip ls-refs if unnecessary
The list-objects-filter API (used to create a sparse/lazy clone)
learned to take a combined filter specification.
* md/list-objects-filter-combo:
list-objects-filter-options: make parser void
list-objects-filter-options: clean up use of ALLOC_GROW
list-objects-filter-options: allow mult. --filter
strbuf: give URL-encoding API a char predicate fn
list-objects-filter-options: make filter_spec a string_list
list-objects-filter-options: move error check up
list-objects-filter: implement composite filters
list-objects-filter-options: always supply *errbuf
list-objects-filter: put omits set in filter struct
list-objects-filter: encapsulate filter components
Teach the lazy clone machinery that there can be more than one
promisor remote and consult them in order when downloading missing
objects on demand.
* cc/multi-promisor:
Move core_partial_clone_filter_default to promisor-remote.c
Move repository_format_partial_clone to promisor-remote.c
Remove fetch-object.{c,h} in favor of promisor-remote.{c,h}
remote: add promisor and partial clone config to the doc
partial-clone: add multiple remotes in the doc
t0410: test fetching from many promisor remotes
builtin/fetch: remove unique promisor remote limitation
promisor-remote: parse remote.*.partialclonefilter
Use promisor_remote_get_direct() and has_promisor_remote()
promisor-remote: use repository_format_partial_clone
promisor-remote: add promisor_remote_reinit()
promisor-remote: implement promisor_remote_get_direct()
Add initial support for many promisor remotes
fetch-object: make functions return an error code
t0410: remove pipes after git commands
Optimize unnecessary full-tree diff away from "git log -L" machinery.
* sg/line-log-tree-diff-optim:
line-log: avoid unnecessary full tree diffs
line-log: extract pathspec parsing from line ranges into a helper function
Command line completion updates for "git -c var.name=val"
* sg/complete-configuration-variables:
completion: complete config variables and values for 'git clone --config='
completion: complete config variables names and values for 'git clone -c'
completion: complete values of configuration variables after 'git -c var='
completion: complete configuration sections and variable names for 'git -c'
completion: split _git_config()
completion: simplify inner 'case' pattern in __gitcomp()
completion: use 'sort -u' to deduplicate config variable names
completion: deduplicate configuration sections
completion: add tests for 'git config' completion
completion: complete more values of more 'color.*' configuration variables
completion: fix a typo in a comment
A new "pre-merge-commit" hook has been introduced.
* js/pre-merge-commit-hook:
merge: --no-verify to bypass pre-merge-commit hook
git-merge: honor pre-merge-commit hook
merge: do no-verify like commit
t7503: verify proper hook execution
"git rebase --rebase-merges" learned to drive different merge
strategies and pass strategy specific options to them.
* js/rebase-r-strategy:
t3427: accelerate this test by using fast-export and fast-import
rebase -r: do not (re-)generate root commits with `--root` *and* `--onto`
t3418: test `rebase -r` with merge strategies
t/lib-rebase: prepare for testing `git rebase --rebase-merges`
rebase -r: support merge strategies other than `recursive`
t3427: fix another incorrect assumption
t3427: accommodate for the `rebase --merge` backend having been replaced
t3427: fix erroneous assumption
t3427: condense the unnecessarily repetitive test cases into three
t3427: move the `filter-branch` invocation into the `setup` case
t3427: simplify the `setup` test case significantly
t3427: add a clarifying comment
rebase: fold git-rebase--common into the -p backend
sequencer: the `am` and `rebase--interactive` scripts are gone
.gitignore: there is no longer a built-in `git-rebase--interactive`
t3400: stop referring to the scripted rebase
Drop unused git-rebase--am.sh
Users expect files in a nested git repository to be left alone unless
sufficiently forced (with two -f's). Unfortunately, in certain
circumstances, git would delete both tracked (and possibly dirty) files
and untracked files within a nested repository. To explain how this
happens, let's contrast a couple of cases. First, take the following
example setup (which assumes we are already within a git repo):
git init nested
cd nested
>tracked
git add tracked
git commit -m init
>untracked
cd ..
In this setup, everything works as expected; running 'git clean -fd'
will result in fill_directory() returning the following paths:
nested/
nested/tracked
nested/untracked
and then correct_untracked_entries() would notice this can be compressed
to
nested/
and then since "nested/" is a directory, we would call
remove_dirs("nested/", ...), which would
check is_nonbare_repository_dir() and then decide to skip it.
However, if someone also creates an ignored file:
>nested/ignored
then running 'git clean -fd' would result in fill_directory() returning
the same paths:
nested/
nested/tracked
nested/untracked
but correct_untracked_entries() will notice that we had ignored entries
under nested/ and thus simplify this list to
nested/tracked
nested/untracked
Since these are not directories, we do not call remove_dirs() which was
the only place that had the is_nonbare_repository_dir() safety check --
resulting in us deleting both the untracked file and the tracked (and
possibly dirty) file.
One possible fix for this issue would be walking the parent directories
of each path and checking if they represent nonbare repositories, but
that would be wasteful. Even if we added caching of some sort, it's
still a waste because we should have been able to check that "nested/"
represented a nonbare repository before even descending into it in the
first place. Add a DIR_SKIP_NESTED_GIT flag to dir_struct.flags and use
it to prevent fill_directory() and friends from descending into nested
git repos.
With this change, we also modify two regression tests added in commit
91479b9c72 ("t7300: add tests to document behavior of clean and nested
git", 2015-06-15). That commit, nor its series, nor the six previous
iterations of that series on the mailing list discussed why those tests
coded the expectation they did. In fact, it appears their purpose was
simply to test _existing_ behavior to make sure that the performance
changes didn't change the behavior. However, these two tests directly
contradicted the manpage's claims that two -f's were required to delete
files/directories under a nested git repository. While one could argue
that the user gave an explicit path which matched files/directories that
were within a nested repository, there's a slippery slope that becomes
very difficult for users to understand once you go down that route (e.g.
what if they specified "git clean -f -d '*.c'"?). It would also be hard
to explain what the exact behavior was; avoid such problems by making it
really simple.
Also, clean up some grammar errors describing this functionality in the
git-clean manpage.
Finally, there are still a couple of bugs with -ffd not cleaning out enough
(e.g. missing the nested .git) and with -ffdX possibly cleaning out the
wrong files (paying attention to outer .gitignore instead of inner).
This patch does not address these cases at all (and does not change the
behavior relative to those flags), it only fixes the handling when given
a single -f. See
https://public-inbox.org/git/20190905212043.GC32087@szeder.dev/ for more
discussion of the -ffd[X?] bugs.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The -d flag pre-dated git-clean's ability to have paths specified. As
such, the default for git-clean was to only remove untracked files in
the current directory, and -d existed to allow it to recurse into
subdirectories.
The interaction of paths and the -d option appears to not have been
carefully considered, as evidenced by numerous bugs and a dearth of
tests covering such pairings in the testsuite. The definition turns out
to be important, so let's look at some of the various ways one could
interpret the -d option:
A) Without -d, only look in subdirectories which contain tracked
files under them; with -d, also look in subdirectories which
are untracked for files to clean.
B) Without specified paths from the user for us to delete, we need to
have some kind of default, so...without -d, only look in
subdirectories which contain tracked files under them; with -d,
also look in subdirectories which are untracked for files to clean.
The important distinction here is that choice B says that the presence
or absence of '-d' is irrelevant if paths are specified. The logic
behind option B is that if a user explicitly asked us to clean a
specified pathspec, then we should clean anything that matches that
pathspec. Some examples may clarify. Should
git clean -f untracked_dir/file
remove untracked_dir/file or not? It seems crazy not to, but a strict
reading of option A says it shouldn't be removed. How about
git clean -f untracked_dir/file1 tracked_dir/file2
or
git clean -f untracked_dir_1/file1 untracked_dir_2/file2
? Should it remove either or both of these files? Should it require
multiple runs to remove both the files listed? (If this sounds like a
crazy question to even ask, see the commit message of "t7300: Add some
testcases showing failure to clean specified pathspecs" added earlier in
this patch series.) What if -ffd were used instead of -f -- should that
allow these to be removed? Should it take multiple invocations with
-ffd? What if a glob (such as '*tracked*') were used instead of
spelling out the directory names? What if the filenames involved globs,
such as
git clean -f '*.o'
or
git clean -f '*/*.o'
?
The current documentation actually suggests a definition that is
slightly different than choice A, and the implementation prior to this
series provided something radically different than either choices A or
B. (The implementation, though, was clearly just buggy). There may be
other choices as well. However, for almost any given choice of
definition for -d that I can think of, some of the examples above will
appear buggy to the user. The only case that doesn't have negative
surprises is choice B: treat a user-specified path as a request to clean
all untracked files which match that path specification, including
recursing into any untracked directories.
Change the documentation and basic implementation to use this
definition.
There were two regression tests that indirectly depended on the current
implementation, but neither was about subdirectory handling. These two
tests were introduced in commit 5b7570cfb4 ("git-clean: add tests for
relative path", 2008-03-07) which was solely created to add coverage for
the changes in commit fb328947c8e ("git-clean: correct printing relative
path", 2008-03-07). Both tests specified a directory that happened to
have an untracked subdirectory, but both were only checking that the
resulting printout of a file that was removed was shown with a relative
path. Update these tests appropriately.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
For git clean, if a directory is entirely untracked and the user did not
specify -d (corresponding to DIR_SHOW_IGNORED_TOO), then we usually do
not want to remove that directory and thus do not recurse into it.
However, if the user manually specified specific (or even globbed) paths
somewhere under that directory to remove, then we need to recurse into
the directory to make sure we remove the relevant paths under that
directory as the user requested.
Note that this does not mean that the recursed-into directory will be
added to dir->entries for later removal; as of a few commits earlier in
this series, there is another more strict match check that is run after
returning from a recursed-into directory before deciding to add it to the
list of entries. Therefore, this will only result in files underneath
the given directory which match one of the pathspecs being added to the
entries list.
Two notes of potential interest to future readers:
* If we wanted to only recurse into a directory when it is specifically
matched rather than matched-via-glob (e.g. '*.c'), then we could do
so via making the final non-zero return in match_pathspec_item be
MATCHED_RECURSIVELY instead of MATCHED_RECURSIVELY_LEADING_PATHSPEC.
(Note that the relative order of MATCHED_RECURSIVELY_LEADING_PATHSPEC
and MATCHED_RECURSIVELY are important for such a change.) I was
leaving open that possibility while writing an RFC asking for the
behavior we want, but even though we don't want it, that knowledge
might help you understand the code flow better.
* There is a growing amount of logic in read_directory_recursive() for
deciding whether to recurse into a subdirectory. However, there is a
comment immediately preceding this logic that says to recurse if
instructed by treat_path(). It may be better for the logic in
read_directory_recursive() to ultimately be moved to treat_path() (or
another function it calls, such as treat_directory()), but I have
left that for someone else to tackle in the future.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Even if a directory doesn't match a pathspec, it is possible, depending
on the precise pathspecs, that some file underneath it might. So we
special case and recurse into the directory for such situations. However,
we previously always added any untracked directory that we recursed into
to the list of untracked paths, regardless of whether the directory
itself matched the pathspec.
For the case of git-clean and a set of pathspecs of "dir/file" and "more",
this caused a problem because we'd end up with dir entries for both of
"dir"
"dir/file"
Then correct_untracked_entries() would try to helpfully prune duplicates
for us by removing "dir/file" since it's under "dir", leaving us with
"dir"
Since the original pathspec only had "dir/file", the only entry left
doesn't match and leaves nothing to be removed. (Note that if only one
pathspec was specified, e.g. only "dir/file", then the common_prefix_len
optimizations in fill_directory would cause us to bypass this problem,
making it appear in simple tests that we could correctly remove manually
specified pathspecs.)
Fix this by actually checking whether the directory we are about to add
to the list of dir entries actually matches the pathspec; only do this
matching check after we have already returned from recursing into the
directory.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Someone brought me a testcase where multiple git-clean invocations were
required to clean out unwanted files:
mkdir d{1,2}
touch d{1,2}/ut
touch d1/t && git add d1/t
With this setup, the user would need to run
git clean -ffd */ut
twice to delete both ut files.
A little testing showed some interesting variants:
* If only one of those two ut files existed (either one), then only one
clean command would be necessary.
* If both directories had tracked files, then only one git clean would
be necessary to clean both files.
* If both directories had no tracked files then the clean command above
would never clean either of the untracked files despite the pathspec
explicitly calling both of them out.
A bisect showed that the failure to clean out the files started with
commit cf424f5fd8 ("clean: respect pathspecs with "-d", 2014-03-10).
However, that pointed to a separate issue: while the "-d" flag was used
by the original user who showed me this problem, that flag should have
been irrelevant to this problem. Testing again without the "-d" flag
showed that the same buggy behavior exists without using that flag, and
has in fact existed since before cf424f5fd8.
Although these problems at first are perceived to be different (e.g.
never clearing out the requested files vs. taking multiple invocations
to get everything cleared out), they are actually just different
manifestations of the same problem. The case with multiple directories
that have no tracked files is the more general case; solving it will
solve all the others. So, I concentrate on it. Add testcases showing
that multiple untracked files within entirely untracked directories
cannot be cleaned when specifying these files to git clean via
pathspecs.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
'progress.c' has seen a few fixes recently [1], and, unfortunately,
some of those fixes required further fixes [2]. It seems it's time to
have a few tests focusing on the subtleties of the progress display.
Add the 'test-tool progress' subcommand to help testing the progress
display, reading instructions from standard input and turning them
into calls to the display_progress() and display_throughput()
functions with the given parameters.
The progress display is, however, critically dependent on timing,
because it's only updated once every second or, if the total is known
in advance, every 1%, and there is the throughput rate as well. These
make the progress display far too nondeterministic for testing as-is.
To address this, add a few testing-specific variables and functions to
'progress.c', allowing the new test helper to:
- Disable the triggered-every-second SIGALRM and set the
'progress_update' flag explicitly based on the input instructions.
This way the progress line will be updated deterministically when
the test wants it to be updated.
- Specify the time elapsed since start_progress() to make the
throughput rate calculations deterministic.
Add the new test script 't0500-progress-display.sh' to check a few
simple cases with and without throughput, and that a shorter progress
line properly covers up the previously displayed line in different
situations.
[1] See commits 545dc345eb (progress: break too long progress bar
lines, 2019-04-12) and 9f1fd84e15 (progress: clear previous
progress update dynamically, 2019-04-12).
[2] 1aed1a5f25 (progress: avoid empty line when breaking the progress
line, 2019-05-19)
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This reverts commit 5b12e3123b (progress: use term_clear_line(),
2019-06-24), because covering up the entire last line while refreshing
the progress line caused unexpected problems during 'git
clone/fetch/push':
$ git clone ssh://localhost/home/szeder/src/tmp/linux.git/
Cloning into 'linux'...
remote:
remote:
remote:
remote: Enumerating objects: 999295
The length of the progress bar line can shorten when it includes
throughput and the unit changes, or when its length exceeds the width
of the terminal and is broken into two lines. In these cases the
previously displayed longer progress line should be covered up,
because otherwise the leftover characters from the previous progress
line make the output look weird [1]. term_clear_line() makes this
quite simple, as it covers up the entire last line either by using an
ANSI control sequence or by printing a terminal width worth of space
characters, depending on whether the terminal is smart or dumb.
Unfortunately, when accessing a remote repository via any non-local
protocol the remote 'git receive-pack/upload-pack' processes can't
possibly have any idea about the local terminal (smart or dumb? how
wide?) their progress will end up on. Consequently, they assume the
worst, i.e. standard-width dumb terminal, and print 80 spaces to cover
up the previously displayed progress line. The local 'git
clone/fetch/push' processes then display the remote's progress,
including these coverup spaces, with the 'remote: ' prefix, resulting
in a total line length of 88 characters. If the local terminal is
narrower than that, then the coverup gets line-wrapped, and after that
the CR at the end doesn't return to the beginning of the progress
line, but to the first column of its last line, resulting in those
repeated 'remote: <many-spaces>' lines.
By reverting 5b12e3123b (progress: use term_clear_line(),
2019-06-24) we won't cover up the entire last line, but go back to
comparing the length of the current progress bar line with the
previous one, and cover up as many characters as needed.
[1] See commits 545dc345eb (progress: break too long progress bar
lines, 2019-04-12) and 9f1fd84e15 (progress: clear previous
progress update dynamically, 2019-04-12).
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Reject values that don't fit into an int, as get_parent() and
get_nth_ancestor() cannot handle them. That's better than potentially
returning a random object.
If this restriction turns out to be too tight then we can switch to a
wider data type, but we'd still have to check for overflow.
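A minimal sketch of the kind of range check meant here (the helper
name is made up; it only illustrates the "reject instead of wrap
around" behavior described above):

    #include <errno.h>
    #include <limits.h>
    #include <stdlib.h>

    /*
     * Parse the <n> in "rev~<n>" or "rev^<n>".  Reject anything that
     * does not fit into a non-negative int instead of letting it wrap
     * around and silently select some other object.
     */
    static int parse_ancestry_number(const char *s, int *out)
    {
        char *end;
        long v;

        errno = 0;
        v = strtol(s, &end, 10);
        if (end == s || *end != '\0' || errno == ERANGE ||
            v < 0 || v > INT_MAX)
            return -1;
        *out = (int)v;
        return 0;
    }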
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If the number gets too high for an int, weird things may happen, as
signed overflows are undefined. Add a test to show this; rev-parse
"sucessfully" interprets 100000000000000000000000000000000 to be the
same as 0, at least on x64 with GCC 9.2.1 and Clang 8.0.1, which is
obviously bogus.
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The sparse:oid filter has two error modes: we might fail to resolve the
name to an OID, or we might fail to parse the contents of that OID. In
the latter case, let's give a less generic error message, and mention
the OID we did find.
While we're here, let's also mark both messages as translatable.
Signed-off-by: Jeff King <peff@peff.net>
Acked-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The list-objects-filter code has two steps to its initialization:
1. parse_list_objects_filter() makes sure the spec is a filter we know
about and is syntactically correct. This step is done by "rev-list"
or "upload-pack" that is going to apply a filter, but also by "git
clone" or "git fetch" before they send the spec across the wire.
2. list_objects_filter__init() runs the type-specific initialization
(using function pointers established in step 1). This happens at
the start of traverse_commit_list_filtered(), when we're about to
actually use the filter.
It's a good idea to parse as much as we can in step 1, in order to catch
problems early (e.g., a blob size limit that isn't a number). But one
thing we _shouldn't_ do is resolve any oids at that step (e.g., for
sparse-file contents specified by oid). In the case of a fetch, the oid
has to be resolved on the remote side.
The current code does resolve the oid during the parse phase, but
ignores any error (which we must do, because we might just be sending
the spec across the wire). This leads to two bugs:
- if we're not in a repository (e.g., because it's git-clone parsing
the spec), then we trigger a BUG() trying to resolve the name
- if we did hit the error case, we still have to notice that later and
bail. The code path in rev-list handles this, but the one in
upload-pack does not, leading to a segfault.
We can fix both by moving the oid resolution into the sparse-oid init
function. At that point we know we have a repository (because we're
about to traverse), and handling the error there fixes the segfault.
As a bonus, we can drop the NULL sparse_oid_value check in rev-list,
since this is now handled in the sparse-oid-filter init function.
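A hedged, self-contained sketch of the two-phase shape described above
(all names and the resolver callback are made up stand-ins, not the
actual list-objects-filter code):

    #include <stdio.h>

    struct sparse_filter {
        const char *spec;   /* recorded verbatim during parsing      */
        char oid_hex[41];   /* resolved only when a traversal starts */
    };

    /* Step 1: purely syntactic; must also work outside a repository,
     * e.g. when "git clone" merely forwards the spec over the wire. */
    static int parse_sparse_spec(struct sparse_filter *f, const char *spec)
    {
        if (!spec || !*spec)
            return -1;
        f->spec = spec;
        f->oid_hex[0] = '\0';
        return 0;
    }

    /* Step 2: runs when the traversal begins, so a repository exists;
     * failing to resolve the name is reported here instead of being
     * silently ignored (or hitting a BUG() with no repository). */
    static int init_sparse_filter(struct sparse_filter *f,
                                  int (*resolve)(const char *spec, char *hex))
    {
        if (resolve(f->spec, f->oid_hex)) {
            fprintf(stderr, "unable to access sparse blob in '%s'\n", f->spec);
            return -1;
        }
        return 0;
    }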
Signed-off-by: Jeff King <peff@peff.net>
Acked-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We test in t5317 that "sparse:oid" filters work with rev-list, but
there's no coverage at all confirming that they work with a fetch or
clone (and in fact, there are several bugs). Let's do a basic test that
a clone fetches the correct objects.
[jk: extracted from Jon's earlier fix patches. I also simplified the
setup down to a single sparse file, and I added checks that we got the
right blobs]
Signed-off-by: Jon Simons <jon@jonsimons.org>
Signed-off-by: Jeff King <peff@peff.net>
Acked-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When the client has asked for certain shallow options like
"deepen-since", we do a custom rev-list walk that pretends to be
shallow. Before doing so, we have to disable the commit-graph, since it
is not compatible with the shallow view of the repository. That's
handled by 829a321569 (commit-graph: close_commit_graph before shallow
walk, 2018-08-20). That commit literally closes and frees our
repo->objects->commit_graph struct.
That creates an interesting problem for commits that have _already_ been
parsed using the commit graph. Their commit->object.parsed flag is set,
their commit->graph_pos is set, but their commit->maybe_tree may still
be NULL. When somebody later calls repo_get_commit_tree(), we see that
we haven't loaded the tree oid yet and try to get it from the commit
graph. But since it has been freed, we segfault!
So the root of the issue is a data dependency between the commit's
lazy-load of the tree oid and the fact that the commit graph can go
away mid-process. How can we resolve it?
There are a couple of general approaches:
1. The obvious answer is to avoid loading the tree from the graph when
we see that it's NULL. But then what do we return for the tree oid?
If we return NULL, our caller in do_traverse() will rightly
complain that we have no tree. We'd have to fallback to loading the
actual commit object and re-parsing it. That requires teaching
parse_commit_buffer() to understand re-parsing (i.e., not starting
from a clean slate and not leaking any allocated bits like parent
list pointers).
2. When we close the commit graph, walk through the set of in-memory
objects and clear any graph_pos pointers. But this means we also
have to "unparse" any such commits so that we know they still need
to open the commit object to fill in their trees. So it's no less
complicated than (1), and is more expensive (since we clear objects
we might not later need).
3. Stop freeing the commit-graph struct. Continue to let it be used
for lazy-loads of tree oids, but let upload-pack specify that it
shouldn't be used for further commit parsing.
4. Push the whole shallow rev-list out to its own sub-process, with
the commit-graph disabled from the start, giving it a clean memory
space to work from.
I've chosen (3) here. Options (1) and (2) would work, but are
non-trivial to implement. Option (4) is more expensive, and I'm not sure
how complicated it is (shelling out for the actual rev-list part is
easy, but we do then parse the resulting commits internally, and I'm not
clear which parts need to be handling shallow-ness).
The new test in t5500 triggers this segfault, but see the comments there
for how horribly intimate it has to be with how both upload-pack and
commit graphs work.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit b841d4ff43 (Add `human` format to test-tool, 2019-01-28) added
a get_time() function which allows $GIT_TEST_DATE_NOW in the
environment to override the current time. So we no longer need to
interpret that variable in cmd__date().
Therefore, we can stop passing the "now" parameter down through the
date functions, since nobody uses it. Note that we do need to make
sure all of the previous callers that took a "now" parameter are
correctly using get_time().
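A simplified, standalone sketch of what such an override hook looks
like (illustrative only; the real handling is done in date.c, as noted
earlier):

    #include <stdlib.h>
    #include <time.h>

    /* Prefer a test-only override from the environment; otherwise use
     * the real clock.  Callers then never need a "now" parameter. */
    static time_t get_time(void)
    {
        const char *override = getenv("GIT_TEST_DATE_NOW");

        if (override && *override)
            return (time_t)strtoull(override, NULL, 10);
        return time(NULL);
    }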
Signed-off-by: Stephen P. Smith <ischis2@cox.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>