When using sparse checkout, clear_skip_worktree_from_present_files() must
enumerate the index to find entries with the SKIP_WORKTREE bit set and
then determine whether those paths are present on disk (in which case
their SKIP_WORKTREE bit should be removed).
In a large repository, this may take considerable time depending on the
size of the index.
Add a trace2 region to surface this information, keeping a count of how
many paths have been checked. Separately, keep counts after a full index is
materialized.
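For illustration, here is a hedged sketch of what such a region with a
path counter could look like; the category/label strings, the counter
key, and the use of file_exists() are stand-ins rather than what the
actual patch necessarily does:

    /* illustrative sketch only, not the actual patch */
    int path_count = 0;

    trace2_region_enter("index", "clear_skip_worktree_from_present_files",
                        istate->repo);
    for (unsigned int i = 0; i < istate->cache_nr; i++) {
            struct cache_entry *ce = istate->cache[i];

            if (!ce_skip_worktree(ce))
                    continue;
            path_count++;
            /* the entry is present on disk: drop its SKIP_WORKTREE bit */
            if (file_exists(ce->name))
                    ce->ce_flags &= ~CE_SKIP_WORKTREE;
    }
    trace2_data_intmax("index", istate->repo, "sparse_path_count", path_count);
    trace2_region_leave("index", "clear_skip_worktree_from_present_files",
                        istate->repo);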
Signed-off-by: Anh Le <anh@canva.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
The test script verifying the presence/absence of files, paths,
directories, symlinks and other features after the 'git mv' command uses
commands of the format:

    'test (-e|f|d|h|...)'

Replace them with helper functions of the format:

    'test_path_is_*'

Likewise, instead of negating the helpers:

    '! test_path_is_*'

use:

    'test_path_is_missing'

Using 'test_path_is_bar' in place of '! test_path_is_foo' brings the
benefit of a helpful diagnostic when a test fails after the mv command
has been used, that is, it reports that the path unexpectedly exists.
Signed-off-by: Debra Obondo <debraobondo@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Various tests exercising the transfer.credentialsInUrl configuration
are taught to avoid making requests which require resolving localhost
to reduce CI-flakiness.
* jk/avoid-localhost:
t5551: be less strict about the number of credential warnings
t5551: move plaintext-password tests from t5516 and t5601
This commit fixes a bug when parsing tags that have CRLF line endings, a
signature, and no body, like this (the "^M" are marking the CRs):
    this is the subject^M
    -----BEGIN PGP SIGNATURE-----^M
    ^M
    ...some stuff...^M
    -----END PGP SIGNATURE-----^M
When trying to find the start of the body, we look for a blank line
separating the subject and body. In this case, there isn't one. But we
search for it using strstr(), which will find the blank line in the
signature.
In the non-CRLF code path, we check whether the line we found is past
the start of the signature, and if so, put the body pointer at the start
of the signature (effectively making the body empty). But the CRLF code
path doesn't catch the same case, and we end up with the body pointer in
the middle of the signature field. This has two visible problems:
- printing %(contents:subject) will show part of the signature, too,
since the subject length is computed as (body - subject)
- the length of the body is (sig - body), which makes it negative.
Asking for %(contents:body) causes us to cast this to a very large
size_t when we feed it to xmemdupz(), which then complains about
trying to allocate too much memory.
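To make "a very large size_t" concrete, here is a tiny standalone
illustration (the exact value assumes a 64-bit size_t):

    #include <stddef.h>
    #include <stdio.h>

    int main(void)
    {
            ptrdiff_t body_len = -5;        /* stand-in for (sig - body) going negative */
            size_t huge = (size_t)body_len; /* wraps around to 2^64 - 5 */

            printf("%zu\n", huge);          /* prints 18446744073709551611 */
            return 0;
    }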
These are essentially the same bugs fixed in the previous commit, except
that they happen when there is a CRLF blank line in the signature,
rather than no blank line at all. Both are caused by the refactoring in
9f75ce3d8f (ref-filter: handle CRLF at end-of-line more gracefully,
2020-10-29).
We can fix this by doing the same "sigstart" check that we do in the
non-CRLF case. And rather than repeat ourselves, we can just use
short-circuiting OR to collapse both cases into a single conditional.
I.e., rather than:

    if (strstr("\n\n"))
            ...found blank, check if it's in signature...
    else if (strstr("\r\n\r\n"))
            ...found blank, check if it's in signature...
    else
            ...no blank line found...

we can collapse this to:

    if (strstr("\n\n") ||
        strstr("\r\n\r\n"))
            ...found blank, check if it's in signature...
    else
            ...no blank line found...
The tests show the problem and the fix. Though it wasn't broken, I
included contents:signature here to make sure it still behaves as
expected, but note the shell hackery needed to make it work. A
less-clever option would be to skip using test_atom and just "append_cr
>expected" ourselves.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
When ref-filter is asked to show %(contents:subject), etc., we end up in
find_subpos() to parse out the three major parts: the subject, the body,
and the signature (if any).
When searching for the blank line between the subject and body, if we
don't find anything, we try to treat the whole message as the subject,
with no body. But our idea of "the whole message" needs to take into
account the signature, too. Since 9f75ce3d8f (ref-filter: handle CRLF at
end-of-line more gracefully, 2020-10-29), the code instead goes all the
way to the end of the buffer, which produces confusing output.
Here's an example. If we have a tag message like this:

    this is the subject
    -----BEGIN SSH SIGNATURE-----
    ...some stuff...
    -----END SSH SIGNATURE-----
then the current parser will put the start of the body at the end of the
whole buffer. This produces two buggy outcomes:
- since the subject length is computed as (body - subject), showing
%(contents:subject) will print both the subject and the signature,
rather than just the single line
- since the body length is computed as (sig - body), and the body now
starts _after_ the signature, we end up with a negative length!
Fortunately we never access out-of-bounds memory, because the
negative length is fed to xmemdupz(), which casts it to a size_t,
and xmalloc() bails trying to allocate an absurdly large value.
In theory it would be possible for somebody making a malicious tag
to wrap it around to a more reasonable value, but it would require a
tag on the order of 2^63 bytes. And even if they did, all they get
is an out of bounds string read. So the security implications are
probably not interesting.
We can fix both by correctly putting the start of the body at the same
index as the start of the signature (effectively making the body empty).
Note that this is a real issue with signatures generated with gpg.format
set to "ssh", which would look like the example above. In the new tests
here I use a hard-coded tag message, for a few reasons:
- regardless of what the ssh-signing code produces now or in the
future, we should be testing this particular case
- skipping the actual signature makes the tests simpler to write (and
allows them to run on more systems)
- t6300 has helpers for working with gpg signatures; for the purposes
of this bug, "BEGIN PGP" is just as good a demonstration, and this
simplifies the tests
Curiously, the same issue doesn't happen with real gpg signatures (and
there are even existing tests in t6300 which cover this). Those have a
blank line between the header and the content, like:

    this is the subject
    -----BEGIN PGP SIGNATURE-----

    ...some stuff...
    -----END PGP SIGNATURE-----
Because we search for the subject/body separator line with a strstr(),
we find the blank line in the signature, even though it's outside of
what we'd consider the body. But that puts us into a separate code path,
which realizes that we're now in the signature and adjusts the line back
to "sigstart". So this patch is basically just making the "no line found
at all" case match that. And note that "sigstart" is always defined (if
there is no signature, it points to the end of the buffer as you'd
expect).
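To make the fix concrete, here is a simplified sketch of the idea (not
the literal find_subpos() code; assume "buf" holds the tag message and
"sigstart" points at the start of the signature, or at the end of the
buffer when there is none):

    const char *blank = strstr(buf, "\n\n"); /* CRLF handling omitted */
    const char *body;

    if (!blank || blank >= sigstart)
            /* no separator: the body is empty and starts at the signature */
            body = sigstart;
    else
            /* body starts just past the blank line */
            body = blank + 2;

    /* (sig - body) can no longer go negative */
    size_t body_len = sigstart - body;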
Reported-by: Martin Englund <martin@englund.nu>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Add a rather trivial "spatchcache". With it, running e.g.:

    make cocciclean
    make contrib/coccinelle/free.cocci.patch \
        SPATCH=contrib/coccinelle/spatchcache \
        SPATCH_FLAGS=--very-quiet

is cut down from ~20s to ~5s on my system. Much of that remaining time
is either fixable shell overhead, or the around 40 files we "CANTCACHE"
(see the implementation).
This uses "redis" as a cache by default, but it's configurable. See
the embedded documentation.
This is *not* like ccache in that we won't cache failed spatch
invocations, or those where spatch suggests changes for us. Those cases
are so rare that I didn't think they were worth the bother; by far the
most common case is that spatch has no suggested changes. We'll also
refuse to cache any "spatch" invocation that has output on stderr,
which means that "--very-quiet" must be added to "SPATCH_FLAGS".
Because we narrow the cache to that case, we don't need to save away
stdout, stderr or the exit code; we simply cache the cases where we had
no suggested changes.
Another benchmark is to compare this with the previous
SPATCH_BATCH_SIZE=N, as noted in [1]. Before this (on my 8 core system)
running:

    make clean; time make contrib/coccinelle/array.cocci.patch SPATCH_BATCH_SIZE=0

Would take 33s, but with the preceding changes running without this
"spatchcache" is slightly slower, or around 35s:

    make clean; time make contrib/coccinelle/array.cocci.patch

Now doing the same with SPATCH=contrib/coccinelle/spatchcache will
take around 6s, but we'll need to compile the *.o files first to take
full advantage of it (which can be fast with "ccache"):

    make clean; make; time make contrib/coccinelle/array.cocci.patch SPATCH=contrib/coccinelle/spatchcache
1. https://lore.kernel.org/git/YwdRqP1CyUAzCEn2@coredump.intra.peff.net/
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
The preceding commits to make the "coccicheck" target incremental made
it slower in some cases. As an optimization let's not have the
many=many mapping of <*.cocci>=<*.[ch]>, but instead concat the
<*.cocci> into an ALL.cocci, and then run one-to-many
ALL.cocci=<*.[ch]>.
A "make coccicheck" is now around 2x as fast as it was on "master",
and around 1.5x as fast as the preceding change to make the run
incremental:
    $ git hyperfine -L rev origin/master,HEAD~,HEAD -p 'make clean' 'make coccicheck SPATCH=spatch COCCI_SOURCES="$(echo $(ls o*.c builtin/h*.c))"' -r 3
    Benchmark 1: make coccicheck SPATCH=spatch COCCI_SOURCES="$(echo $(ls o*.c builtin/h*.c))"' in 'origin/master
      Time (mean ± σ):      4.258 s ±  0.015 s    [User: 27.432 s, System: 1.532 s]
      Range (min … max):    4.241 s …  4.268 s    3 runs

    Benchmark 2: make coccicheck SPATCH=spatch COCCI_SOURCES="$(echo $(ls o*.c builtin/h*.c))"' in 'HEAD~
      Time (mean ± σ):      5.365 s ±  0.079 s    [User: 36.899 s, System: 1.810 s]
      Range (min … max):    5.281 s …  5.436 s    3 runs

    Benchmark 3: make coccicheck SPATCH=spatch COCCI_SOURCES="$(echo $(ls o*.c builtin/h*.c))"' in 'HEAD
      Time (mean ± σ):      2.725 s ±  0.063 s    [User: 14.796 s, System: 0.233 s]
      Range (min … max):    2.667 s …  2.792 s    3 runs

    Summary
      'make coccicheck SPATCH=spatch COCCI_SOURCES="$(echo $(ls o*.c builtin/h*.c))"' in 'HEAD' ran
        1.56 ± 0.04 times faster than 'make coccicheck SPATCH=spatch COCCI_SOURCES="$(echo $(ls o*.c builtin/h*.c))"' in 'origin/master'
        1.97 ± 0.05 times faster than 'make coccicheck SPATCH=spatch COCCI_SOURCES="$(echo $(ls o*.c builtin/h*.c))"' in 'HEAD~'
This can be turned off with SPATCH_CONCAT_COCCI, but as the
beneficiaries of "SPATCH_CONCAT_COCCI=" would mainly be those
developing the *.cocci rules themselves, let's leave this optimization
on by default.
For more information see my "Optimizing *.cocci rules by concat'ing
them" (<220901.8635dbjfko.gmgdl@evledraar.gmail.com>) on the
cocci@inria.fr mailing list.
This potentially changes the results of our *.cocci rules, but as
noted in that discussion it should be safe for our use. We don't name
rules, or if we do their names don't conflict across our *.cocci
files.
To the extent that we'd have any inter-dependencies between rules this
doesn't make that worse, as we'd have them now if we ran "make
coccicheck", applied the results, and would then have (due to
hypothetical interdependencies) suggested changes on the subsequent
"make coccicheck".
Our "coccicheck-test" target makes use of the ALL.cocci when running
tests, e.g. when testing unused.{c,out} we test it against ALL.cocci,
not unused.cocci. We thus assert (to the extent that we have test
coverage) that this concatenation doesn't change the expected results
of running these rules.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
The <id> in the <rulename> part of the coccinelle syntax[1] is, for our
purposes, there to declare whether we have inter-dependencies between
different rules.
But such <id>'s must be unique within a given semantic patch file. As
we'll be processing a concatenated version of our rules in the
subsequent commit let's remove these names. They weren't being used
for the semantic patches themselves, and equated to a short comment
about the rule.
Both the filename and context of the rules make it clear what they're
doing, so we're not gaining anything from keeping these. Retaining
them goes against recommendations that "contrib/coccinelle/README"
will be making in the subsequent commit.
This leaves only one named rule in our sources, where it's needed for
a "<id> <-> <extends> <id>" relationship:
    $ git -P grep '^@ ' -- contrib/coccinelle/
    contrib/coccinelle/swap.cocci:@ swap @
    contrib/coccinelle/swap.cocci:@ extends swap @
1. https://coccinelle.gitlabpages.inria.fr/website/docs/main_grammar.html
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Change the "coccinelle" rule so that we first copy the *.cocci source
in e.g. "contrib/coccinelle/strbuf.cocci" to
".build/contrib/coccinelle/strbuf.cocci" before operating on it.
For now this serves as a rather pointless indirection, but prepares us
for the subsequent commit where we'll be able to inject generated
*.cocci files. Having the entire dependency tree live inside .build/*
simplifies both the globbing we'd need to do, and any "clean" rules.
It will also help for future targets which will want to act on the
generated patches or the logs, e.g. targets to alert if we can't parse
certain files (or, less so than usual) with "spatch", and e.g. a
replacement for "ci/run-static-analysis.sh". Such a replacement won't
care about placing the patches in-tree, only whether they're
"OK" (and about the diff).
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Improve the incremental rebuilding support of "coccicheck" by
piggy-backing on the computed dependency information of the
corresponding *.o file, rather than rebuilding all <RULE>/<FILE> pairs
if either their corresponding file changes, or if any header changes.
This in effect uses the same method that the "sparse" target was made
to use in c234e8a0ec (Makefile: make the "sparse" target non-.PHONY,
2021-09-23), except that the dependency on the *.o file isn't a hard
one: we check with $(wildcard) if the *.o file exists, and if so we'll
depend on it.
This means that the common case of:

    make
    make coccicheck

Will benefit from incremental rebuilding; changing e.g. a header will
now only re-run "spatch" on those *.c files that make use of it.
By depending on the *.o we piggy-back on
COMPUTE_HEADER_DEPENDENCIES. See c234e8a0ec (Makefile: make the
"sparse" target non-.PHONY, 2021-09-23) for prior art of doing that
for the *.sp files. E.g.:

    make contrib/coccinelle/free.cocci.patch
    make -W column.h contrib/coccinelle/free.cocci.patch
Will take around 15 seconds for the second command on my 8 core box if
I didn't run "make" beforehand to create the *.o files. But around 2
seconds if I did and we have those "*.o" files.
Notes about the approach of piggy-backing on *.o for dependencies:
* It *is* a trade-off since we'll pay the extra cost of running the C
compiler, but we're probably doing that anyway. The compiler is much
faster than "spatch", so even though we need to re-compile the *.o to
create the dependency info for the *.c for "spatch" it's
faster (especially if using "ccache").
* There *are* use-cases where some would like to have *.o files
around, but to have the "make coccicheck" ignore them. See:
https://lore.kernel.org/git/20220826104312.GJ1735@szeder.dev/
For those users a:

    make
    make coccicheck SPATCH_USE_O_DEPENDENCIES=

Will avoid considering the *.o files.
* If that *.o file doesn't exist we'll depend on an intermediate file
of ours which in turn depends on $(FOUND_H_SOURCES).
This covers both an initial build (or where "coccicheck" is run
without running "all" beforehand), and the fact that we run "coccicheck"
on e.g. files in compat/* that we don't know how to build unless
the requisite flag was provided to the Makefile.
Most of the runtime of "incremental" runs is now spent on various
compat/* files, i.e. we conditionally add files to COMPAT_OBJS, and
therefore conflate whether we *can* compile an object and generate
dependency information for it with whether we'd like to link it
into our binary.
Before this change the distinction didn't matter, but now one way
to make this even faster on incremental builds would be to peel
those concerns apart so that we can see that e.g. compat/mmap.c
doesn't depend on column.h.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Optimize the very slow "coccicheck" target to take advantage of
incremental rebuilding, and fix outstanding dependency problems with
the existing rule.
The rule is now faster both on the initial run as we can make better
use of GNU make's parallelism than the old ad-hoc combination of
make's parallelism combined with $(SPATCH_BATCH_SIZE) and/or the
"--jobs" argument to "spatch(1)".
It also makes us *much* faster when incrementally building, it's now
viable to "make coccicheck" as topic branches are merged down.
The rule didn't use FORCE (or its equivalents) before, so a:

    make coccicheck
    make coccicheck

Would report nothing to do on the second iteration. But all of our
patch output depended on all $(COCCI_SOURCES) files, therefore e.g.:

    make -W grep.c coccicheck

Would do a full re-run, i.e. a change to a single file would force us
to do a full re-run.
The reason for this (not the initial rationale, but my analysis) is:
* Since we create a single "*.cocci.patch+" we don't know where to
pick up where we left off, or how to incrementally merge e.g. a
"grep.c" change with an existing *.cocci.patch.
* We've been carrying forward the dependency on the *.c files since
63f0a758a0 (add coccicheck make target, 2016-09-15), when the rule was
initially added, as a sort of poor man's dependency discovery.
As we don't include other *.c files, depending on other *.c files
has always been broken, as could be trivially demonstrated
e.g. with:

    make coccicheck
    make -W strbuf.h coccicheck
However, depending on the corresponding *.c files has been doing
something, namely that *if* an API change modified both *.c and *.h
files we'd catch the change to the *.h we care about via the *.c
being changed.
For API changes that happened only via *.h files we'd do the wrong
thing before this change, but e.g. for function additions (not
"static inline" ones) catch the *.h change by proxy.
Now we'll instead:
* Create a <RULE>/<FILE> pair in the .build directory, e.g. for
swap.cocci and grep.c we'll create
.build/contrib/coccinelle/swap.cocci.patch/grep.c.
That file is the diff we'll apply for that <RULE>-<FILE>
combination; if there are no changes to be made (the common case)
it'll be an empty file.
* Our generated *.patch
file (e.g. contrib/coccinelle/swap.cocci.patch) is now a simple "cat
$^" of all of all of the <RULE>/<FILE> files for a given <RULE>.
In the case discussed above of "grep.c" being changed we'll do the
full "cat" every time, so they resulting *.cocci.patch will always
be correct and up-to-date, even if it's "incrementally updated".
See 1cc0425a27 (Makefile: have "make pot" not "reset --hard",
2022-05-26) for another recent rule that used that technique.
As before we'll:
* End up generating a contrib/coccinelle/swap.cocci.patch; if we
"fail" by creating a non-empty patch we'll still exit with a zero
exit code.
Arguably we should move to a more Makefile-native way of doing
this, i.e. fail early, and if we want all of the "failed" changes
we can use "make -k", but as the current
"ci/run-static-analysis.sh" expects us to behave this way let's
keep the existing behavior of exhaustively discovering all cocci
changes, and only failing if spatch itself errors out.
Further implementation details & notes:
* Before this change running "make coccicheck" would by default end
up pegging just one CPU at the very end for a while, usually as
we'd finish whichever *.cocci rule was the most expensive.
This could be mitigated by combining "make -jN" with
SPATCH_BATCH_SIZE, see 960154b9c1 (coccicheck: optionally batch
spatch invocations, 2019-05-06).
There will be cases where getting rid of "SPATCH_BATCH_SIZE" makes
things worse, but a from-scratch "make coccicheck" with the default
of SPATCH_BATCH_SIZE=1 (and tweaking it doesn't make a difference)
is faster (~3m36s vs. ~3m56s) with this approach, as we can feed
the CPU more work in a less staggered way.
* Getting rid of "SPATCH_BATCH_SIZE" particularly helps in cases
where the default of 1 yields parallelism under "make coccicheck",
but then running e.g.:
make -W contrib/coccinelle/swap.cocci coccicheck
I.e. before that would use only one CPU core, until the user
remembered to adjust "SPATCH_BATCH_SIZE" differently than the
setting that makes sense when doing a non-incremental run of "make
coccicheck".
* Before the "make coccicheck" rule would have to clean
"contrib/coccinelle/*.cocci.patch*", since we'd create "*+" and
"*.log" files there. Now those are created in
.build/contrib/coccinelle/, which is covered by the "cocciclean" rule
already.
Outstanding issues & future work:
* We could get rid of "--all-includes" in favor of manually
specifying a list of includes to give to "spatch(1)".
As noted upthread of [1] a naïve removal of "--all-includes" will
result in broken *.cocci patches, but if we know the exhaustive
list of includes via COMPUTE_HEADER_DEPENDENCIES we don't need to
re-scan for them, we could grab the headers to include from the
.depend.d/<file>.o.d and supply them with the "--include" option to
spatch(1).
1. https://lore.kernel.org/git/87ft18tcog.fsf@evledraar.gmail.com/
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Per the rationale in 7b63ea5750 (Makefile: remove mandatory "spatch"
arguments from SPATCH_FLAGS, 2022-07-05) we have certain flags that
are truly mandatory, such as "--sp-file" and "--patch .". The
"--all-includes" flag is also critical, but per [1] we might want to
ad-hoc tweak it occasionally for testing or one-offs.
But being unable to set e.g. SPATCH_FLAGS="--verbose-parsing" without
breaking how our "spatch" works isn't ideal, i.e. before this we'd
need to know about the default include flags, and specify:
SPATCH_FLAGS="--all-includes --verbose-parsing".
If we were then to change the default include flag (e.g. to
"--recursive-includes") in the future any such one-off commands would
need to be correspondingly updated.
Let's instead leave the SPATCH_FLAGS for the user, while creating a
new SPATCH_INCLUDE_FLAGS to allow for ad-hoc testing of the include
strategy itself.
1. https://lore.kernel.org/git/20220823095733.58685-1-szeder.dev@gmail.com/
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Amend the "coccicheck-test" rule added in f7ff6597a7 (cocci: add a
"coccicheck-test" target and test *.cocci rules, 2022-07-05) to stop
using "--all-includes". The flags we'll need for the tests are
different than the ones we'll need for our main source code.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Split off the "; setting[...]" part of the comment added in In
960154b9c1 (coccicheck: optionally batch spatch invocations,
2019-05-06), and restore what we had before that, which was a comment
indicating that variables for the "coccicheck" target were being set
here.
When 960154b9c1 amended the heading to discuss SPATCH_BATCH_SIZE it
left no natural place to add a new comment about other flags that
preceded it. As subsequent commits will add such comments we need to
split the existing comment up.
The wrapping for the "SPATCH_BATCH_SIZE" is now a bit odd, but
minimizes the diff size. As a subsequent commit will remove that
feature altogether this is worth it.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Fix an issue with the "coccicheck" family of rules that's been here
since 63f0a758a0 (add coccicheck make target, 2016-09-15): unlike
e.g. "make grep.o" we wouldn't re-run it when $(SPATCH) or
$(SPATCH_FLAGS) changed. To test new flags we needed to first do a
"make cocciclean".
This now uses the same (copy/pasted) pattern as other "DEFINES"
rules. As a result we'll re-run properly. This can be demonstrated
e.g. on the issue noted in [1]:

    $ make contrib/coccinelle/xcalloc.cocci.patch COCCI_SOURCES=promisor-remote.c V=1
    [...]
        SPATCH contrib/coccinelle/xcalloc.cocci
    $ make contrib/coccinelle/xcalloc.cocci.patch COCCI_SOURCES=promisor-remote.c SPATCH_FLAGS="--all-includes --recursive-includes"
        * new spatch flags
        SPATCH contrib/coccinelle/xcalloc.cocci
        SPATCH result: contrib/coccinelle/xcalloc.cocci.patch
    $
1. https://lore.kernel.org/git/20220823095602.GC1735@szeder.dev/
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Declare the contrib/coccinelle/<rule>.cocci.patch rules in such a way
as to allow TAB-completion, and slightly optimize the Makefile by
cutting down on the number of $(wildcard) in favor of defining
"coccicheck" and "coccicheck-pending" in terms of the same
incrementally filtered list.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Fix an issue with a rule added in 9b45f49981 (object-store: prepare
has_{sha1, object}_file to handle any repo, 2018-11-13). We've been
spewing out this warning into our $@.log since that rule was added:

    warning: rule starting on line 21: metavariable F not used in the - or context code
We should do a better job of scouring our coccinelle log files for
such issues, but for now let's fix this as a one-off.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
In f7ff6597a7 (cocci: add a "coccicheck-test" target and test *.cocci
rules, 2022-07-05) we abbreviated "_TEST" to "_T" to have it align
with the rest of the "="'s above it.
Subsequent commits will add more QUIET_SPATCH_* variables, so let's
stop abbreviating this, and indent it in preparation for adding more
of these variables.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Use diff_free_queue() instead of open-coding it. This shortens the
code and makes it less repetitive.
Note that the second hunk in diff_flush() is interesting, because the
'free_queue' label separates the loop freeing the queue's filepairs
from free()-ing the queue's internal array. This is somewhat
suspicious, but it was not an issue before: there is only one place
from where we jump to this label with a goto, and that is protected by
an 'if (!q->nr && ...)' condition, i.e. we only skipped the loop
freeing the filepairs when there were no filepairs in the queue to
begin with.
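For reference, the tail of diff_flush() being discussed has roughly
this shape (simplified, not the exact diff.c code):

    for (i = 0; i < q->nr; i++)
            diff_free_filepair(q->queue[i]);
    free_queue:
    free(q->queue);
    DIFF_QUEUE_CLEAR(q);

i.e. the goto mentioned above jumps past the filepair loop straight to
freeing the internal array.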
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
When processing merge commits, the line-level log first creates an
array of diff queues, each comparing the merge commit with one of its
parents, to check whether any of the files in the given line ranges
were modified. Alas, when freeing these queues it only frees the
filepairs in the queues, but not the queues' internal arrays holding
pointers to those filepairs.
Use the diff_free_queue() helper function introduced in the previous
commit to free the diff queues' internal arrays as well.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
When processing a non-merge commit, the line-level log first asks the
tree-diff machinery whether any of the files in the given line ranges
were modified between the current commit and its parent, and if some
of them were, then it loads the contents of those files from both
commits to see whether their line ranges were modified and/or need to
be adjusted. Alas, it doesn't free() the diff queue holding the
results of that query and the contents of those files once it's done.
This can add up to a substantial amount of leaked memory, especially
when the file in question is big and is frequently modified: a user
reported "Out of memory, malloc failed" errors with a 2MB text file
that was modified ~2800 times [1] (I estimate the leak would use up
almost 11GB memory in that case).
Free that diff queue to plug this memory leak. However, instead of
simply open-coding the necessary three lines, add them as a helper
function to the diff API, because it will be useful elsewhere as well.
[1] https://public-inbox.org/git/CAFOPqVXz2XwzX8vGU7wLuqb2ZuwTuOFAzBLRM_QPk+NJa=eC-g@mail.gmail.com/
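A minimal sketch of such a helper (the function actually added to the
diff API may differ in details):

    /*
     * Free the queued filepairs as well as the array holding the
     * pointers to them, but not the queue structure itself.
     */
    void diff_free_queue(struct diff_queue_struct *q)
    {
            int i;

            for (i = 0; i < q->nr; i++)
                    diff_free_filepair(q->queue[i]);
            free(q->queue);
    }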
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
It is unclear as to _why_, but under certain circumstances the warning
about credentials being passed as part of the URL seems to be swallowed
by the `git remote-https` helper in the Windows jobs of Git's CI builds.
Since it is not actually important how many times Git prints the
warning/error message, as long as it prints it at least once, let's just
make the test a bit more lenient and test for the latter instead of the
former, which works around these CI issues.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Commit 6dcbdc0d66 (remote: create fetch.credentialsInUrl config,
2022-06-06) added tests for our handling of passwords in URLs. Since the
obvious URL to be affected is git-over-http, the tests use http. However
they don't set up a test server; they just try to access
https://localhost, assuming it will fail (because nothing is
listening there).
This causes some possible problems:
- There might be a web server running on localhost, and we do not
actually want to connect to that.
- The DNS resolver, or the local firewall, might take a substantial
amount of time (or forever, whichever comes first) to fail to
connect, slowing down the tests cases unnecessarily.
- Since there's no server, our tests for "allow" and "warn" still
expect the clone/fetch/push operations to fail, even though in the
real world we'd expect these to succeed. We scrape stderr to see
what happened, but it's not as robust as a more realistic test.
Let's instead move these to t5551, which is all about testing http and
where we have a real server. That eliminates any issues with contacting
a strange URL, and lets the "allow" and "warn" tests confirm that the
operation actually succeeds.
It's not quite a verbatim move for a few reasons:
- we can drop the LIBCURL dependency; it's already part of
lib-httpd.sh
- we'll use HTTPD_URL_USER_PASS, etc, instead of our fake URL. To
avoid repetition, we'll add a few extra variables.
- the "https://username:@localhost" test uses a funny URL that
lib-httpd.sh doesn't provide. We'll similarly construct it in a
variable. Note that we're hard-coding the lib-httpd username here,
but t5551 already does that everywhere.
- for the "domain:port" test, the URL provided by lib-httpd is fine,
since our test server will always be on an exotic port. But we'll
confirm in the test that this is so.
- since our message-matching is done via grep, I simplified it to use
a regex, rather than trying to massage lib-httpd's variables.
Arguably this makes it more readable, too, while retaining the bits
we care about: the fatal/warning distinction, the "uses plaintext"
message, and the fact that the password was redacted.
- we'll use the /auth/ path for the repo, which shows that we are
indeed making use of the auth information when needed.
- we'll also use /smart/; most of these tests could be done via /dumb/
in t5550, but setting up pushes there requires extra effort and
dependencies. The smart protocol is what most everyone is using
these days anyway.
This patch is my own, but I stole the analysis and a few bits of the
commit message from a patch by Johannes Schindelin.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
`test_path_is_missing` was introduced back in 2caf20c52b ("test-lib:
user-friendly alternatives to test [-d|-f|-e]", 2010-08-10). It took the
path that was supposed to be missing, as well as an optional "diagnosis"
that would be echoed if the path was found to be alive.
Commit 45a2686441 ("test-lib-functions: remove bug-inducing
"diagnostics" helper param", 2021-02-12) dropped this diagnostic
functionality from several `test_path_is_foo` helpers, but note how it
tweaked the README entry on `test_path_is_missing` without actually
adjusting its implementation.
Commit e7884b353b ("test-lib-functions: assert correct parameter count",
2021-02-12) then followed up by asserting that we get just a single
argument.
This history leaves us in a state where we assert that we have exactly
one argument, then go on to check for arguments anyway, echoing them
all. It's clear that we can simplify this code. We should also note that
we run `ls -ld "$1"`, so printing the filename a second time doesn't
really buy us anything. Thus, we can drop the whole `if` block as
redundant.
Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
In a similar spirit as the previous commit, the 'seen' branch gets
rebuilt by reintegrating topics between 'jch' and the (old) tip of
'seen'.
Update the instructions on how to generate Meta/redo-seen.sh for the
first time to reflect this.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Rebuilding the 'jch' branch begins by reintegrating any topics between
'master' and 'jch', not 'master' and 'seen'.
In the maintainer guide, the documentation isn't quite right, since the
initial input to Meta/Reintegrate is "master..seen", not "master..jch".
This can lead to confusing results when generating the Meta/redo-jch.sh
script for the first time.
Additionally, rebuilding 'jch' takes place in two steps. First, running
the script up to the first "### match next" cut-line, and then comparing
the result with what's on 'next' (i.e. with "git diff jch next"). Then,
the remaining topics (which aren't on 'next') get merged down to 'jch'
by running the entire "redo-jch.sh" script.
Clarify the documentation to reflect this.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Extend the tests added in c553c72eed (run-command: add an
asynchronous parallel child processor, 2015-12-15) to test stdout in
addition to stderr.
When the "ungroup" feature was added in fd3aaf53f7 (run-command: add
an "ungroup" option to run_process_parallel(), 2022-06-07) its tests
were made to test both the stdout and stderr, but these existing tests
were left alone. Let's also exhaustively test our expected output
here.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Fix test patterns added in 62104ba14a (submodules: allow parallel
fetching, add tests and documentation, 2015-12-15) and
a028a1930c (fetching submodules: respect `submodule.fetchJobs` config
option, 2016-02-29).
In the former case we were leaving a trace.out file at the top-level
for any subsequent tests (there are none, currently). Let's clean the
file up instead.
In the latter case we were testing that a given configuration would
result in "N tasks" in the log, but we were grepping through the log
for all previous such tests, when we really meant to clear the logs
between the "grep" invocations.
In practice this resulted in no logic error, as e.g. "--fetch 7" would
not print out a "9 tasks" line, but let's be paranoid and stop
implicitly assuming that that's the case.
This change was originally left out of 51243f9f0f (run-command API:
don't fall back on online_cpus(), 2022-10-12), which added the
">trace.out" seen at the end of the context.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
The tests added in 96e7225b31 (hook: add 'run' subcommand,
2021-12-22) were redirecting to "actual" both in the body of the hook
itself and in the testing code below.
The net result was that the "2>>actual" redirection later in the test
wasn't doing anything. Let's have those redirections do what it looks
like they're doing.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Rewrite a deep recursion in the skipping negotiator to use a loop
with an on-heap prio queue to avoid stack wastage.
* jt/skipping-negotiator-wo-recursion:
negotiator/skipping: avoid stack overflow
"git merge-tree --stdin" is a new way to request a series of merges
and report the merge results.
* en/merge-tree-sequence:
merge-tree: support multiple batched merges with --stdin
merge-tree: update documentation for differences in -z output
Define the logical elements of a "bundle list", data structure to
store them in-core, format to transfer them, and code to parse
them.
* ds/bundle-uri-3:
bundle-uri: suppress stderr from remote-https
bundle-uri: quiet failed unbundlings
bundle: add flags to verify_bundle()
bundle-uri: fetch a list of bundles
bundle: properly clear all revision flags
bundle-uri: limit recursion depth for bundle lists
bundle-uri: parse bundle list in config format
bundle-uri: unit test "key=value" parsing
bundle-uri: create "key=value" line parsing
bundle-uri: create base key-value pair parsing
bundle-uri: create bundle_list struct and helpers
bundle-uri: use plain string in find_temp_filename()
"git branch --edit-description" can exit with status -1 which is
not a good practice; it learned to use 1 as everybody else instead.
* rj/branch-do-not-exit-with-minus-one-status:
branch: error code with --edit-description
The role the security mailing list plays in an embargoed release
has been documented.
* jr/embargoed-releases-doc:
embargoed releases: also describe the git-security list and the process
Merging a branch with directory renames into a branch that changes
the directory to a symlink was mishandled by the ort merge
strategy, which has been corrected.
* en/ort-dir-rename-and-symlink-fix:
merge-ort: fix bug with dir rename vs change dir to symlink
A bugfix to "git subtree" in its split and merge features.
* pb/subtree-split-and-merge-after-squashing-tag-fix:
subtree: fix split after annotated tag was squashed merged
subtree: fix squash merging after annotated tag was squashed merged
subtree: process 'git-subtree-split' trailer in separate function
subtree: use named variables instead of "$@" in cmd_pull
subtree: define a variable before its first use in 'find_latest_squash'
subtree: prefix die messages with 'fatal'
subtree: add 'die_incompatible_opt' function to reduce duplication
subtree: use 'git rev-parse --verify [--quiet]' for better error messages
test-lib-functions: mark 'test_commit' variables as 'local'
Fix some bugs in the reflog messages when rebasing and change the
reflog messages of "rebase --apply" to match "rebase --merge" with
the aim of making the reflog easier to parse.
* pw/rebase-reflog-fixes:
rebase: cleanup action handling
rebase --abort: improve reflog message
rebase --apply: make reflog messages match rebase --merge
rebase --apply: respect GIT_REFLOG_ACTION
rebase --merge: fix reflog message after skipping
rebase --merge: fix reflog when continuing
t3406: rework rebase reflog tests
rebase --apply: remove duplicated code
"git rebase --keep-base" used to discard the commits that are
already cherry-picked to the upstream, even when "keep-base" meant
that the base, on top of which the history is being rebuilt, does
not yet include these cherry-picked commits. The --keep-base
option now implies --reapply-cherry-picks and --no-fork-point
options.
* pw/rebase-keep-base-fixes:
rebase --keep-base: imply --no-fork-point
rebase --keep-base: imply --reapply-cherry-picks
rebase: factor out branch_base calculation
rebase: rename merge_base to branch_base
rebase: store orig_head as a commit
rebase: be stricter when reading state files containing oids
t3416: set $EDITOR in subshell
t3416: tighten two tests
Two new facilities, "timer" and "counter", are introduced to the
trace2 API.
* jh/trace2-timers-and-counters:
trace2: add global counter mechanism
trace2: add stopwatch timers
trace2: convert ctx.thread_name from strbuf to pointer
trace2: improve thread-name documentation in the thread-context
trace2: rename the thread_name argument to trace2_thread_start
api-trace2.txt: elminate section describing the public trace2 API
tr2tls: clarify TLS terminology
trace2: use size_t alloc,nr_open_regions in tr2tls_thread_ctx
"git shortlog" learned to group by the "format" string.
* tb/shortlog-group:
shortlog: implement `--group=committer` in terms of `--group=<format>`
shortlog: implement `--group=author` in terms of `--group=<format>`
shortlog: extract `shortlog_finish_setup()`
shortlog: support arbitrary commit format `--group`s
shortlog: extract `--group` fragment for translation
shortlog: make trailer insertion a noop when appropriate
shortlog: accept `--date`-related options
Code simplification by using strvec_pushf() instead of building an
argument in a separate strbuf.
* rs/absorb-git-dir-simplify:
submodule: use strvec_pushf() for --super-prefix
The way "git repack" creared temporary files when it received a
signal was prone to deadlocking, which has been corrected.
* jk/repack-tempfile-cleanup:
t7700: annotate cruft-pack failure with ok=sigpipe
repack: drop remove_temporary_files()
repack: use tempfiles for signal cleanup
repack: expand error message for missing pack files
repack: populate extension bits incrementally
repack: convert "names" util bitfield to array
Make sure generated dependency file is stably sorted to help
developers debugging their build issues.
* sg/stable-docdep:
Documentation/build-docdep.perl: generate sorted output
A new "--include-whitespace" option is added to "git patch-id", and
existing bugs in the internal patch-id logic that did not match
what "git patch-id" produces have been corrected.
* jz/patch-id:
builtin: patch-id: remove unused diff-tree prefix
builtin: patch-id: add --verbatim as a command mode
patch-id: fix patch-id for mode changes
builtin: patch-id: fix patch-id with binary diffs
patch-id: use stable patch-id for rebases
patch-id: fix stable patch id for binary / header-only