Crash fix for a corner case where an error codepath tried to unlock
what it had not acquired a lock on.
* mr/packed-ref-store-fix:
files_initial_transaction_commit(): only unlock if locked
The http tracing code, often used to debug connection issues,
learned to redact potentially sensitive information from its output
so that it can be shared more safely.
* jt/http-redact-cookies:
http: support omitting data from traces
http: support cookie redaction when tracing
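For illustration, a redacted trace might be requested along these lines
(GIT_TRACE_CURL is the pre-existing trace knob; the redaction control
names and values below are taken from this topic as best understood and
should be treated as illustrative):

    GIT_TRACE_CURL=/tmp/curl.trace \
    GIT_TRACE_CURL_NO_DATA=1 \
    GIT_REDACT_COOKIES=sessionid,token \
        git ls-remote https://example.com/repo.git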
The tracing machinery learned to report tweaking of environment
variables as well.
* nd/trace-with-env:
run-command.c: print new cwd in trace_run_command()
run-command.c: print env vars in trace_run_command()
run-command.c: print program 'git' when tracing git_cmd mode
run-command.c: introduce trace_run_command()
trace.c: move strbuf_release() out of print_trace_line()
trace: avoid unnecessary quoting
sq_quote_argv: drop maxlen parameter
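A quick way to see the new output is to enable command tracing and look
at the run_command lines; the exact trace format is whatever this topic
prints, and the invocation below is only illustrative:

    GIT_TRACE=1 git fetch origin 2>&1 | grep 'run_command'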
Rewrite two more "git submodule" subcommands in C.
* pc/submodule-helper:
submodule: port submodule subcommand 'deinit' from shell to C
submodule: port submodule subcommand 'sync' from shell to C
Avoid showing a warning message in the middle of a line of "git
diff" output.
* nd/diff-flush-before-warning:
diff.c: flush stdout before printing rename warnings
Doc updates.
* ks/submodule-doc-updates:
Doc/git-submodule: improve readability and grammar of a sentence
Doc/gitsubmodules: make some changes to improve readability and syntax
Retire the mru API, as it does not give enough abstraction over the
underlying list API to be worth it.
* gs/retire-mru:
mru: Replace mru.[ch] with list.h implementation
The first step to getting rid of the mru API and using the
doubly-linked list API directly instead.
* ot/mru-on-list:
mru: use double-linked list from list.h
The machinery to clone & fetch, which in turn involves packing and
unpacking objects, has been taught how to omit certain objects using
the filtering mechanism introduced by the jh/object-filtering
topic, and also to mark the resulting pack as a promisor pack to
tolerate missing objects, taking advantage of the mechanism
introduced by the jh/fsck-promisors topic.
* jh/partial-clone:
t5616: test bulk prefetch after partial fetch
fetch: inherit filter-spec from partial clone
t5616: end-to-end tests for partial clone
fetch-pack: restore save_commit_buffer after use
unpack-trees: batch fetching of missing blobs
clone: partial clone
partial-clone: define partial clone settings in config
fetch: support filters
fetch: refactor calculation of remote list
fetch-pack: test support excluding large blobs
fetch-pack: add --no-filter
fetch-pack, index-pack, transport: partial clone
upload-pack: add object filtering for partial clone
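A minimal usage sketch (the filter spec, URL and directory name are
illustrative only):

    git clone --filter=blob:none https://example.com/repo.git repo
    git -C repo fetch origin    # reuses the filter-spec recorded by the partial clone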
In preparation for implementing narrow/partial clone, the machinery
for checking object connectivity used by gc and fsck has been
taught that a missing object is OK when it is referenced by a
packfile specially marked as coming from a trusted repository that
promises to make it available on-demand and lazily.
* jh/fsck-promisors:
gc: do not repack promisor packfiles
rev-list: support termination at promisor objects
sha1_file: support lazily fetching missing objects
introduce fetch-object: fetch one promisor object
index-pack: refactor writing of .keep files
fsck: support promisor objects as CLI argument
fsck: support referenced promisor objects
fsck: support refs pointing to promisor objects
fsck: introduce partialclone extension
extension.partialclone: introduce partial clone extension
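Roughly, a repository opts into this behaviour with configuration along
these lines (the remote name is illustrative, and the exact keys should
be checked against the documentation added by the topic):

    git config extensions.partialClone origin
    git config remote.origin.promisor true
    git fsck    # missing objects referenced by promisor packs are tolerated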
The build procedure for perl/ part has been greatly simplified by
weaning ourselves off of MakeMaker.
* ab/simplify-perl-makefile:
perl: treat PERLLIB_EXTRA as an extra path again
perl: avoid *.pmc and fix Error.pm further
Makefile: replace perl/Makefile.PL with simple make rules
If the user presses a key that isn't currently active, then explain why
it isn't active rather than just listing all the keys. It already did
this for some keys; this patch does the same for those that
weren't already handled.
Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If there is only a single hunk then disable searching as there is
nothing to search for. Also print a specific error message if the user
tries to search with '/' when there's only a single hunk rather than
just listing the key bindings.
Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If the user presses a key that add -p wasn't expecting then it prints
a list of key bindings. Although the prompt only lists the active
bindings the help was printed for all bindings. Fix this by using the
list of keys in the prompt to filter the help. Note that the list of
keys was already passed to help_patch_cmd() by the caller so there is
no change needed to the call site.
Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Small changes in messages to fit the style and typography of the rest.
Reuse already translated messages if possible.
Do not translate messages aimed at developers of git.
Fix unit tests depending on the original string.
Use `test_i18ngrep` for tests with translatable strings.
Change and verify rest of tests via `make GETTEXT_POISON=1 test`.
Signed-off-by: Alexander Shopov <ash@kambanaria.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
'git for-each-ref' should error out when invoked with more than one
quoting style options. The tests checking this have two issues:
- They run 'git for-each-ref' upstream of a pipe, hiding its exit
code, and thus don't actually check that 'git for-each-ref' exits
with an error code.
- They check the error message in a rather roundabout way.
Ensure that 'git for-each-ref' exits with an error code using the
'test_must_fail' helper function, and check its error message by
grepping its saved standard error.
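The resulting test pattern looks roughly like this (the option
combination and the expected message are illustrative):

    test_must_fail git for-each-ref --perl --shell refs/heads >output 2>error &&
    test_i18ngrep 'more than one quoting style' error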
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add documentation explaining the functions in color.h.
While at it, migrate the function `color_set` into grep.c,
where its only callers are.
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the description of git interpret-trailers, we describe "a group…of
lines" that have certain characteristics. Ensure both options
describing this group use a singular verb for parallelism.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The new command `git rebase --show-current-patch` is useful for seeing
the commit related to the current rebase state. Some, however, may find
the "git show" command behind it too limiting. You may want to
increase context lines, do a diff that ignores whitespace, and so on.
For these advanced use cases, the user can execute any command they
want with the new pseudo ref REBASE_HEAD.
This also helps show where the stopped commit is from, which is hard
to see from the previous patch which implements --show-current-patch.
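For example, once the rebase stops at a conflicting commit, something
like the following works, using ordinary diff options:

    git show -U15 REBASE_HEAD    # more context lines
    git show -w REBASE_HEAD      # ignore whitespace changes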
Helped-by: Tim Landscheidt <tim@tim-landscheidt.de>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It is useful to see the full patch while resolving conflicts in a
rebase. The only way to do it now is
    less .git/rebase-*/patch
which could turn out to be a lot longer to type if you are in a
linked worktree, or not at the top-level directory. On top of that, an
ordinary user should not need to peek into the .git directory. The new
option is provided to examine the patch.
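That is, instead of the 'less' invocation above, one can simply run:

    git rebase --show-current-patch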
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Pointing the user to $GIT_DIR/rebase-apply may encourage them to mess
around in there, which is not a good thing. With this, the user does
not have to keep the path around somewhere (because after a couple of
commands, the path may be out of scrollback buffer) when they need to
look at the patch.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git worktree remove" basically consists of two things
- delete $GIT_WORK_TREE
- delete $GIT_DIR (which is $SUPER_GIT_DIR/worktrees/something)
If $GIT_WORK_TREE is already gone for some reason, we should be able
to finish the job by deleting $GIT_DIR.
Two notes:
- $GIT_WORK_TREE _can_ be missing if the worktree is locked. In that
case we must not delete $GIT_DIR, because the real $GIT_WORK_TREE may
be on a USB stick somewhere. This is already handled because we
check for the lock first.
- validate_worktree() is still called because it may do more checks in
the future (and it already does something else, like checking the main
worktree, but that's irrelevant in this case)
Noticed-by: Kaartic Sivaraam <kaartic.sivaraam@gmail.com>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This command allows deleting a worktree. Like 'move', you cannot
remove the main worktree, or one with submodules inside [1].
When deleting $GIT_WORK_TREE, untracked files or any staged entries are
considered precious and therefore prevent removal by default. Ignored
files are not precious.
When it comes to deleting $GIT_DIR, there's no "clean" check because
there should not be any valuable data in there, except:
- HEAD reflog. There is nothing we can do about this until somebody
steps up and implements the ref graveyard.
- Detached HEAD. Technically it can still be recovered, although it
may be nice to warn about orphaned commits like 'git checkout' does.
[1] We do 'git status' with --ignore-submodules=all for safety
anyway. But this needs a closer look by submodule people before we
can allow deletion. For example, if a submodule is totally clean,
but its repo is not absorbed into the main .git dir, then deleting
the worktree also deletes the valuable submodule repo with it.
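A hedged usage sketch (paths are illustrative; --force is assumed to be
the usual escape hatch):

    git worktree add ../hotfix
    git worktree remove ../hotfix            # refuses if untracked/staged files exist
    git worktree remove --force ../hotfix    # removes anyway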
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Submodules contain .git files with relative paths. After a worktree
move, these files need to be updated or they may point to nowhere.
This is a band-aid patch to make sure "worktree move" doesn't break
people's worktrees by accident. Once the .git file update code is in
place, this validate_no_submodules() check can be removed.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Similar to "mv a b/", which is actually "mv a b/a", we extract basename
of source worktree and create a directory of the same name at
destination if dst path is a directory.
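For instance (paths illustrative):

    git worktree move ../topic ../archive/    # ends up at ../archive/topic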
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This command allows relocating linked worktrees. The main worktree
cannot (yet) be moved.
There are two options for moving the main worktree, but both have
complications, so it's not implemented yet. The options are:
- convert the main worktree to a linked one and move it away, leave
the git repository where it is. The repo essentially becomes bare
after this move.
- move the repository with the main worktree. The tricky part is making
sure all file descriptors to the repository are closed, or the move may
fail on Windows.
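A minimal sketch of the supported (linked-worktree) case, with
illustrative paths:

    git worktree add ../feature-x
    git worktree move ../feature-x ../projects/feature-x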
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In check_ignore(), the first pathspec item determines the dtype for any
subsequent ones. That means that a pathspec matching a regular file can
prevent following pathspecs from matching directories, which makes no
sense. Fix that by determining the dtype for each pathspec separately,
by passing the value DT_UNKNOWN to last_exclude_matching() each time.
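A hedged sketch of the kind of case this affects, inside a test
repository (the exact reproduction in the accompanying test may differ):

    touch README
    echo 'ignored-dir/' >.gitignore
    mkdir ignored-dir
    git check-ignore README ignored-dir    # ignored-dir should be reported as ignored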
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When the batch size is neither configured nor given on the command
line, but the relogin delay is given, then the current code ignores
the relogin delay setting.
This is unsafe, as the user clearly had some intention when setting
the relogin delay.
One workaround would be to just assume a batch size of 1 as a default.
This, however, may be bad UX, as the user may then wonder why mails are
being sent slowly without any apparent batching.
Error out for now instead of potentially confusing the user.
As 5453b83bdf (send-email: --batch-size to work around some SMTP
server limit, 2017-05-21) lays out, we would rather not have this
interface at all and instead react dynamically to server throttling.
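For reference, the two options are meant to be used together like this
(addresses and values are illustrative):

    git send-email --batch-size=100 --relogin-delay=5 --to=list@example.com 00*.patch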
Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Prior to 644eb60bd0 (builtin/describe.c: describe a blob,
2017-11-15), we noticed and complained about missing
objects, since they were not valid commits:
    $ git describe 0000000000000000000000000000000000000000
    fatal: 0000000000000000000000000000000000000000 is not a valid 'commit' object
After that commit, we feed any non-commit to lookup_blob(),
and complain only if it returns NULL. But the lookup_*
functions do not actually look at the on-disk object
database at all. They return an entry from the in-memory
object hash if present (and if it matches the requested
type), and otherwise auto-create a "struct object" of the
requested type.
A missing object would hit that latter case: we create a
bogus blob struct, walk all of history looking for it, and
then exit successfully having produced no output.
One reason nobody may have noticed this is that some related
cases do still work OK:
1. If we ask for a tree by sha1, then the call to
lookup_commit_reference_gently() would have parsed it,
and we would have its true type in the in-memory object
hash.
2. If we ask for a name that doesn't exist but isn't a
40-hex sha1, then get_oid() would complain before we
even look at the objects at all.
We can fix this by replacing the lookup_blob() call with a
check of the true type via sha1_object_info(). This is not
quite as efficient as we could possibly make this check. We
know in most cases that the object was already parsed in the
earlier commit lookup, so we could call lookup_object(),
which does not auto-create, and check the resulting struct's
type (or NULL). However, it's not worth the fragility or
code complexity to save a single object lookup.
The new tests cover this case, as well as that of a
tree-by-sha1 (which does work as described above, but was
not explicitly tested).
Noticed-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Jeff King <peff@peff.net>
Acked-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Sparse has, for a long time, been issuing the following warning against
the pack-revindex.c file:
    SP pack-revindex.c
    pack-revindex.c:64:23: warning: memset with byte count of 262144
This results from an unconditional check, with a hard-coded limit, which
is really only appropriate for the kernel source code. (The check is for
a 'large' byte count in calls to the memcpy(), memset(), copy_from_user()
and copy_to_user() functions.)
A recent release of sparse (v0.5.1) has introduced some options to allow
this check to be turned off (-Wno-memcpy-max-count) or to specify the
actual limit used (-fmemcpy-max-count=COUNT), rather than a hard-coded
limit of 100000.
In order to suppress the warning, add a target for pack-revindex.sp that
adds the '-Wno-memcpy-max-count' option to the SPARSE_FLAGS variable.
Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since commit f66450ae9 ("cygwin: Remove the Win32 l/stat() implementation",
2013-06-22), the cygwin build has not used the WIN32 API/header files.
This means that the '-isystem /usr/include/w32api' option to sparse is
no longer necessary (to allow sparse to find the WIN32 header files).
In addition, the '-Wno-one-bit-signed-bitfield' option can be removed,
since the warning suppressed by that option was only provoked by a WIN32
header file.
Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Attempting to grep the output of test_i18ngrep will not work under a
poison build, since the output is (almost) guaranteed not to have the
string you are looking for. In this case, we have a test_i18ngrep call
which attempts to filter the contents of a file, which was itself the
result of a call to test_i18ngrep. Here, we can achieve the
same effect with a single call to test_i18ngrep (without creating the
intermediate file), since the second regular expression can be used
without change to filter the original input.
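In other words, the change is of this shape (patterns and file names
are hypothetical):

    # before: the first call's output was saved and filtered again
    test_i18ngrep 'pattern-A' actual >intermediate &&
    test_i18ngrep 'pattern-B' intermediate

    # after: a single call filtering the original input
    test_i18ngrep 'pattern-B' actual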
Also, replace a call to test_i18ncmp with test_cmp, since the content
being compared is not subject to i18n anyway.
Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This ancient test script does a lot of manual checking of
test conditions with "if" blocks. We can simplify this
by relying on helpers like test_must_fail.
Note that a failing "grep" call here won't produce any
verbose output, but that's OK. These days we rely on "-x" to
tell us about such commands. In addition, these greps
are soon to be converted to test_i18ngrep (which is itself
soon learning to be more verbose).
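The conversion is essentially of this shape (the command is
hypothetical):

    # before
    if git some-command
    then
        echo 'should not have succeeded'
        false
    fi

    # after
    test_must_fail git some-command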
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since 'test_might_fail' is implemented as a thin wrapper around
'test_must_fail', it also accepts the same options. Mention this in
the docs as well.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Note the caveat that 2.17 is stricter about index validation,
potentially causing "could not open directory" warnings when git is
upgraded. See the preceding "dir.c: stop ignoring opendir() error in
open_cached_dir()" change.
This caused some mayhem when I upgraded git to a version with this
series at Booking.com, and other users have doubtless enabled the UC
extension and are in for a surprise when they upgrade. Let's give them
a heads-up in the docs.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Document the bug tested for in my "status: add a failing test showing
a core.untrackedCache bug" and fixed in Duy's "dir.c: fix missing dir
invalidation in untracked code".
Since this is very likely something others will encounter in the
future on older versions, and it's not obvious how to fix it, let's
document both that it exists and how to "fix" it with a one-off
command.
As noted in that commit, even though this bug gets the untracked cache
into a bad state, we have not yet found a case where this is user
visible, and thus it makes sense for these docs to focus on the
symlink case only.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Make the new --prune-tags option work properly when git-fetch is
invoked with a <url> parameter instead of a <remote name>
parameter.
This change is split off from the introduction of --prune-tags due to
the relative complexity of munging the incoming argv, which is easier
to review as a separate change.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a --prune-tags option to git-fetch, along with fetch.pruneTags
config option and a -P shorthand (-p is --prune). This allows for
doing any of:
    git fetch -p -P
    git fetch --prune --prune-tags
    git fetch -p -P origin
    git fetch --prune --prune-tags origin
Or simply:
    git config fetch.prune true &&
    git config fetch.pruneTags true &&
    git fetch
Instead of the much more verbose:
    git fetch --prune origin 'refs/tags/*:refs/tags/*' '+refs/heads/*:refs/remotes/origin/*'
Before this feature it was painful to support the use-case of pulling
from a repo which is having both its branches *and* tags deleted
regularly, and have our local references reflect upstream.
At work we create deployment tags in the repo for each rollout, and
there's *lots* of those, so they're archived within weeks for
performance reasons.
Without this change it's hard to centrally configure such repos in
/etc/gitconfig (on servers that are only used for working with
them). You need to set fetch.prune=true globally, and then for each
repo:
    git -C {} config --replace-all remote.origin.fetch "refs/tags/*:refs/tags/*" "^\+*refs/tags/\*:refs/tags/\*$"
Now I can simply set fetch.pruneTags=true in /etc/gitconfig as well,
and users running "git pull" will automatically get the pruning
semantics I want.
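That is, on such servers the central configuration boils down to
(--system writes to /etc/gitconfig):

    git config --system fetch.prune true
    git config --system fetch.pruneTags true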
Even though "git remote" has corresponding "prune" and "update
--prune" subcommands I'm intentionally not adding a corresponding
prune-tags or "update --prune --prune-tags" mode to that command.
It's advertised (as noted in my recent "git remote doc: correct
dangerous lies about what prune does") as only modifying remote
tracking references, whereas any --prune-tags option is always going
to modify what from the user's perspective is a local copy of the tag,
since there's no such thing as a remote tracking tag.
Ideally add_prune_tags_to_fetch_refspec() would be something that
would use ALLOC_GROW() to grow the 'fetch' member of the 'remote'
struct. Instead I'm realloc-ing remote->fetch and adding the
tag_refspec to the end.
The reason is that parse_{fetch,push}_refspec which allocate the
refspec (ultimately remote->fetch) struct are called many places that
don't have access to a 'remote' struct. It would be hard to change all
their callsites to be amenable to carry around the bookkeeping
variables required for dynamic allocation.
All the other callers of the API first incrementally construct the
string version of the refspec in remote->fetch_refspec via
add_fetch_refspec(), before finally calling parse_fetch_refspec() via
some variation of remote_get().
It's less of a pain to deal with the one special case that needs to
modify already constructed refspecs than to chase down and change all
the other callsites. The API I'm adding is intentionally not
generalized because if we add more of these we'd probably want to
re-visit how this is done.
See my "Re: [BUG] git remote prune removes local tags, depending on
fetch config" (87po6ahx87.fsf@evledraar.gmail.com;
https://public-inbox.org/git/87po6ahx87.fsf@evledraar.gmail.com/) for
more background info.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The fetch.pruneTags configuration doesn't exist yet, but will be added
in a subsequent commit. Since testing for it requires adding new
parameters to the test_configured_prune function it's easier to review
this patch first to assert that no functional changes are introduced
yet.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Amend the documentation for fetch.prune, fetch.<name>.prune and
--prune to link to the recently added PRUNING section.
I'd have liked to link directly to it with "<<PRUNING>>" from
fetch-options.txt, since it's included in git-fetch.txt (git-pull.txt
also includes it, but doesn't include that option). However making a
reference across files yields this error:
    [...]/Documentation/git-fetch.xml:226: element xref: validity
    error : IDREF attribute linkend references an unknown ID "PRUNING"
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The "git remote prune <name>" command uses the same machinery as "git
fetch <name> --prune", and shares all the same caveats, but its
documentation has suggested that it'll just "delete stale
remote-tracking branches under <name>".
This isn't true, and hasn't been true since at least v1.8.5.6 (the
oldest version I could be bothered to test).
E.g. if "refs/tags/*:refs/tags/*" is explicitly set in the refspec of
the remote, it'll delete all local tags <name> doesn't know about.
Instead, briefly give the reader just enough of a hint that this
option might constitute a shotgun aimed at their foot, and point them
to the new PRUNING section in the git-fetch documentation which
explains all the nuances of what this facility does.
See "[BUG] git remote prune removes local tags, depending on fetch
config" (CACi5S_39wNrbfjLfn0xhCY+uewtFN2YmnAcRc86z6pjUTjWPHQ@mail.gmail.com)
by Michael Giuffrida for the initial report.
Reported-by: Michael Giuffrida <michaelpg@chromium.org>
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a new section to canonically explain how remote reference pruning
works, and how users should be careful about using it in conjunction
with tag refspecs in particular.
A subsequent commit will update the git-remote documentation to refer
to this section, and details the motivation for writing this in the
first place.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When a remote URL is supplied on the command-line the internals of the
fetch are different, in particular the code in get_ref_map(). An
earlier version of the subsequent fetch.pruneTags patch had a segfault
that went unnoticed because the difference wasn't tested for.
Now all the tests are run in both of these variants:
    git fetch
    git -c [...] fetch $(git config remote.origin.url) $(git config remote.origin.fetch)
I'm using -c because while the [fetch] config just set by
set_config_tristate will be picked up, the remote.origin.* config
won't override it as intended.
Work around that and turn this into a purely command-line test by
always setting the variables on the command-line, and translate any
setting of remote.origin.X into fetch.X.
The reason for choosing the names "name" and "link" as opposed to
e.g. "named" and "url" is because they're the same length, which makes
the test output easier to read as it will be aligned.
Due to shellscript quoting madness it's not worthwhile to do all of
this within a test_expect_success, but do the parts that can easily be
done there, including the one-time setting of variables that don't
change between runs to be used by subsequent runs in the 'prune_type
setup' test.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>