"git push" and "git clone" learned to give better progress meters
to the end user who is waiting on the terminal.
* jk/push-progress:
receive-pack: send keepalives during quiet periods
receive-pack: turn on connectivity progress
receive-pack: relay connectivity errors to sideband
receive-pack: turn on index-pack resolving progress
index-pack: add flag for showing delta-resolution progress
clone: use a real progress meter for connectivity check
check_connected: add progress flag
check_connected: relay errors to alternate descriptor
check_everything_connected: use a struct with named options
check_everything_connected: convert to argv_array
rev-list: add optional progress reporting
check_everything_connected: always pass --quiet to rev-list
The API to iterate over all the refs (i.e. for_each_ref(), etc.)
has been revamped.
* mh/ref-iterators:
for_each_reflog(): reimplement using iterators
dir_iterator: new API for iterating over a directory tree
for_each_reflog(): don't abort for bad references
do_for_each_ref(): reimplement using reference iteration
refs: introduce an iterator interface
ref_resolves_to_object(): new function
entry_resolves_to_object(): rename function from ref_resolves_to_object()
get_ref_cache(): only create an instance if there is a submodule
remote rm: handle symbolic refs correctly
delete_refs(): add a flags argument
refs: use name "prefix" consistently
do_for_each_ref(): move docstring to the header file
refs: remove unnecessary "extern" keywords
The number of variants of check_everything_connected has
grown over the years, so that the "real" function takes
several possibly-zero, possibly-NULL arguments. We hid the
complexity behind some wrapper functions, but this doesn't
scale well when we want to add new options.
If we add more wrapper variants to handle the new options,
then we can get a combinatorial explosion when those options
might be used together (right now nobody wants to use both
"shallow" and "transport" together, so we get by with just a
few wrappers).
If instead we add new parameters to each function, each of
which can have a default value, then callers who want the
defaults end up with confusing invocations like:
check_everything_connected(fn, 0, data, -1, 0, NULL);
where it is unclear which parameter is which (and every
caller needs to be updated when we add new options).
Instead, let's add a struct to hold all of the optional
parameters. This is a little more verbose for the callers
(who have to declare the struct and fill it in), but it
makes their code much easier to follow, because every option
is named as it is set (and unused options do not have to be
mentioned at all).
Note that we could also stick the iteration function and its
callback data into the option struct, too. But since those
are required for each call, leaving them out of the struct
lets very simple callers just pass "NULL" for the options and
not worry about the struct at all.
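As a rough sketch of the shape this takes (the field names follow the
description above but are illustrative rather than authoritative, and
the iterator typedef is assumed from connected.h):

    struct check_connected_options {
            int quiet;                   /* suppress errors sent to stderr */
            const char *shallow_file;    /* --shallow-file for the rev-list child */
            struct transport *transport; /* transport being checked, if any */
    };
    #define CHECK_CONNECTED_INIT { 0 }

    int check_connected(sha1_iterate_fn fn, void *cb_data,
                        struct check_connected_options *opt);

    static int simple_caller(sha1_iterate_fn fn, void *cb_data)
    {
            /* callers that want the defaults never see the struct */
            return check_connected(fn, cb_data, NULL);
    }

    static int quiet_caller(sha1_iterate_fn fn, void *cb_data)
    {
            /* callers that need an option name exactly the one they set */
            struct check_connected_options opt = CHECK_CONNECTED_INIT;

            opt.quiet = 1;
            return check_connected(fn, cb_data, &opt);
    }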
While we're touching each site, let's also rename the
function to check_connected(). The existing name was quite
long, and not all of the wrappers even used the full name.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Improve the look of the way "git fetch" reports what happened to
each ref that was fetched.
* nd/fetch-ref-summary:
fetch: reduce duplicate in ref update status lines with placeholder
fetch: align all "remote -> local" output
fetch: change flag code for displaying tag update and deleted ref
fetch: refactor ref update status formatting code
git-fetch.txt: document fetch output
The ownership rule for the piece of memory that holds references to
be fetched in "git fetch" was screwy, which has been cleaned up.
* km/fetch-do-not-free-remote-name:
builtin/fetch.c: don't free remote->name after fetch
In the "remote -> local" line, if either ref is a substring of the
other, the common part in the other string is replaced with "*". For
example
abc -> origin/abc
refs/pull/123/head -> pull/123
become
abc -> origin/*
refs/*/head -> pull/123
Activated with fetch.output=compact.
For the record, this output is not perfect. A single giant ref can
push all refs very far to the right and likely be wrapped around. We
may have a few options:
- exclude these long lines smarter
- break the line after "->", exclude it from column width calculation
- implement a new format, { -> origin/}foo, which makes the problem
go away at the cost of being a bit harder to read
- reverse all the arrows so we have "* <- looong-ref", again still
hard to read.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We do align "remote -> local" output by allocating 10 columns to
"remote". That produces aligned output only for short refs. An extra
pass is performed to find the longest remote ref name (that does not
produce a line longer than terminal width) to produce better aligned
output.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This makes the fetch flag code consistent with push, where '-' means
deleted ref.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This makes it easier to change the formatting later. And it makes sure
translators cannot mess up format specifiers and break Git.
There are a couple of call sites where the length of the second column
is TRANSPORT_SUMMARY_WIDTH instead of being calculated by
TRANSPORT_SUMMARY(), which is now enforced. The result should be the
same because these call sites do not contain characters outside the
ASCII range.
Using two strbuf_addf() calls instead of one is mostly to reduce
diff noise in a future patch where "ref -> ref" is reformatted
differently.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This will be useful for passing REF_NODEREF through.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Make fetch's string_list of remote names own all of its string items
(strdup'ing when necessary) so that it can deallocate them safely
when clearing.
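A small sketch of that ownership pattern with the string_list API (the
surrounding functions are hypothetical):

    #include "string-list.h"

    static void remember_name(struct string_list *seen, const char *name)
    {
            /* STRING_LIST_INIT_DUP makes the list strdup() what it stores */
            string_list_append(seen, name);
    }

    static void example(void)
    {
            struct string_list seen = STRING_LIST_INIT_DUP;

            remember_name(&seen, "origin");
            /* ... use the list ... */
            string_list_clear(&seen, 0); /* frees the duplicated copies */
    }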
Signed-off-by: Keith McGuigan <kmcguigan@twopensource.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In git-fetch, the --depth argument is always relative to the latest
remote refs. This makes it a bit difficult to cover this use case,
where the user wants to make the shallow history, say, 3 levels
deeper. It would work if remote refs have not moved yet, but nobody
can guarantee that, especially when that use case is performed a
couple months after the last clone or "git fetch --depth". Also,
modifying shallow boundary using --depth does not work well with
clones created by --since or --not.
This patch fixes that. A new argument --deepen=<N> will add <N> more (*)
parent commits to the current history regardless of where remote refs
are.
Have/Want negotiation is still respected. So if remote refs move, the
server will send two chunks: one between "have" and "want" and another
to extend shallow history. In theory, the client could send no "want"s
in order to get the second chunk only. But the protocol does not allow
that. Either you send no want lines, which means ls-remote; or you
have to send at least one want line that carries the deep-relative
request to the server.
The main work was done by Dongcan Jiang. I fixed it up here and there.
And of course all the bugs belong to me.
(*) We could even support --deepen=<N> where <N> is negative. In that
case we can cut some history from the shallow clone. This operation
(and --depth=<shorter depth>) does not require interaction with the
remote side (and is more complicated to implement as a result).
Helped-by: Duy Nguyen <pclouds@gmail.com>
Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Dongcan Jiang <dongcan.jiang@gmail.com>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The code for warning_errno/die_errno has been refactored and a new
error_errno() reporting helper is introduced.
* nd/error-errno: (41 commits)
wrapper.c: use warning_errno()
vcs-svn: use error_errno()
upload-pack.c: use error_errno()
unpack-trees.c: use error_errno()
transport-helper.c: use error_errno()
sha1_file.c: use {error,die,warning}_errno()
server-info.c: use error_errno()
sequencer.c: use error_errno()
run-command.c: use error_errno()
rerere.c: use error_errno() and warning_errno()
reachable.c: use error_errno()
mailmap.c: use error_errno()
ident.c: use warning_errno()
http.c: use error_errno() and warning_errno()
grep.c: use error_errno()
gpg-interface.c: use error_errno()
fast-import.c: use error_errno()
entry.c: use error_errno()
editor.c: use error_errno()
diff-no-index.c: use error_errno()
...
A couple of newlines are also removed, because both error() and
error_errno() automatically append a newline.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A major part of "git submodule update" has been ported to C to take
advantage of the recently added framework to run download tasks in
parallel.
* sb/submodule-parallel-update:
clone: allow an explicit argument for parallel submodule clones
submodule update: expose parallelism to the user
submodule helper: remove double 'fatal: ' prefix
git submodule update: have a dedicated helper for cloning
run_processes_parallel: rename parameters for the callbacks
run_processes_parallel: treat output of children as byte array
submodule update: direct error message to stderr
fetching submodules: respect `submodule.fetchJobs` config option
submodule-config: drop check against NULL
submodule-config: keep update strategy around
This allows configuring fetching and updating in parallel
without having to pass the command-line option each time.
This moved the responsibility to determine how many parallel processes
to start from builtin/fetch to submodule.c as we need a way to communicate
"The user did not specify the number of parallel processes in the command
line options" in the builtin fetch. The submodule code takes care of
the precedence (CLI > config > default).
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Update various codepaths to avoid manually-counted malloc().
* jk/tighten-alloc: (22 commits)
ewah: convert to REALLOC_ARRAY, etc
convert ewah/bitmap code to use xmalloc
diff_populate_gitlink: use a strbuf
transport_anonymize_url: use xstrfmt
git-compat-util: drop mempcpy compat code
sequencer: simplify memory allocation of get_message
test-path-utils: fix normalize_path_copy output buffer size
fetch-pack: simplify add_sought_entry
fast-import: simplify allocation in start_packfile
write_untracked_extension: use FLEX_ALLOC helper
prepare_{git,shell}_cmd: use argv_array
use st_add and st_mult for allocation size computation
convert trivial cases to FLEX_ARRAY macros
use xmallocz to avoid size arithmetic
convert trivial cases to ALLOC_ARRAY
convert manual allocations to argv_array
argv-array: add detach function
add helpers for allocating flex-array structs
harden REALLOC_ARRAY and xcalloc against size_t overflow
tree-diff: catch integer overflow in combine_diff_path allocation
...
The internal API to interact with "remote.*" configuration
variables has been streamlined.
* tg/git-remote:
remote: use remote_is_configured() for add and rename
remote: actually check if remote exits
remote: simplify remote_is_configured()
remote: use parse_config_key
"git fetch" and friends that make network connections can now be
told to only use ipv4 (or ipv6).
* ew/force-ipv4:
connect & http: support -4 and -6 switches for remote operations
If our size computation overflows size_t, we may allocate a
much smaller buffer than we expected and overflow it. It's
probably impossible to trigger an overflow in most of these
sites in practice, but it is easy enough to convert their
additions and multiplications into overflow-checking
variants. This may be fixing real bugs, and it makes
auditing the code easier.
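For illustration, a hedged before/after of this kind of conversion (the
helper names are made up; st_add() and st_mult() are the
overflow-checking helpers from git-compat-util.h, which die() instead
of wrapping around):

    static char *copy_with_nul(const char *buf, size_t len)
    {
            /* was: xmalloc(len + 1), which can wrap around for huge len */
            char *ret = xmalloc(st_add(len, 1));

            memcpy(ret, buf, len);
            ret[len] = '\0';
            return ret;
    }

    static int *dup_int_array(const int *src, size_t n)
    {
            /* was: xmalloc(n * sizeof(int)), whose multiplication can overflow */
            int *ret = xmalloc(st_mult(n, sizeof(int)));

            memcpy(ret, src, n * sizeof(int));
            return ret;
    }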
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The remote_is_configured() function allows checking whether a remote
exists or not. The function however only works if remote_get() wasn't
called before calling it. In addition, it only checks the configuration
for remotes, but not the remotes or branches files.
Make use of the origin member of struct remote instead, which indicates
where the remote comes from. It will be set to some value if the remote
is configured in any file in the repository, but is initialized to 0 if
the remote is only created in make_remote().
Signed-off-by: Thomas Gummerer <t.gummerer@gmail.com>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Sometimes it is necessary to force IPv4-only or IPv6-only operation
on networks where name lookups may return a non-routable address and
stall remote operations.
The ssh(1) command has equivalent switches which we may pass when
we run it. There may be old ssh(1) implementations out there
which do not support these switches; they should report the
appropriate error in that case.
rsync support is untouched for now since it is deprecated and
scheduled to be removed.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Reviewed-by: Torsten Bögershausen <tboegi@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Many codepaths that run "gc --auto" before exiting kept packfiles
mapped and left the file descriptors to them open, which was not
friendly to systems that cannot remove files that are open. They
now close the packs before doing so.
* js/close-packs-before-gc:
receive-pack: release pack files before garbage-collecting
merge: release pack files before garbage-collecting
am: release pack files before garbage-collecting
fetch: release pack files before garbage-collecting
Some codepaths used fopen(3) when opening a fixed path in $GIT_DIR
(e.g. COMMIT_EDITMSG) that is meant to be left after the command is
done. This however did not work well if the repository is set to
be shared with core.sharedRepository and the umask of the previous
user is tighter. They have been made to work better by calling
unlink(2) and retrying after fopen(3) fails with EPERM.
* js/fopen-harder:
Handle more file writes correctly in shared repos
commit: allow editing the commit message even in shared repos
Before auto-gc'ing, we need to make sure that the pack files are
released in case they need to be repacked and garbage-collected.
This fixes https://github.com/git-for-windows/git/issues/500
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In shared repositories, we have to be careful when writing files whose
permissions do not allow users other than the owner to write them.
In particular, we force the marks file of fast-export and the FETCH_HEAD
when fetching to be rewritten from scratch.
This commit does not touch other calls to fopen() that want to
write files:
- commands that write to working tree files (core.sharedRepository
does not affect permission bits of working tree files),
e.g. .rej file created by "apply --reject", result of applying a
previous conflict resolution by "rerere", "git merge-file".
- git am, when splitting mails (git-am correctly cleans up its directory
after finishing, so there is no need to share those files between users)
- git submodule clone, when writing the .git file, because the file
will not be overwritten
- git_terminal_prompt() in compat/terminal.c, because it is not writing to
a file at all
- git diff --output, because the output file is clearly not intended to be
shared between the users of the current repository
- git fast-import, when writing a crash report, because the reports' file
names are unique due to an embedded process ID
- mailinfo() in mailinfo.c, because the output is clearly not intended to
be shared between the users of the current repository
- check_or_regenerate_marks() in remote-testsvn.c, because this is only
used for Git's internal testing
- git fsck, when writing lost&found blobs (this should probably be
changed, but left as a low-hanging fruit for future contributors).
Note that this patch does not touch callers of write_file() and
write_file_gently(), which would benefit from the same scrutiny as
to usage in shared repositories. Most notable users are branch,
daemon, submodule & worktree, and a worrisome call in transport.c
when updating one ref (which ignores the shared flag).
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Convert all instances of get_object_hash to use an appropriate reference
to the hash member of the oid member of struct object. This provides no
functional change, as it is essentially a macro substitution.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Jeff King <peff@peff.net>
Convert most instances where the sha1 member of struct object is
dereferenced to use get_object_hash. Most instances that are passed to
functions that have versions taking struct object_id, such as
get_sha1_hex/get_oid_hex, or instances that can be trivially converted
to use struct object_id instead, are not converted.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Jeff King <peff@peff.net>
Use struct object_id in three fields in struct ref and convert all the
necessary places that use it.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Jeff King <peff@peff.net>
This saves us some manual computation, and eliminates a call
to strcpy.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We generate range strings like "1234abcd...5678efab" for use
in the fetch and push status tables. We use fixed-size
buffers along with strcat to do so. These aren't buggy, as
our manual size computation is correct, but there's nothing
checking that this is so. Let's switch them to strbufs
instead, which are obviously correct, and make it easier to
audit the code base for problematic calls to strcat().
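A hedged sketch of the strbuf version (the function name is made up;
find_unique_abbrev() and DEFAULT_ABBREV come from cache.h):

    static char *format_range(const unsigned char *old_sha1,
                              const unsigned char *new_sha1, int fast_forward)
    {
            struct strbuf quickref = STRBUF_INIT;

            strbuf_addstr(&quickref, find_unique_abbrev(old_sha1, DEFAULT_ABBREV));
            strbuf_addstr(&quickref, fast_forward ? ".." : "...");
            strbuf_addstr(&quickref, find_unique_abbrev(new_sha1, DEFAULT_ABBREV));

            /* no manual size computation; the strbuf grows as needed */
            return strbuf_detach(&quickref, NULL);
    }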
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We format the INFINITE_DEPTH constant into a static,
fixed-size buffer using sprintf. This buffer is sufficiently
large for the current constant, but it's a suspicious
pattern, as the constant is defined far away, and it's not
immediately obvious that 12 bytes are large enough to hold
it.
We can just use xstrfmt here, which gets rid of any question
of the buffer size. It also removes any concerns with object
lifetime, which means we do not have to wonder why this
buffer deep within a conditional is marked "static" (we
never free our newly allocated result, of course, but that's
OK; it's a global that lasts the lifetime of the whole program
anyway).
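A hedged sketch of the xstrfmt() version (set_option() stands in for
fetch's file-local helper; TRANS_OPT_DEPTH is the transport option
name from transport.h):

    static void set_unshallow_depth(struct transport *transport)
    {
            /*
             * was roughly: static char inf_depth[12];
             *              sprintf(inf_depth, "%d", INFINITE_DEPTH);
             */
            set_option(transport, TRANS_OPT_DEPTH,
                       xstrfmt("%d", INFINITE_DEPTH));
            /* never freed, but it lives for the whole run anyway */
    }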
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The gitmodules API accessed from the C code learned to cache stuff
lazily.
* hv/submodule-config:
submodule: allow erroneous values for the fetchRecurseSubmodules option
submodule: use new config API for worktree configurations
submodule: extract functions for config set and lookup
submodule: implement a config API for lookup of .gitmodules values
git_path() and mkpath() are handy helper functions but they are easy
to misuse, as the callers need to be careful to keep the number of
active results below 4. Their uses have been reduced.
* jk/git-path:
memoize common git-path "constant" files
get_repo_path: refactor path-allocation
find_hook: keep our own static buffer
refs.c: remove_empty_directories can take a strbuf
refs.c: avoid git_path assignment in lock_ref_sha1_basic
refs.c: avoid repeated git_path calls in rename_tmp_log
refs.c: simplify strbufs in reflog setup and writing
path.c: drop git_path_submodule
refs.c: remove extra git_path calls from read_loose_refs
remote.c: drop extraneous local variable from migrate_file
prefer mkpathdup to mkpath in assignments
prefer git_pathdup to git_path in some possibly-dangerous cases
add_to_alternates_file: don't add duplicate entries
t5700: modernize style
cache.h: complete set of git_path_submodule helpers
cache.h: clarify documentation for git_path, et al
We should not die when reading the submodule config cache since the
user might not be able to get out of that situation when the
configuration is part of the history.
We should handle this condition later when the value is about to be
used.
Signed-off-by: Heiko Voigt <hvoigt@hvoigt.net>
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
An off-by-one error made "git remote" mishandle a remote with a
single-letter nickname.
* mh/get-remote-group-fix:
get_remote_group(): use skip_prefix()
get_remote_group(): eliminate superfluous call to strcspn()
get_remote_group(): rename local variable "space" to "wordlen"
get_remote_group(): handle remotes with single-character names
One of the most common uses of git_path() is to pass a
constant, like git_path("MERGE_MSG"). This has two
drawbacks:
1. The return value is a static buffer, and the lifetime
is dependent on other calls to git_path, etc.
2. There's no compile-time checking of the pathname. This
is OK for a one-off (after all, we have to spell it
correctly at least once), but many of these constant
strings appear throughout the code.
This patch introduces a series of functions to "memoize"
these strings, which are essentially globals for the
lifetime of the program. We compute the value once, take
ownership of the buffer, and return the cached value for
subsequent calls. cache.h provides a helper macro for
defining these functions as one-liners, and defines a few
common ones for global use.
Using a macro is a little bit gross, but it does nicely
document the purpose of the functions. If we need to touch
them all later (e.g., because we learned how to change the
git_dir variable at runtime, and need to invalidate all of
the stored values), it will be much easier to have the
complete list.
Note that the shared-global functions have separate, manual
declarations. We could do something clever with the macros
(e.g., expand it to a declaration in some places, and a
declaration _and_ a definition in path.c). But there aren't
that many, and it's probably better to stay away from
too-magical macros.
Likewise, if we abandon the C preprocessor in favor of
generating these with a script, we could get much fancier.
E.g., normalizing "FOO/BAR-BAZ" into "git_path_foo_bar_baz".
But the small amount of saved typing is probably not worth
the resulting confusion to readers who want to grep for the
function's definition.
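Roughly, the memoizing helper can look like this (a sketch; the exact
macro and function names may differ from what cache.h and path.c ended
up with):

    #define GIT_PATH_FUNC(func, filename) \
            const char *func(void) \
            { \
                    static char *ret; \
                    if (!ret) \
                            ret = git_pathdup(filename); \
                    return ret; \
            }

    GIT_PATH_FUNC(git_path_merge_msg, "MERGE_MSG")

    /* callers then write git_path_merge_msg() instead of git_path("MERGE_MSG") */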
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
There is no need to call it if value is the empty string. This also
eliminates code duplication.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The code for splitting a whitespace-separated list of values in
"remotes.<name>" had an off-by-one error that caused it to skip over
remotes whose names consist of a single character.
Also remove unnecessary braces.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The old version just looped over the references to delete, calling
delete_ref() on each one. But that has quadratic behavior, because
each call to delete_ref() might have to rewrite the packed-refs file.
This can be very expensive in a repository with a large number of
references. In some (admittedly extreme) repositories, we've seen
cases where the ref-pruning part of fetch takes multiple tens of
minutes.
Instead call delete_refs(), which (aside from being less code) has the
optimization that it only rewrites the packed-refs file a single time.
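A minimal sketch of the change, assuming the delete_ref()/delete_refs()
signatures of that era (the surrounding function is illustrative):

    static void prune_refs_sketch(struct string_list *refs_to_prune)
    {
            /*
             * before: one packed-refs rewrite per deleted ref
             *
             *     struct string_list_item *item;
             *     for_each_string_list_item(item, refs_to_prune)
             *             delete_ref(item->string, NULL, 0);
             */

            /* after: the packed-refs file is rewritten at most once */
            delete_refs(refs_to_prune);
    }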
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Change typedef each_ref_fn to take a "const struct object_id *oid"
parameter instead of "const unsigned char *sha1".
To aid this transition, implement an adapter that can be used to wrap
old-style functions matching the old typedef (which is now called
"each_ref_sha1_fn"), and make such functions callable via the new
interface. This requires the old function and its cb_data to be
wrapped in a "struct each_ref_fn_sha1_adapter", and that object to be
used as the cb_data for an adapter function, each_ref_fn_adapter().
This is an enormous diff, but most of it consists of simple,
mechanical changes to the sites that call any of the "for_each_ref"
family of functions. Subsequent to this change, the call sites can be
rewritten one by one to use the new interface.
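The adapter is roughly this shape (a sketch reconstructed from the
description above):

    struct each_ref_fn_sha1_adapter {
            each_ref_sha1_fn *fn;
            void *cb_data;
    };

    int each_ref_fn_adapter(const char *refname,
                            const struct object_id *oid,
                            int flags, void *cb_data)
    {
            struct each_ref_fn_sha1_adapter *args = cb_data;

            return args->fn(refname, oid->hash, flags, args->cb_data);
    }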
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A replacement for contrib/workdir/git-new-workdir that does not
rely on symbolic links and makes sharing of objects and refs safer
by making the borrowee and borrowers aware of each other.
* nd/multiple-work-trees: (41 commits)
prune --worktrees: fix expire vs worktree existence condition
t1501: fix test with split index
t2026: fix broken &&-chain
t2026 needs procondition SANITY
git-checkout.txt: a note about multiple checkout support for submodules
checkout: add --ignore-other-wortrees
checkout: pass whole struct to parse_branchname_arg instead of individual flags
git-common-dir: make "modules/" per-working-directory directory
checkout: do not fail if target is an empty directory
t2025: add a test to make sure grafts is working from a linked checkout
checkout: don't require a work tree when checking out into a new one
git_path(): keep "info/sparse-checkout" per work-tree
count-objects: report unused files in $GIT_DIR/worktrees/...
gc: support prune --worktrees
gc: factor out gc.pruneexpire parsing code
gc: style change -- no SP before closing parenthesis
checkout: clean up half-prepared directories in --to mode
checkout: reject if the branch is already checked out elsewhere
prune: strategies for linked checkouts
checkout: support checking out into a new working directory
...
Simplify the ref transaction API around how "the ref should be
pointing at this object" is specified.
* mh/refs-have-new:
refs.h: remove duplication in function docstrings
update_ref(): improve documentation
ref_transaction_verify(): new function to check a reference's value
ref_transaction_delete(): check that old_sha1 is not null_sha1
ref_transaction_create(): check that new_sha1 is valid
commit: avoid race when creating orphan commits
commit: add tests of commit races
ref_transaction_delete(): remove "have_old" parameter
ref_transaction_update(): remove "have_old" parameter
struct ref_update: move "have_old" into "flags"
refs.c: change some "flags" to "unsigned int"
refs: remove the gap in the REF_* constant values
refs: move REF_DELETING to refs.c
Instead, verify the reference's old value if and only if old_sha1 is
non-NULL.
ref_transaction_delete() will get the same treatment in a moment.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Reviewed-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A few files include the same header file directly more than once.
As all these headers protect themselves against repeated inclusion
by the "#ifndef FOO_H / #define FOO_H / ... / #endif" idiom, leave
only the first inclusion and remove the later inclusion as a no-op
clean-up.
Signed-off-by: Дилян Палаузов <git-dpa@aegee.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Before the previous commit, get_pathname() returned an array of
PATH_MAX length. Even if git_path() and similar functions do not use
the whole array, a git_path() caller can, in theory.
After that commit, get_pathname() may return a buffer that has just
enough room for the returned string, and a git_path() caller should
never write beyond that.
Make git_path(), mkpath() and git_path_submodule() return a const
buffer to make sure callers do not write in it at all.
This could have been part of the previous commit, but the "const"
conversion is too much distraction from the core changes in path.c.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Corner-case bugfixes for "git fetch" around reflog handling.
* jk/fetch-reflog-df-conflict:
ignore stale directories when checking reflog existence
fetch: load all default config at startup
When we start the git-fetch program, we call git_config to
load all config, but our callback only processes the
fetch.prune option; we do not chain to git_default_config at
all.
This means that we may not load some core configuration
which will have an effect. For instance, we do not load
core.logAllRefUpdates, which impacts whether or not we
create reflogs in a bare repository.
Note that I said "may" above. It gets even more exciting. If
we have to transfer actual objects as part of the fetch,
then we call fetch_pack as part of the same process. That
function loads its own config, which does chain to
git_default_config, impacting global variables which are
used by the rest of fetch. But if the fetch is a pure ref
update (e.g., a new ref which is a copy of an old one), we
skip fetch_pack entirely. So we get inconsistent results
depending on whether or not we have actual objects to
transfer!
Let's just load the core config at the start of fetch, so we
know we have it (we may also load it again as part of
fetch_pack, but that's OK; it's designed to be idempotent).
Our tests check both cases (with and without a pack). We
also check similar behavior for push for good measure, but
it already works as expected.
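The fix amounts to chaining the callback through to
git_default_config(); a sketch (fetch_prune_config stands in for
fetch's file-scope setting):

    static int fetch_prune_config = -1; /* unspecified */

    static int git_fetch_config(const char *k, const char *v, void *cb)
    {
            if (!strcmp(k, "fetch.prune")) {
                    fetch_prune_config = git_config_bool(k, v);
                    return 0;
            }
            return git_default_config(k, v, cb); /* was: return 0; */
    }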
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Change s_update_ref to use a ref transaction for the ref update.
Signed-off-by: Ronnie Sahlberg <sahlberg@google.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Noticed-by: Matthew Flaschen <mflaschen@wikimedia.org>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* jk/xstrfmt:
setup_git_env(): introduce git_path_from_env() helper
unique_path: fix unlikely heap overflow
walker_fetch: fix minor memory leak
merge: use argv_array when spawning merge strategy
sequencer: use argv_array_pushf
setup_git_env: use git_pathdup instead of xmalloc + sprintf
use xstrfmt to replace xmalloc + strcpy/strcat
use xstrfmt to replace xmalloc + sprintf
use xstrdup instead of xmalloc + strcpy
use xstrfmt in favor of manual size calculations
strbuf: add xstrfmt helper
It's easy to get manual allocation calculations wrong, and
the use of strcpy/strcat raises red flags for people looking
for buffer overflows (though in this case each site was
fine).
It's also shorter to use xstrfmt, and the printf format
makes it easier for a reader to see what the final string
will look like.
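For example (hypothetical helper names), the manual pattern and its
xstrfmt() replacement:

    static char *join_path_manual(const char *dir, const char *file)
    {
            char *buf = xmalloc(strlen(dir) + strlen(file) + 2);

            strcpy(buf, dir);
            strcat(buf, "/");
            strcat(buf, file);
            return buf;
    }

    static char *join_path_fmt(const char *dir, const char *file)
    {
            /* sized, allocated and formatted in one call */
            return xstrfmt("%s/%s", dir, file);
    }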
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since the introduction of opportunistic updates of remote-tracking
branches, started at around f2690487 (fetch: opportunistically
update tracking refs, 2013-05-11) with a few updates in v1.8.4 era,
the remote.*.fetch configuration always kicks in even when a refspec
to specify what to fetch is given on the command line, and there is
no way to disable or override it per-invocation.
Teach the command to pay attention to the --refmap=<lhs>:<rhs>
command-line options that can be used to override the use of
configured remote.*.fetch as the refmap.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Shrink lifetime of variables by moving their definitions to an
inner scope where appropriate.
* ep/varscope:
builtin/gc.c: reduce scope of variables
builtin/fetch.c: reduce scope of variable
builtin/commit.c: reduce scope of variables
builtin/clean.c: reduce scope of variable
builtin/blame.c: reduce scope of variables
builtin/apply.c: reduce scope of variables
bisect.c: reduce scope of variable
Fetching from a shallow-cloned repository used to be forbidden,
primarily because the codepaths involved were not carefully vetted
and we did not bother supporting such usage. This attempts to allow
object transfer out of a shallow-cloned repository in a controlled
way (i.e. the receiver becomes a shallow repository with truncated
history).
* nd/shallow-clone: (31 commits)
t5537: fix incorrect expectation in test case 10
shallow: remove unused code
send-pack.c: mark a file-local function static
git-clone.txt: remove shallow clone limitations
prune: clean .git/shallow after pruning objects
clone: use git protocol for cloning shallow repo locally
send-pack: support pushing from a shallow clone via http
receive-pack: support pushing to a shallow clone via http
smart-http: support shallow fetch/clone
remote-curl: pass ref SHA-1 to fetch-pack as well
send-pack: support pushing to a shallow clone
receive-pack: allow pushes that update .git/shallow
connected.c: add new variant that runs with --shallow-file
add GIT_SHALLOW_FILE to propagate --shallow-file to subprocesses
receive/send-pack: support pushing from a shallow clone
receive-pack: reorder some code in unpack()
fetch: add --update-shallow to accept refs that update .git/shallow
upload-pack: make sure deepening preserves shallow roots
fetch: support fetching from a shallow repository
clone: support remote shallow repository
...
When we have a remote-tracking branch named "frotz/nitfol" from a
previous fetch, and the upstream now has a branch named "frotz",
fetch would fail to remove "frotz/nitfol" with a "git fetch --prune"
from the upstream. git would inform the user to use "git remote
prune" to fix the problem.
Change the way "fetch --prune" works by moving the pruning operation
before the fetching operation. This way, instead of warning the user
of a conflict, it automatically fixes it.
Signed-off-by: Tom Miller <jackerran@gmail.com>
Tested-by: Thomas Rast <tr@thomasrast.ch>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If "fetch --prune" is run with no new refs to fetch, but it has refs
to prune. Then, the header url is not printed as it would if there were
new refs to fetch.
Output before this patch:
$ git fetch --prune remote-with-no-new-refs
x [deleted] (none) -> origin/world
Output after this patch:
$ git fetch --prune remote-with-no-new-refs
From https://github.com/git/git
x [deleted] (none) -> origin/test
Signed-off-by: Tom Miller <jackerran@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git fetch --depth=0" was a no-op, and was silently
ignored. Diagnose it as an error.
* nd/transport-positive-depth-only:
clone,fetch: catch non positive --depth option value
Remove a few duplicate implementations of prefix/suffix comparison
functions, and rename them to starts_with and ends_with.
* cc/starts-n-ends-with:
replace {pre,suf}fixcmp() with {starts,ends}_with()
strbuf: introduce starts_with() and ends_with()
builtin/remote: remove postfixcmp() and use suffixcmp() instead
environment: normalize use of prefixcmp() by removing " != 0"
The same steps are done as when --update-shallow is not given. The
only difference is we now add all shallow commits in "ours" and
"theirs" to .git/shallow (aka "step 8").
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This patch just puts together pieces from the 8 steps patch. We stop at
step 7 and reject refs that require new shallow commits.
Note that, by rejecting refs that require new shallow commits, we
leave dangling objects in the repo, which become "object islands" by
the next "git fetch" of the same source.
If on the first fetch our "ours" set is empty and we do practically
nothing at step 7, "ours" is full at the next fetch and we may need to
walk through commits for the reachability test. Room for improvement.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Instead of simply ignoring the value passed to --depth option when
it is zero or negative, catch and report it as an error to let
people know that they were using the option incorrectly.
Original-patch-by: Andrés G. Aragoneses <knocte@gmail.com>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Leaving only the function definitions and declarations so that any
new topic in flight can still make use of the old functions, replace
existing uses of the prefixcmp() and suffixcmp() with new API
functions.
The change can be recreated by mechanically applying this:
$ git grep -l -e prefixcmp -e suffixcmp -- \*.c |
grep -v strbuf\\.c |
xargs perl -pi -e '
s|!prefixcmp\(|starts_with\(|g;
s|prefixcmp\(|!starts_with\(|g;
s|!suffixcmp\(|ends_with\(|g;
s|suffixcmp\(|!ends_with\(|g;
'
on the result of preparatory changes in this series.
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Change the loop body into the more straightforward
* remove item from the front of the old list
* if necessary, add it to the tail of the new list
and return a pointer to the new list (even though it is currently
always the same as the input argument, because the first element in
the list is currently never deleted).
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If --no-prune is passed to one of the following commands:
git fetch --all
git fetch --multiple
git fetch --recurse-submodules
git remote update
then it must also be passed to the "fetch" subprocesses that those
commands use to do their work. Otherwise there might be a fetch.prune
or remote.<name>.prune configuration setting that causes pruning to
occur, contrary to the user's express wish.
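A minimal sketch of forwarding that choice (variable names are
hypothetical; argv_array_push() is Git's argument-list helper):

    static void add_prune_option(struct argv_array *argv, int prune_option)
    {
            /* prune_option: -1 = unspecified, 0 = --no-prune, 1 = --prune */
            if (prune_option != -1)
                    argv_array_push(argv,
                                    prune_option ? "--prune" : "--no-prune");
    }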
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The old behavior of "fetch --prune" was to prune whatever was being
fetched. In particular, "fetch --prune --tags" caused tags not only
to be fetched, but also to be pruned. This is inappropriate because
there is only one tags namespace that is shared among the local
repository and all remotes. Therefore, if the user defines a local
tag and then runs "git fetch --prune --tags", then the local tag is
deleted. Moreover, "--prune" and "--tags" can also be configured via
fetch.prune / remote.<name>.prune and remote.<name>.tagopt, making it
even less obvious that an invocation of "git fetch" could result in
tag lossage.
Since the command "git remote update" invokes "git fetch", it had the
same problem.
The command "git remote prune", on the other hand, disregarded the
setting of remote.<name>.tagopt, and so its behavior was inconsistent
with that of the other commands.
So the old behavior made it too easy to lose tags. To fix this
problem, change "fetch --prune" to prune references based only on
refspecs specified explicitly by the user, either on the command line
or via remote.<name>.fetch. Thus, tags are no longer made subject to
pruning by the --tags option or the remote.<name>.tagopt setting.
However, tags *are* still subject to pruning if they are fetched as
part of a refspec, and that is good. For example:
* On the command line,
git fetch --prune 'refs/tags/*:refs/tags/*'
causes tags, and only tags, to be fetched and pruned, and is
therefore a simple way for the user to get the equivalent of the old
behavior of "--prune --tag".
* For a remote that was configured with the "--mirror" option, the
configuration is set to include
[remote "name"]
fetch = +refs/*:refs/*
, which causes tags to be subject to pruning along with all other
references. This is the behavior that will typically be desired for
a mirror.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Previously, fetch's "--tags" option was considered equivalent to
specifying the refspec "refs/tags/*:refs/tags/*" on the command line;
in particular, it caused the remote.<name>.refspec configuration to be
ignored.
But it is not very useful to fetch tags without also fetching other
references, whereas it *is* quite useful to be able to fetch tags *in
addition to* other references. So change the semantics of this option
to do the latter.
If a user wants to fetch *only* tags, then it is still possible by
specifying an explicit refspec:
git fetch <remote> 'refs/tags/*:refs/tags/*'
Please note that the documentation prior to 1.8.0.3 was ambiguous
about this aspect of "fetch --tags" behavior. Commit
f0cb2f137c 2012-12-14 fetch --tags: clarify documentation
made the documentation match the old behavior. This commit changes
the documentation to match the new behavior.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The old code processed (tags == TAGS_SET) before adding the entries
used to opportunistically update references mentioned on the command
line. The result was that all tags were also considered candidates
for opportunistic updating.
This is harmless for two reasons: (a) because it would only add
entries if there is a configured refspec that covers tags *and* both
--tags and another refspec appear on the command-line; (b) because any
extra entries would be deleted later by the call to
ref_remove_duplicates() anyway.
But, to avoid extra work and extra memory usage, and to make the
implementation better match the intention, change the algorithm
slightly: compute the opportunistic refspecs based only on the
command-line arguments, storing the results into a separate temporary
list. Then add the tags (which have to come earlier in the list so
that they are not de-duped in favor of an opportunistic entry). Then
concatenate the temporary list onto the main list.
This change will also make later changes easier.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Reorder function definitions to avoid the need for a forward
declaration of function find_non_local_tags().
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Rename "refs" -> "refspecs" and "ref_count" -> "refspec_count" to
reduce confusion, because they describe an array of "struct refspec",
as opposed to the "struct ref" objects that are also used in this
function.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Give "update-refs" a "--stdin" option to read multiple update
requests and perform them in an all-or-none fashion.
* bk/refs-multi-update:
update-ref: add test cases covering --stdin signature
update-ref: support multiple simultaneous updates
refs: add update_refs for multiple simultaneous updates
refs: add function to repack without multiple refs
refs: factor delete_ref loose ref step into a helper
refs: factor update_ref steps into helpers
refs: report ref type from lock_any_ref_for_update
reset: rename update_refs to reset_refs
The auto-tag-following code in "git fetch" tries to reuse the same
transport twice when the serving end does not cooperate and does
not give tags that point to commits that are asked for as part of
the primary transfer. Unfortunately, the Git-aware transport helper
interface is not designed to be used more than once, hence this
does not work over smart-http transfer.
* jc/transport-do-not-use-connect-twice-in-fetch:
builtin/fetch.c: Fix a sparse warning
fetch: work around "transport-take-over" hack
fetch: refactor code that fetches leftover tags
fetch: refactor code that prepares a transport
fetch: rename file-scope global "transport" to "gtransport"
t5802: add test for connect helper
Allow fetch.prune and remote.*.prune configuration variables to be set,
and "git fetch" to behave as if "--prune" is given.
"git fetch" that honors remote.*.prune is fine, but I wonder if we
should somehow make "git push" aware of it as well. Perhaps
remote.*.prune should not be just a boolean, but a 4-way "none",
"push", "fetch", "both"?
* ms/fetch-prune-configuration:
fetch: make --prune configurable
Expose lock_ref_sha1_basic's type_p argument to callers of
lock_any_ref_for_update. Update all call sites to ignore it by passing
NULL for now.
Signed-off-by: Brad King <brad.king@kitware.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Sparse issues an "'prepare_transport' was not declared. Should it
be static?" warning. In order to suppress the warning, since this
symbol only requires file scope, we simply add the static modifier
to its declaration.
Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A Git-aware "connect" transport allows the "transport_take_over" to
redirect generic transport requests like fetch(), push_refs() and
get_refs_list() to the native Git transport handling methods. The
take-over process replaces transport->data with fake data that
these method implementations understand.
While this hack works OK for a single request, it breaks when the
transport needs to make more than one request. transport->data
that used to hold necessary information for the specific helper to
work correctly is destroyed during the take-over process.
One codepath where this matters is "git fetch" in auto-follow mode;
when it does not get all the tags that ought to point at the history
it got (which can be determined by looking at the peeled tags in the
initial advertisement) from the primary transfer, it internally
makes a second request to complete the fetch. Because the "take-over"
hack has already destroyed the data necessary to talk to the
transport helper by the time this happens, the second request cannot
make a request to the helper to make another connection to fetch
these additional tags.
Mark such a transport as "cannot_reuse", and use a separate
transport to perform the backfill fetch in order to work around
this breakage.
Note that this problem does not manifest itself when running t5802,
because our upload-pack gives you all the necessary auto-followed
tags during the primary transfer. You would need to step through
"git fetch" in a debugger, stop immediately after the primary
transfer finishes and writes these auto-followed tags, remove the
tag references and repack/prune the repository to convince the
"find-non-local-tags" procedure that the primary transfer failed to
give us all the necessary tags, and then let it continue, in order
to trigger the bug in the secondary transfer this patch fixes.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Usually the upload-pack process running on the other side will give
us all the reachable tags we need during the primary object transfer
in do_fetch(). If that does not happen (e.g. the other side may be
running a third-party implementation of upload-pack), we will run
another fetch to pick up leftover tags that we know point at the
commits reachable from our updated tips.
Separate out the code to run this second fetch into a helper
function.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Make a helper function prepare_transport() that returns a transport
to talk to a given remote.
The set_option() helper that used to always affect the file-scope
global "gtransport" now takes a transport as its parameter.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Although many functions in this file take a "struct transport" as a
parameter, "fetch_one()" assigns to the global singleton instance
which is a file-scope static, in order to allow a parameterless
signal handler unlock_pack() to access it.
Rename the variable to gtransport to make sure these uses stand out.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This task emerged from b04ba2bb (parse-options: deprecate OPT_BOOLEAN,
2011-09-27). All occurrences of the respective variables have
been reviewed and none of them relied on the counting up mechanism,
but all of them were using the variable as a true boolean.
This patch does not change semantics of any command intentionally.
Signed-off-by: Stefan Beller <stefanbeller@googlemail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Without "git fetch --prune", remote-tracking branches for a branch
the other side already has removed will stay forever. Some people
want to always run "git fetch --prune".
To accommodate users who want to either prune always or when fetching
from a particular remote, add two new configuration variables
"fetch.prune" and "remote.<name>.prune":
- "fetch.prune" allows to enable prune for all fetch operations.
- "remote.<name>.prune" allows to change the behaviour per remote.
The latter will naturally override the former, and the --[no-]prune
option from the command line will override the configured default.
Since --prune is a potentially destructive operation (Git doesn't
keep reflogs for deleted references yet), we don't want to prune
without the user's consent, so this configuration will not be on by
default.
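A hedged sketch of the resulting precedence (command line over
remote.<name>.prune over fetch.prune, defaulting to off); the function
and variable names are illustrative:

    static int should_prune(int cli_prune, int remote_prune, int fetch_prune)
    {
            if (cli_prune != -1)    /* --prune or --no-prune given */
                    return cli_prune;
            if (remote_prune != -1) /* remote.<name>.prune set */
                    return remote_prune;
            if (fetch_prune != -1)  /* fetch.prune set */
                    return fetch_prune;
            return 0;               /* default: do not prune */
    }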
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Michael Schubert <mschub@elegosoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Define memory ownership and lifetime rules for what for-each-ref
feeds to its callbacks (in short, "you do not own it, so make a
copy if you want to keep it").
* mh/reflife: (25 commits)
refs: document the lifetime of the args passed to each_ref_fn
register_ref(): make a copy of the bad reference SHA-1
exclude_existing(): set existing_refs.strdup_strings
string_list_add_refs_by_glob(): add a comment about memory management
string_list_add_one_ref(): rename first parameter to "refname"
show_head_ref(): rename first parameter to "refname"
show_head_ref(): do not shadow name of argument
add_existing(): do not retain a reference to sha1
do_fetch(): clean up existing_refs before exiting
do_fetch(): reduce scope of peer_item
object_array_entry: fix memory handling of the name field
find_first_merges(): remove unnecessary code
find_first_merges(): initialize merges variable using initializer
fsck: don't put a void*-shaped peg in a char*-shaped hole
object_array_remove_duplicates(): rewrite to reduce copying
revision: use object_array_filter() in implementation of gc_boundary()
object_array: add function object_array_filter()
revision: split some overly-long lines
cmd_diff(): make it obvious which cases are exclusive of each other
cmd_diff(): rename local variable "list" -> "entry"
...
Its lifetime is not guaranteed, so make a copy. Free the memory when
the string_list is cleared.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Do not retain references to refnames passed to the each_ref_fn
callback add_existing(), because their lifetime is not guaranteed.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since commit f269048 (fetch: opportunistically update tracking refs,
2013-05-11) we update tracking refs opportunistically when fetching
remote branches. However, if there is a configured non-pattern refspec
that does not match any of the refspecs given on the command line then a
fatal error occurs.
Fix this by setting the "missing_ok" flag when calling get_fetch_map.
Test-added-by: Jeff King <peff@peff.net>
Signed-off-by: John Keeping <john@keeping.me.uk>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we run a regular "git fetch" without arguments, we
update the tracking refs according to the configured
refspec. However, when we run "git fetch origin master" (or
"git pull origin master"), we do not look at the configured
refspecs at all, and just update FETCH_HEAD.
We miss an opportunity to update "refs/remotes/origin/master"
(or whatever the user has configured). Some users find this
confusing, because they would want to do further comparisons
against the old state of the remote master, like:
$ git pull origin master
$ git log HEAD...origin/master
In the current code, they are comparing against whatever
commit happened to be in origin/master from the last time
they did a complete "git fetch". This patch will update a
ref from the RHS of a configured refspec whenever we happen
to be fetching its LHS. That makes the case above work.
The downside is that any users who really care about whether
and when their tracking branches are updated may be
surprised.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Each "struct ref" has a boolean flag that is set by the
fetch code to determine whether the ref should be marked as
"not-for-merge" or not when we write it out to FETCH_HEAD.
It would be useful to turn this boolean into a tri-state,
with the third state meaning "do not bother writing it out
to FETCH_HEAD at all". That would let us add extra refs to
the set of refs to be stored (e.g., to store copies of
things we fetched) without impacting FETCH_HEAD.
This patch turns it into an enum that covers the tri-state
case, and hopefully makes the code more explicit and easier
to read.
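The tri-state can be expressed as an enum along these lines (the names
are close to, but not guaranteed to be, the ones that ended up in
remote.h):

    enum fetch_head_status {
            FETCH_HEAD_MERGE = -1,        /* list in FETCH_HEAD for merging */
            FETCH_HEAD_NOT_FOR_MERGE = 0, /* list, but mark not-for-merge */
            FETCH_HEAD_IGNORE = 1         /* do not write to FETCH_HEAD at all */
    };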
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Help "fetch only" repositories that do not trigger "gc --auto"
often enough.
* jk/gc-auto-after-fetch:
fetch-pack: avoid repeatedly re-scanning pack directory
fetch: run gc --auto after fetching
We generally try to run "gc --auto" after any commands that
might introduce a large number of new objects. An obvious
place to do so is after running "fetch", which may introduce
new loose objects or packs (depending on the size of the
fetch).
While an active developer repository will probably
eventually trigger a "gc --auto" on another action (e.g.,
git-rebase), there are two good reasons why it is nicer to
do it at fetch time:
1. Read-only repositories which track an upstream (e.g., a
continuous integration server which fetches and builds,
but never makes new commits) will accrue loose objects
and small packs, but never coalesce them into a more
efficient larger pack.
2. Fetching is often already perceived to be slow to the
user, since they have to wait on the network. It's much
more pleasant to include a potentially slow auto-gc as
part of the already-long network fetch than in the
middle of productive work with git-rebase or similar.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The user can do --depth=2147483647 (*) for restoring the complete
repo now. But it's hard to remember. Any other number larger than the
longest commit chain in the repository would also do, but some
guessing may be involved. Make the easy-to-remember --unshallow an
alias for --depth=2147483647.
Make upload-pack recognize this special number as infinite depth. The
effect is essentially the same as before, except that upload-pack is
more efficient because it does not have to traverse to the bottom
anymore.
The chance of a user actually wanting exactly 2147483647 commits
depth, not infinite, on a repository with a history that long, is
probably too small to consider. The client can learn to add or
subtract one commit to avoid the special treatment when that actually
happens.
(*) This is the largest positive number a 32-bit signed integer can
contain. JGit and older C Git store depth as "int" so both are OK
with this number. Dulwich does not support shallow clone.
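A minimal sketch of the client-side aliasing, with hypothetical
variable names (the actual option handling in builtin/fetch.c may
differ):

/*
 * Sketch only: map --unshallow onto the special depth. "depth" and
 * "unshallow" stand in for the parsed command-line options.
 */
static const char *effective_depth(const char *depth, int unshallow)
{
	if (!unshallow)
		return depth;		/* as given, possibly NULL */
	if (depth)
		return NULL;		/* --depth and --unshallow conflict */
	return "2147483647";		/* INT_MAX, treated as infinite */
}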
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git fetch --all", when passed "--no-tags", did not honor the
"--no-tags" option while fetching from individual remotes (the same
issue existed with "--tags", but combination "--all --tags" makes
much less sense than "--all --no-tags").
* dj/fetch-all-tags:
fetch --all: pass --tags/--no-tags through to each remote
submodule: use argv_array instead of hand-building arrays
fetch: use argv_array instead of hand-building arrays
argv-array: fix bogus cast when freeing array
argv-array: add pop function
The status report from "git fetch", when messages like 'up-to-date'
are translated, did not align the branch names well.
* nd/fetch-status-alignment:
fetch: align per-ref summary report in UTF-8 locales
fetch does printf("%-*s", width, "foo") where "foo" can be a utf-8
string, but width is in bytes, not columns. For ASCII this is fine, as
one byte takes one column. For utf-8, this may result in a misaligned
ref summary table.
Introduce a gettext_width() function that returns the string width in
columns (currently only utf-8 locales are supported). Make the code use
TRANSPORT_SUMMARY(x), which compensates the width properly in
non-English locales.
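The mismatch is easy to reproduce with plain printf; the following is a
self-contained illustration, not git code:

#include <stdio.h>
#include <string.h>

/* "%-*s" pads by bytes, not display columns, so a multi-byte UTF-8
 * string ends up visually narrower than an ASCII one that occupies
 * the same number of columns. */
int main(void)
{
	const char *ascii = "up to date";	/* 10 bytes, 10 columns */
	const char *utf8 = "đã cập nhật";	/* more bytes than columns */

	printf("[%-20s]|\n", ascii);
	printf("[%-20s]|\n", utf8);
	printf("bytes: ascii=%zu utf8=%zu\n", strlen(ascii), strlen(utf8));
	return 0;
}

The second line comes out with less visible padding because several of
its characters take two or three bytes each.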
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git fetch --all", when passed "--no-tags", did not honor the
"--no-tags" option while fetching from individual remotes (the same
issue existed with "--tags", but combination "--all --tags" makes
much less sense than "--all --no-tags").
* dj/fetch-all-tags:
fetch --all: pass --tags/--no-tags through to each remote
Use argv-array API in "git fetch" implementation.
* jk/argv-array:
submodule: use argv_array instead of hand-building arrays
fetch: use argv_array instead of hand-building arrays
argv-array: fix bogus cast when freeing array
argv-array: add pop function
Optimise the "merge-base" computation a bit, and also update its
users that do not need the full merge-base information to call a
cheaper subset.
* jc/merge-bases:
reduce_heads(): reimplement on top of remove_redundant()
merge-base: "--is-ancestor A B"
get_merge_bases_many(): walk from many tips in parallel
in_merge_bases(): use paint_down_to_common()
merge_bases_many(): split out the logic to paint history
in_merge_bases(): omit unnecessary redundant common ancestor reduction
http-push: use in_merge_bases() for fast-forward check
receive-pack: use in_merge_bases() for fast-forward check
in_merge_bases(): support only one "other" commit
When fetch is invoked with --all, we need to pass the tag-following
preference to each individual fetch; without this, we will always
auto-follow tags, preventing us from fetching the remote tags into a
remote-specific namespace, for example.
Reported-by: Oswald Buddenhagen <ossi@kde.org>
Signed-off-by: Dan Johnson <ComputerDruid@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
fetch_populated_submodules() allocates the full argv array it uses to
recurse into the submodules from the number of given options plus the six
argv values it is going to add. It then initializes the array with the
values that won't change during the iteration and copies the given options
into it. Inside the loop, the two argv values that differ for each
submodule are replaced with the currently valid ones.
However, this technique is brittle and error-prone (as the comment
explaining the magic number 6 indicates), so let's replace it with an
argv_array. Instead of replacing the argv values, push them to the
argv_array just before the run_command() call (including the option
separating them) and pop them from the argv_array right after that.
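A sketch of the resulting pattern (illustrative, not the exact
submodule.c code):

#include "git-compat-util.h"
#include "argv-array.h"
#include "run-command.h"

/*
 * Sketch: the options common to all submodules stay in the array;
 * the per-submodule values are pushed just before spawning the child
 * and popped again right after it returns.
 */
static int fetch_in_submodule(struct argv_array *argv, const char *prefix,
			      const char *path)
{
	struct child_process cp;
	int code;

	argv_array_push(argv, "--submodule-prefix");
	argv_array_push(argv, prefix);

	memset(&cp, 0, sizeof(cp));
	cp.argv = argv->argv;
	cp.git_cmd = 1;
	cp.dir = path;
	code = run_command(&cp);

	argv_array_pop(argv);
	argv_array_pop(argv);
	return code;
}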
Signed-off-by: Jens Lehmann <Jens.Lehmann@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fetch invokes itself recursively when recursing into
submodules or handling "fetch --multiple". In both cases, it
builds the child's command line by pushing options onto a
statically-sized array. In both cases, the array is
currently just big enough to handle the largest possible
case. However, this technique is brittle and error-prone, so
let's replace it with a dynamic argv_array.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the early days of its life, I planned to make it possible to compute
"is a commit contained in all of these other commits?" with this
function, but it turned out that no caller needed it.
Just make it take two commit objects and add a comment to say what
these two functions do.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
More message strings marked for i18n.
By Nguyễn Thái Ngọc Duy (10) and Jonathan Nieder (1)
* nd/i18n:
help: replace underlining "help -a" headers using hyphens with a blank line
i18n: bundle: mark strings for translation
i18n: index-pack: mark strings for translation
i18n: apply: update say_patch_name to give translators complete sentence
i18n: apply: mark strings for translation
i18n: remote: mark strings for translation
i18n: make warn_dangling_symref() automatically append \n
i18n: help: mark strings for translation
i18n: mark relative dates for translation
strbuf: convenience format functions with \n automatically appended
Makefile: feed all header files to xgettext
The report from "git fetch" said "new branch" even for a non-branch
ref.
By Marc Branchaud
* mb/fetch-call-a-non-branch-a-ref:
fetch: describe new refs based on where it came from
fetch: Give remote_ref to update_local_ref() as well
This helps remove \n from translatable strings
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git fetch" that recurses into submodules on demand did not check if
it needs to go into submodules when non-branches (most notably, tags)
are fetched.
By Jens Lehmann
* jl/maint-submodule-recurse-fetch:
submodules: recursive fetch also checks new tags for submodule commits
update_local_ref() used to say "[new branch]" when we stored a new ref
outside the refs/tags/ hierarchy, but the message is more about what we
fetched, so use the refname at the origin to make that decision.
Also, only call a new ref a "branch" if it's under refs/heads/.
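A sketch of the resulting classification, keyed off the remote refname
(illustrative only, not the actual update_local_ref() code):

#include <string.h>

/* Illustrative: decide the label from what the *remote* calls the ref. */
static const char *describe_new_ref(const char *remote_refname)
{
	if (!strncmp(remote_refname, "refs/tags/", 10))
		return "[new tag]";
	if (!strncmp(remote_refname, "refs/heads/", 11))
		return "[new branch]";
	return "[new ref]";
}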
Signed-off-by: Marc Branchaud <marcnarc@xiplink.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This way, the function can look at the remote side to adjust the
informational message it gives.
Signed-off-by: Marc Branchaud <marcnarc@xiplink.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since 88a21979c (fetch/pull: recurse into submodules when necessary) all
fetched commits are examined if they contain submodule changes (unless
configuration or command line options inhibit that). If a newly recorded
submodule commit is not present in the submodule, a fetch is run inside
it to download that commit.
Checking new refs was done in an else branch, so it wasn't executed for
tags. This normally isn't a problem because tags are only fetched together
with the branches they live on, so checking the new commits in the fetched
branches for submodule commits also processes all tags. But when a
specific tag is fetched (or the refspec contains refs/tags/), commits only
reachable by tags won't be searched for submodule commits, which is a bug.
Fix that by moving the code outside the if/else construct to handle new
tags just like any other ref. The performance impact of adding tags that
most of the time lie on a branch which is checked anyway for new submodule
commits should be minimal, because since 6859de4 (fetch: avoid quadratic loop
checking for updated submodules) all ref-tips are collected first and then
fed to a single rev-list.
Spotted-by: Jeff King <peff@peff.net>
Signed-off-by: Jens Lehmann <Jens.Lehmann@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
By default, progress output is disabled if stderr is not a terminal.
The --progress option can be used to force progress output anyway.
Conversely, --no-progress does not force progress output off. In
particular, if stderr is a terminal, progress output is still enabled.
This is unintuitive. Change --no-progress to force output off.
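A minimal sketch of the intended semantics (variable names are
illustrative):

#include <unistd.h>

/*
 * -1 = neither --progress nor --no-progress given,
 *  0 = --no-progress, 1 = --progress.
 */
static int resolve_progress(int progress_opt)
{
	if (progress_opt >= 0)
		return progress_opt;	/* either flag forces its setting */
	return isatty(2);		/* default: only if stderr is a tty */
}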
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The reason why the trailing slash is needed is obvious. refs/stash and
HEAD are not namespaces, but complete refs; do a full string comparison
on them.
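For illustration, the distinction looks like this (generic sketch, not
the actual call site):

#include <string.h>

/*
 * Illustrative only: namespaces keep their trailing slash and are
 * matched by prefix, while refs/stash and HEAD are complete refs
 * and need an exact match.
 */
static int in_heads_namespace(const char *refname)
{
	return !strncmp(refname, "refs/heads/", 11);	/* prefix match */
}

static int is_complete_special_ref(const char *refname)
{
	return !strcmp(refname, "refs/stash") ||	/* exact match */
	       !strcmp(refname, "HEAD");
}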
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The FETCH_HEAD refname is supposed to refer to the ref that was fetched
and should be merged. However all fetched refs are written to
.git/FETCH_HEAD in an arbitrary order, and resolve_ref_unsafe simply
takes the first ref as the FETCH_HEAD, which is often the wrong one,
when other branches were also fetched.
The solution is to write the for-merge ref(s) to FETCH_HEAD first.
Then, unless --append is used, the FETCH_HEAD refname behaves as intended.
If the user uses --append, they presumably are doing so in order to
preserve the old FETCH_HEAD.
While we are at it, update an old example in the read-tree documentation
that implied that each entry in FETCH_HEAD only has the object name, which
has not been true for quite a while.
[jc: adjusted tests]
Signed-off-by: Joey Hess <joey@kitenet.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* maint-1.7.7:
Git 1.7.7.5
Git 1.7.6.5
blame: don't overflow time buffer
fetch: create status table using strbuf
checkout,merge: loosen overwriting untracked file check based on info/exclude
cast variable in call to free() in builtin/diff.c and submodule.c
apply: get rid of useless x < 0 comparison on a size_t type
Conflicts:
Documentation/git.txt
GIT-VERSION-GEN
RelNotes
builtin/fetch.c
When we fetch from a remote, we print a status table like:
From url
* [new branch] foo -> origin/foo
We create this table in a static buffer using sprintf. If
the remote refnames are long, they can overflow this buffer
and smash the stack.
Instead, let's use a strbuf to build the string.
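A sketch of the strbuf-based construction (not the exact
builtin/fetch.c code):

#include "git-compat-util.h"
#include "strbuf.h"

/*
 * Sketch: let the strbuf grow with the refnames instead of writing
 * into a fixed-size stack buffer.
 */
static void show_new_ref(const char *remote_ref, const char *local_ref)
{
	struct strbuf note = STRBUF_INIT;

	strbuf_addf(&note, " * [new branch] %s -> %s\n", remote_ref, local_ref);
	fputs(note.buf, stderr);
	strbuf_release(&note);
}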
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* jc/pull-signed-tag:
commit-tree: teach -m/-F options to read logs from elsewhere
commit-tree: update the command line parsing
commit: teach --amend to carry forward extra headers
merge: force edit and no-ff mode when merging a tag object
commit: copy merged signed tags to headers of merge commit
merge: record tag objects without peeling in MERGE_HEAD
merge: make usage of commit->util more extensible
fmt-merge-msg: Add contents of merged tag in the merge message
fmt-merge-msg: package options into a structure
fmt-merge-msg: avoid early returns
refs DWIMmery: use the same rule for both "git fetch" and others
fetch: allow "git fetch $there v1.0" to fetch a tag
merge: notice local merging of tags and keep it unwrapped
fetch: do not store peeled tag object names in FETCH_HEAD
Split GPG interface into its own helper library
Conflicts:
builtin/fmt-merge-msg.c
builtin/merge.c
We do not want to record tags as parents of a merge when the user does
"git pull $there tag v1.0" to merge a tagged commit, but that is not a good
enough excuse to peel the tag down to a commit when storing it in FETCH_HEAD.
The caller of the underlying "git fetch $there tag v1.0" may have other uses
in mind for the information contained in the v1.0 tag.
[jc: the test adjustment is to update for the new expectation]
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* cn/fetch-prune:
fetch: treat --tags like refs/tags/*:refs/tags/* when pruning
fetch: honor the user-provided refspecs when pruning refs
remote: separate out the remote_find_tracking logic into query_refspecs
t5510: add tests for fetch --prune
fetch: free all the additional refspecs
Conflicts:
remote.c
If --tags is specified, add that refspec to the list given to
prune_refs so it knows to treat it as a filter on which refs it
should consider for pruning. This way
git fetch --prune --tags origin
only prunes tags and doesn't delete the branch refs.
Signed-off-by: Carlos Martín Nieto <cmn@elego.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If the user gave us refspecs on the command line, we should use those
when deciding whether to prune a ref instead of relying on the
refspecs in the config.
Previously, running
git fetch --prune origin refs/heads/master:refs/remotes/origin/master
would delete every other ref under the origin namespace because we
were using the refspec to filter the available refs but using the
configured refspec to figure out if a ref had been deleted on the
remote. This is clearly the wrong thing to do.
Change prune_refs and get_stale_heads to simply accept a list of
references and a list of refspecs. The caller of either function needs
to decide what refspecs should be used to decide whether a ref is
stale.
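A sketch of the caller-side decision, with illustrative names only (the
real prune_refs() signature and the surrounding types differ in
detail):

/*
 * Sketch only: use the command-line refspecs for pruning when they
 * were given, and fall back to the configured ones otherwise.
 */
static void maybe_prune(struct refspec *cmdline, int cmdline_nr,
			struct refspec *configured, int configured_nr,
			struct ref *ref_map)
{
	if (cmdline_nr)
		prune_refs(cmdline, cmdline_nr, ref_map);
	else
		prune_refs(configured, configured_nr, ref_map);
}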
Signed-off-by: Carlos Martín Nieto <cmn@elego.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Close FETCH_HEAD and release the string url even if we have to leave the
function store_updated_refs() early.
Reported-by: Chris Wilson <cwilson@vigilantsw.com>
Helped-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* jc/receive-verify:
receive-pack: check connectivity before concluding "git push"
check_everything_connected(): libify
check_everything_connected(): refactor to use an iterator
fetch: verify we have everything we need before updating our ref
Conflicts:
builtin/fetch.c
* jc/fetch-verify:
fetch: verify we have everything we need before updating our ref
rev-list --verify-object
list-objects: pass callback data to show_objects()
Extract the helper function and the type definition of the iterator
function it uses out of builtin/fetch.c into a separate source and a
header file.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We will be using the same "rev-list --verify-objects" logic to add the
same sanity check to the receiving end of "git push", but the
list of commits that are checked comes from a structure with a different
shape over there.
Update the function to take an iterator to make it easier to reuse it in
different contexts.
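One plausible shape for such an iterator-style interface (a sketch; the
actual declarations may differ):

/*
 * Sketch: the callback yields the next object name to verify and
 * returns non-zero when the list is exhausted.
 */
typedef int (*sha1_iterate_fn)(void *cb_data, unsigned char sha1[20]);

int check_everything_connected(sha1_iterate_fn fn, int quiet, void *cb_data);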
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The "git fetch" command works in two phases. The remote side tells us what
objects are at the tip of the refs we are fetching from, and transfers the
objects missing from our side. After storing the objects in our repository,
we update our remote tracking branches to point at the updated tips of the
refs.
A broken or malicious remote side could send perfectly well-formed pack
data during the object transfer phase, but there is no guarantee that the
given data actually fills the gap between the objects we originally had and
the refs we are updating to.
Although this kind of breakage can be caught by running fsck after a
fetch, it is much cheaper to verify that everything that is reachable from
the tips of the refs we fetched is indeed fully connected to the tips of
our current set of refs before we update them.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It makes no sense to do the (possibly very expensive) call to "rev-list
<new-ref-sha1> --not --all" in check_for_new_submodule_commits() when
there aren't any submodules configured.
Return from check_for_new_submodule_commits() early when no name <-> path
mappings for submodules are found in the configuration. To make that work,
reading the configuration had to be moved further up in cmd_fetch(), as
doing that after the actual fetch of the superproject was too late.
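A sketch of the resulting guard; have_submodule_config() is a
hypothetical helper standing in for the real check of the parsed
name <-> path mappings:

/*
 * Sketch only: have_submodule_config() is a hypothetical stand-in for
 * the check whether any submodule name <-> path mappings were read.
 */
void check_for_new_submodule_commits(unsigned char new_sha1[20])
{
	if (!have_submodule_config())
		return;	/* no submodules configured: skip the rev-list walk */

	/* ... record new_sha1 for the later "rev-list --not --all" walk ... */
}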
Reported-by: Martin Fick <mfick@codeaurora.org>
Signed-off-by: Jens Lehmann <Jens.Lehmann@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The "git fetch" command works in two phases. The remote side tells us what
objects are at the tip of the refs we are fetching from, and transfers the
objects missing from our side. After storing the objects in our repository,
we update our remote tracking branches to point at the updated tips of the
refs.
A broken or malicious remote side could send a perfectly well-formed pack
data during the object transfer phase, but there is no guarantee that the
given data actually fill the gap between the objects we originally had and
the refs we are updating to.
Although this kind of breakage can be caught by running fsck after a
fetch, it is much cheaper to verify that everything that is reachable from
the tips of the refs we fetched are indeed fully connected to the tips of
our current set of refs before we update them.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* jl/submodule-fetch-on-demand:
fetch/pull: Describe --recurse-submodule restrictions in the BUGS section
submodule update: Don't fetch when the submodule commit is already present
fetch/pull: Don't recurse into a submodule when commits are already present
Submodules: Add 'on-demand' value for the 'fetchRecurseSubmodule' option
config: teach the fetch.recurseSubmodules option the 'on-demand' value
fetch/pull: Add the 'on-demand' value to the --recurse-submodules option
fetch/pull: recurse into submodules when necessary
Conflicts:
builtin/fetch.c
submodule.c