As sequencer_add_exec_commands() is only needed inside
rebase--interactive.c for `rebase -p', it is moved there from
sequencer.c.
The parameter r (repository) is dropped along the way.
Signed-off-by: Alban Gruin <alban.gruin@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
complete_action() used functions that read the todo-list file, made some
changes to it, and wrote it back to the disk.
The previous commits were dedicated to separating the part that deals
with the file from the actual logic of these functions. Now that this
is done, we can call the "logic" functions directly and avoid needless
file accesses.
The parsing of the list has to be done by the caller. If the buffer of
the todo list provided by the caller is empty, a `noop' command is
directly added to the todo list, without touching the buffer.
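A rough sketch of that special case (field and helper names follow
sequencer.c conventions, but treat the details as illustrative rather
than the verbatim implementation):

    if (!todo_list->buf.len) {
        struct todo_item *item = append_new_todo(todo_list);
        item->command = TODO_NOOP;
        item->commit = NULL;
        item->arg_len = item->arg_offset = 0;
    }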
Signed-off-by: Alban Gruin <alban.gruin@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This makes sequencer_make_script() write its script to a strbuf (i.e.
the buffer of a todo_list) instead of a FILE. This reduces the number
of reads and writes made by interactive rebase.
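Illustratively, emitting one instruction changes from a stdio write to
an in-memory append (the variable names here are assumptions):

    /* before (sketch): write each instruction straight to a FILE */
    fprintf(out, "%s %s %s\n", insn,
            oid_to_hex(&commit->object.oid), oneline);

    /* after (sketch): append to the todo_list's buffer instead */
    strbuf_addf(buf, "%s %s %s\n", insn,
                oid_to_hex(&commit->object.oid), oneline);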
Signed-off-by: Alban Gruin <alban.gruin@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This refactors rearrange_squash() to work on a todo_list to avoid
redundant reads and writes. The function is renamed
todo_list_rearrange_squash().
The old version created a new buffer, which was directly written to the
disk. This new version creates a new item list by just copying items
from the old item list, without creating a new buffer. This eliminates
the need to reparse the todo list, but this also means its buffer cannot
be directly written to the disk.
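Conceptually, the rearrangement now copies existing items into a new
array instead of rewriting text (a sketch; `rearranged' stands in for
the computed order, and the real function also has to keep
fixup!/squash! items together with their targets):

    struct todo_item *items = NULL;
    int i, nr = 0, alloc = 0;

    for (i = 0; i < todo_list->nr; i++) {
        ALLOC_GROW(items, nr + 1, alloc);
        items[nr++] = todo_list->items[rearranged[i]]; /* plain copy */
    }
    free(todo_list->items);
    todo_list->items = items;
    todo_list->nr = nr;
    todo_list->alloc = alloc;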
As rebase -p still needs to operate on the todo list on disk, a new
function, rearrange_squash_in_todo_file(), is introduced.
complete_action() still uses rearrange_squash_in_todo_file() for now.
This will be changed in a future commit.
Signed-off-by: Alban Gruin <alban.gruin@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This refactors sequencer_add_exec_commands() to work on a todo_list to
avoid redundant reads and writes to the disk.
Instead of inserting the `exec' commands between the other commands and
re-parsing the buffer at the end, they are appended to the buffer once,
and a new list of items is created. Items from the old list are copied
across and new `exec' items are appended when necessary. This
eliminates the need to reparse the buffer, but this also means we have
to use todo_list_write_to_disk() to write the file.
todo_list_add_exec_commands() and sequencer_add_exec_commands() are
modified to take a string list instead of a single string -- one item
for each command. This makes it easier to insert a new `exec' item into
the todo list for each command to execute.
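The appending step might look roughly like this (a sketch, assuming the
usual strbuf and string_list helpers):

    struct strbuf *buf = &todo_list->buf;
    size_t base_offset = buf->len;
    struct string_list_item *item;

    /* add every `exec' line to the end of the buffer exactly once */
    for_each_string_list_item(item, commands)
        strbuf_addf(buf, "exec %s\n", item->string);

    /*
     * Then build a new items array: copy the old items across and
     * insert TODO_EXEC items whose arguments point into the text
     * appended above, starting at base_offset.
     */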
sequencer_add_exec_commands() still reads the todo list from the disk,
as it is needed by rebase -p.
complete_action() still uses sequencer_add_exec_commands() for now.
This will be changed in a future commit.
Signed-off-by: Alban Gruin <alban.gruin@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Our compat/bswap.h lacks the usual preprocessor guards against multiple
inclusion. This usually isn't an issue since it only gets included from
git-compat-util.h, which has its own guards. But it would produce
redeclaration errors if any file included it separately.
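The usual guard pattern is all that is needed (the macro name here is
just an example):

    #ifndef COMPAT_BSWAP_H
    #define COMPAT_BSWAP_H

    /* ... existing contents of compat/bswap.h ... */

    #endif /* COMPAT_BSWAP_H */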
Our hdr-check target would complain about this, except that it currently
skips items in compat/ entirely.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If the GCRYPT_SHA256 build variable is not set, then the 'hdr-check'
target complains about the missing <gcrypt.h> header file. Add the
'sha256/gcrypt.h' header file to the exception list, if the build
variable is not defined. While here, replace the 'xdiff%' filter
pattern with 'xdiff/%' (and similarly for the compat pattern) since
the original pattern inadvertently excluded the 'xdiff-interface.h'
header.
Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
git-reset.txt was missing an "a" and used the abbreviation "wrt". Add
the missing "a" for correctness and replace "wrt" with "with respect
to" so that the documentation is not so cryptic.
Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A sentence ending in "x." at the beginning of a line misleads
Asciidoctor into interpreting it as the start of a numbered sub-list.
Signed-off-by: Jean-Noël Avila <jn.avila@free.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The --connectivity-only option avoids opening every object, and instead
just marks reachable objects with a flag and compares this to the set
of all objects. This strategy is discussed in more detail in 3e3f8bd608
(fsck: prepare dummy objects for --connectivity-check, 2017-01-17).
This means that we report _every_ unreachable object as dangling.
Whereas in a full fsck, we'd have actually opened and parsed each of
those unreachable objects, marking their child objects with the USED
flag, to mean "this was mentioned by another object". And thus we can
report only the tip of an unreachable segment of the object graph as
dangling.
You can see this difference with a trivial example:
    tree=$(git hash-object -t tree -w /dev/null)
    one=$(echo one | git commit-tree $tree)
    two=$(echo two | git commit-tree -p $one $tree)
Running `git fsck` will report only $two as dangling, but with
--connectivity-only, both commits (and the tree) are reported. Likewise,
using --lost-found would write all three objects.
We can make --connectivity-only work like the normal case by taking a
separate pass over the unreachable objects, parsing them and marking
objects they refer to as USED. That still avoids parsing any blobs,
though we do pay the cost to access any unreachable commits and trees
(which may or may not be noticeable, depending on how many you have).
If neither --dangling nor --lost-found is in effect, then we can skip
this step entirely, just like we do now. That makes "--connectivity-only
--no-dangling" just as fast as the current "--connectivity-only". I.e.,
we do the correct thing always, but you can still tweak the options to
make it faster if you don't care about dangling objects.
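In outline, the extra pass could look like this (a sketch; the callback
name and iteration helpers are assumptions, not the exact
implementation):

    static int mark_unreachable_referents(const struct object_id *oid,
                                          void *data)
    {
        struct object *obj = lookup_unreachable(oid); /* assumed helper */

        if (!obj || obj->type == OBJ_BLOB)
            return 0; /* never open blobs */
        if (parse_object(the_repository, oid))
            mark_children_used(obj); /* assumed: sets USED on children */
        return 0;
    }

    /* skip the pass entirely unless its output would be shown */
    if (show_dangling || write_lost_and_found)
        for_each_unreachable_object(mark_unreachable_referents, NULL);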
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
On reading this again, there are two things that were not immediately
clear to me:
  - we do still check links to blobs, even though we don't open the
    blobs themselves

  - we do not do the normal fsck checks, even for non-blob objects we
    do open
Let's reword it to make these points a little more clear.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This fixes a regression in 7c0fe330d5 (rev-list: handle missing tree
objects properly, 2018-10-05) where rev-list will now complain about the
empty tree when it doesn't physically exist on disk.
Before that commit, we relied on the traversal code in list-objects.c to
walk through the trees. Since it uses parse_tree(), we'd do a normal
object lookup that includes looking in the set of "cached" objects
(which is where our magic internal empty-tree kicks in).
After that commit, we instead tell list-objects.c not to die on any
missing trees, and we check them ourselves using has_object_file(). But
that function uses OBJECT_INFO_SKIP_CACHED, which means we won't use our
internal empty tree.
This normally wouldn't come up. For most operations, Git will try to
write out the empty tree object as it would any other object. And
pack-objects in a push or fetch will send the empty tree (even if it's
virtual on the sending side). However, there are cases where this can
matter. One I found in the wild:
  1. The root tree of a commit became empty by deleting all files,
     without using an index. In this case it was done using libgit2's
     tree builder API, but as the included test shows, it can easily be
     done with regular git using hash-object.

     The resulting repo works OK, as we'd avoid walking over our own
     reachable commits for a connectivity check.

  2. Cloning with --reference pointing to the repository from (1) can
     trigger the problem, because we tell the other side we already have
     that commit (and hence the empty tree), but then walk over it
     during the connectivity check (where we complain about it missing).
Arguably the workflow in step (1) should be more careful about writing
the empty tree object if we're referencing it. But this workflow did
work prior to 7c0fe330d5, so let's restore it.
This patch makes the minimal fix, which is to swap in a direct call to
oid_object_info_extended(), minus the SKIP_CACHED flag, in place of
has_object_file(). This is all that has_object_file() is doing under
the hood. And there's little danger of unrelated fallout from other
unexpected "cached" objects, since there's only one call site that
feeds such a cached object, and it's in git-blame.
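The change amounts to something like the following in rev-list's
existence check (a sketch; report_missing() is a stand-in for the
surrounding error handling):

    /*
     * before: has_object_file() implies OBJECT_INFO_SKIP_CACHED,
     * which misses the magic internal empty tree
     */
    if (!has_object_file(&obj->oid))
        return report_missing(obj);

    /* after: the same lookup, minus SKIP_CACHED */
    if (oid_object_info_extended(the_repository, &obj->oid, NULL, 0) < 0)
        return report_missing(obj);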
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Just like 47abd85ba0 (fetch: Strip usernames from url's before storing
them, 2009-04-17) and later 882d49ca5c (push: anonymize URL in status
output, 2016-07-13), this change anonymizes URLs (read: strips them of
user names and especially passwords) in user-facing error messages and
warnings.
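The pattern is simple enough (transport_anonymize_url() is the helper
introduced by those earlier commits; the message text here is
illustrative):

    char *anon_url = transport_anonymize_url(url);

    warning(_("failed to fetch from %s"), anon_url);
    free(anon_url);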
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In d85b0dff72 (Makefile: use `find` to determine static header
dependencies, 2014-08-25), we switched from a static list of header
files to a dynamically-generated one, asking `find` to enumerate them.
Back in those days, we did not use `$(LIB_H)` by default, and many a
`make` implementation seems smart enough not to run that `find` command
in that case, so it was deemed okay to run `find` for special targets
requiring this macro.
However, as of ebb7baf02f (Makefile: add a hdr-check target,
2018-09-19), $(LIB_H) is part of a global rule and therefore must be
expanded. Meaning: this `find` command has to be run upon every
`make` invocation. In the presence of many a worktree, this can tax the
developers' patience quite a bit.
Even in the absence of worktrees or other untracked files and
directories, the cost of I/O to generate that list of header files is
simply a lot larger than a simple `git ls-files` call.
Therefore, just like in 335339758c (Makefile: ask "ls-files" to list
source files if available, 2011-10-18), we now prefer `git ls-files`
over `find` for enumerating the header files, falling back to the
latter if the former fails (which would be the case e.g. in a worktree
that was extracted from a source .tar file rather than from a clone of
Git's sources).
This has one notable consequence: we no longer include `command-list.h`
in `LIB_H`, as it is a generated file, not a tracked one, but that is
easily worked around. Of the three sites that use `LIB_H`, two
(`LOCALIZED_C` and `CHK_HDRS`) already handle generated headers
separately. In the third, the computed-dependency fallback, we can just
add in a reference to $(GENERATED_H).
Likewise, we no longer include not-yet-tracked header files in `LIB_H`.
Given the speed improvements, these consequences seem a comparably small
price to pay.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Acked-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The default SIGPIPE behavior can be useful for a command that generates
a lot of output: if the receiver of our output goes away, we'll be
notified asynchronously to stop generating it (typically by killing the
program).
But for a command like fetch, which is primarily concerned with
receiving data and writing it to disk, an unexpected SIGPIPE can be
awkward. We're already checking the return value of all of our write()
calls, and dying due to the signal takes away our chance to gracefully
handle the error.
On Linux, we wouldn't generally see SIGPIPE at all during fetch. If the
other side of the network connection hangs up, we'll see ECONNRESET. But
on OS X, we get a SIGPIPE, and the process is killed. This causes t5570
to racily fail, as we sometimes die by signal (instead of the expected
die() call) when the server side hangs up.
Let's ignore SIGPIPE during the network portion of the fetch, which will
cause our write() to return EPIPE, giving us consistent behavior across
platforms.
This fixes the test flakiness, but note that it stops short of fixing
the larger problem. The server side hit a fatal error, sent us an "ERR"
packet, and then hung up. We notice the failure because we're trying to
write to a closed socket. But by dying immediately, we never actually
read the ERR packet and report its content to the user. This is a (racy)
problem on all platforms. So this patch lays the groundwork from which
that problem might be fixed consistently, but it doesn't actually fix
it.
Note the placement of the SIGPIPE handling. The absolute minimal change
would be to ignore SIGPIPE only when we're writing. But twiddling the
signal handler for each write call is inefficient and a maintenance
burden. On the opposite end of the spectrum, we could simply declare
that fetch does not need SIGPIPE handling, since it doesn't generate a
lot of output, and we could just ignore it at the start of cmd_fetch().
This patch takes a middle ground. It ignores SIGPIPE during the network
operation (which is admittedly most of the program, since the actual
network operations are all done under the hood by the transport code).
So it's still pretty coarse.
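Concretely, the network portion is bracketed with git's sigchain
helpers; a sketch of the shape of the change in cmd_fetch():

    sigchain_push(SIGPIPE, SIG_IGN); /* write() now fails with EPIPE */

    /* ... transport setup and the actual network operation ... */

    sigchain_pop(SIGPIPE);           /* restore previous behavior */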
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The write_or_die() function has one quirk that a caller might not
expect: when it sees EPIPE from the write() call, it translates that
into a death by SIGPIPE. This doesn't change the overall behavior (the
program exits either way), but it does potentially confuse test scripts
looking for a non-signal exit code.
Let's switch away from using write_or_die() in a few code paths, which
will give us more consistent exit codes. It also gives us the
opportunity to write more descriptive error messages, since we have
context that write_or_die() does not.
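The replacement pattern looks like this (the message text is
illustrative):

    /* before: EPIPE here is translated into death by SIGPIPE */
    write_or_die(fd, buf->buf, buf->len);

    /* after: a plain error exit, with a more descriptive message */
    if (write_in_full(fd, buf->buf, buf->len) < 0)
        die_errno(_("unable to write to remote"));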
Note that this won't do much by itself, since we'd typically be killed
by SIGPIPE before write_or_die() even gets a chance to do its thing.
That will be addressed in the next patch.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Technically, the scripted version set ORIG_HEAD in only two spots
(which really could have been one: it called `git checkout $onto^0` to
start the rebase, and again when it could take a shortcut; in both
cases it then called `git update-ref $orig_head`).
Practically, it *implicitly* reset ORIG_HEAD whenever `git reset --hard`
was called.
However, what we really want is that it is set exactly once, at the
beginning of the rebase.
So let's do that.
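In other words, the built-in rebase should record the pre-rebase tip up
front, along the lines of this sketch (not the literal patch):

    /* once, before anything resets the working tree */
    update_ref("rebase", "ORIG_HEAD", &options.orig_head, NULL, 0,
               UPDATE_REFS_MSG_ON_ERR);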
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The ORIG_HEAD pseudo ref is supposed to refer to the original,
pre-rebase state after a successful rebase. Let's add a regression test
to demonstrate that this regressed: with
GIT_TEST_REBASE_USE_BUILTIN=false, this test case passes; with
GIT_TEST_REBASE_USE_BUILTIN=true (or unset), it fails.
Reported by Nazri Ramliy.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
By mistake, we used the reflog intended for ORIG_HEAD.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the case that the rebase boils down to a fast-forward, the built-in
rebase reset the working tree twice: once to start the rebase at
`onto`; then, upon realizing that the original (pre-rebase) HEAD was an
ancestor and that we had basically already fast-forwarded to the
post-rebase HEAD, `reset_head()` was called to update the original ref
and to point HEAD back to it.
That second `reset_head()` call does not need to touch the working tree,
though, as it does not change the actual tip commit (and therefore the
working tree should stay unchanged anyway): only the ref needs to be
updated (because the rebase detached the HEAD, and we want to go back to
the branch on which the rebase was started).
But that second `reset_head()` was called without the flag to leave the
working tree alone (the reason: when that call was introduced, that flag
was not yet even thought of). Let's avoid that unnecessary work by
passing that flag.
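So the second call now asks reset_head() to update refs only; a sketch,
where the flag name is an assumption modeled on builtin/rebase.c:

    /* update the ref and reattach HEAD, but leave the worktree alone */
    reset_head(&options.orig_head, "checkout", options.head_name,
               RESET_HEAD_REFS_ONLY, "HEAD", msg.buf);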
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The --stress option currently accepts an argument, but it is confusing
to at least this user that the argument does not define the maximal
number of stress iterations, but instead the number of jobs to run in
parallel per stress iteration.
Let's introduce a separate option for that, whose name makes it more
obvious what it is about, and let --stress=<N> error out with a helpful
suggestion about the two options that could possibly have been meant.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It does not make much sense that running a test with
--stress-limit=<N> but without --stress seemingly ignores the former
option, because no stress testing is performed at all. Let
--stress-limit=<N> imply --stress.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When transmitting and receiving POSTs for protocol v0 and v1,
remote-curl uses post_rpc() (and associated functions), but when doing
the same for protocol v2, it uses a separate set of functions
(proxy_rpc() and others). Besides duplication of code, this has caused
at least one bug: the auth retry mechanism that was implemented in v0/v1
was not implemented in v2.
To fix this issue and avoid it in the future, make remote-curl also use
post_rpc() when handling protocol v2. Because line lengths are written
to the HTTP request in protocol v2 (unlike in protocol v0/v1), this
necessitates changes in post_rpc() and some of the functions it uses;
perform these changes too.
A test has been included to ensure that the code for both the unchunked
and chunked variants of the HTTP request is exercised.
Note: stateless_connect() has been updated to use the lower-level packet
reading functions instead of struct packet_reader. The low-level control
is necessary here because we cannot change the destination buffer of
struct packet_reader while it is being used; struct packet_reader has a
peeking mechanism which relies on the destination buffer being present
in between a peek and a read.
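A sketch of the lower-level style (signature and options as in
pkt-line.h of this era; treat the details as illustrative):

    /*
     * Read each packet directly into a caller-owned buffer, which can
     * then be swapped out between reads -- something struct
     * packet_reader's peeking mechanism would not allow.
     */
    char packet[LARGE_PACKET_MAX];
    int len = packet_read(fd, NULL, NULL, packet, sizeof(packet),
                          PACKET_READ_GENTLE_ON_EOF);
    if (len < 0)
        return 0; /* EOF */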
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
After we set up a `struct repository_format`, it owns various pieces of
allocated memory. We then either use those members, because we decide we
want to use the "candidate" repository format, or we discard the
candidate / scratch space. In the first case, we transfer ownership of
the memory to a few global variables. In the latter case, we just
silently drop the struct and end up leaking memory.
Introduce an initialization macro `REPOSITORY_FORMAT_INIT` and a
function `clear_repository_format()`, to be used on each side of
`read_repository_format()`. To have a clear and simple memory ownership,
let all users of `struct repository_format` duplicate the strings that
they take from it, rather than stealing the pointers.
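Usage then follows a simple initialize/read/use/clear lifecycle; a
sketch:

    struct repository_format candidate = REPOSITORY_FORMAT_INIT;

    if (read_repository_format(&candidate, path) >= 0 &&
        candidate.work_tree)
        /* keep what we need by duplicating, not by stealing */
        work_tree = xstrdup(candidate.work_tree);
    clear_repository_format(&candidate);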
Call `clear_...()` at the start of `read_...()` instead of just zeroing
the struct, since we sometimes enter the function multiple times. Thus,
it is important to initialize the struct before calling `read_...()`, so
document that. It's also important because we might not even call
`read_...()` before we call `clear_...()`, see, e.g., builtin/init-db.c.
Teach `read_...()` to clear the struct on error, so that it is reset to
a safe state, and document this. (In `setup_git_directory_gently()`, we
look at `repo_fmt.hash_algo` even if `repo_fmt.version` is -1, which we
weren't actually supposed to do per the API. After this commit, that's
ok.)
We inherit the existing code's combining "error" and "no version found".
Both are signalled through `version == -1` and now both cause us to
clear any partial configuration we have picked up. For "extensions.*",
that's fine, since they require a positive version number. For
"core.bare" and "core.worktree", we're already verifying that we have a
non-negative version number before using them.
Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since Travis did not support Windows (and now supports only very
limited Windows jobs, too limited for our use because the test suite
would time out *all* the time), we added a hack where a Travis job
would trigger an
Azure Pipeline (which back then was still called VSTS Build), wait for
it to finish (or time out), and download the log (if available).
Needless to say, it was a horrible hack, necessitated by a bad
situation.
Nowadays, however, we have Azure Pipelines support, and do not need that
hack anymore. So let's retire it.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>