When we are doing a destructive operation like "git prune",
we want to be extra careful that the set of reachable tips
we compute is valid. If there is any corruption or oddity,
we are better off aborting the operation and letting the
user figure things out rather than plowing ahead and
possibly deleting some data that cannot be recovered.
The tests here include:
1. Pruning objects mentioned only by refs with invalid
names. This used to abort prior to d0f810f (refs.c:
allow listing and deleting badly named refs,
2014-09-03), but since then we silently ignore the tip.
Likewise, we test repacking that can drop objects
(either "-ad", which drops anything unreachable,
or "-Ad --unpack-unreachable=<time>", which tries to
optimize out a loose object write that would be
directly pruned).
2. Pruning objects when some refs point to missing
objects. We don't know whether any dangling objects
would have been reachable from the missing objects. We
are better off keeping them around, as they are better than
nothing for helping the user recover history.
3. Packed refs that point to missing objects can sometimes
be dropped. By itself, this is more of an annoyance
(you do not have the object anyway; even if you can
recover it from elsewhere, all you are losing is a
placeholder for your state at the time of corruption).
But coupled with (2), if we drop the ref and then go
on to prune, we may lose unrecoverable objects.
Note that we use test_might_fail for some of the operations.
In some cases, it would be appropriate to abort the
operation, and in others, it might be acceptable to continue
while taking the information into account. The tests don't
care either way, and check only for data loss.
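A minimal sketch in the spirit of case (2), assuming a fresh test
repository (this is illustrative, not one of the actual tests added
here):

    test_expect_success 'missing ref tip does not cause loss of other objects' '
        test_commit base &&
        git commit --allow-empty -m tip &&
        tip=$(git rev-parse HEAD) &&
        git update-ref refs/heads/broken $tip &&
        git reset --hard HEAD^ &&
        # simulate corruption: the tip of refs/heads/broken goes missing
        rm .git/objects/$(echo $tip | sed "s|^..|&/|") &&
        blob=$(echo dangling | git hash-object -w --stdin) &&
        test_might_fail git prune --expire=now &&
        git cat-file -e $blob
    '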
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The different index versions have different sha-1 checksums. Those
checksums are checked in t1700, which makes it fail when the test suite
is run with TEST_GIT_INDEX_VERSION=4. Fix it.
Signed-off-by: Thomas Gummerer <t.gummerer@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
All of these cases are moderate, since they would most probably not
lead to missed test failures: either the test would fail anyway, or
only an rm in test_when_finished would fail.
Signed-off-by: Michael J Gruber <git@drmicha.warpmail.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This test is special for several reasons:
It ends with a "true" statement, which should be a no-op.
It is not, because the &&-chain is broken right before it.
Also, looking at what the test intended to test according to
7f578c5 (git-svn: --follow-parent now works on sub-directories of larger
branches, 2007-01-24),
it is not clear how it would achieve that with the given steps.
Amend the test to include the second svn id to be tested for, and
change the tested refs to the ones which are to be expected, and which
make the test pass.
Signed-off-by: Michael J Gruber <git@drmicha.warpmail.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This use of "||" fools --chain-lint into thinking the
&&-chain is broken (and indeed, it is somewhat broken; a
failure of update-index in these tests would show the patch
file, even if we never got to the part of the test where we
fed the patch to git-apply).
The extra blocks were there to include more debugging
output, but it hardly seems worth it; the user should know
which command failed (because git-apply will produce error
messages) and can look in the trash directory themselves.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The ":" noop command always returns true, so it is fine to
include these lines in an &&-chain (and it appeases
--chain-lint).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This test uses single quotes inside the single-quoted test
snippet, which effectively makes the contents unquoted.
Since they don't need to be quoted anyway, this isn't a problem,
but let's switch them to double-quotes to make it more
obviously correct.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Some of the symlink tests check an either-or case using the
"||". This is not wrong, but fools --chain-lint into
thinking the &&-chain is broken (in fact, there is no &&
chain here).
We can solve this by wrapping the "||" inside a {} block.
This is a bit more verbose, but this construct is rare, and
the {} block helps call attention to it.
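For example, an either-or check of this sort can stay inside the
chain like this (the file and surrounding names are illustrative):

    { test -h frotz || test -f frotz; } &&
    test_cmp expect actual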
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The confirmation tests in t9001 all save the value of
sendemail.confirm, do something to it, then restore it at
the end, in a way that breaks the &&-chain (they are not
wrong, because they save the $? value, but it fools
--chain-lint).
Instead, they can all use test_when_finished, and we can
even make the code simpler by factoring out the shared
lines.
Note that we can _almost_ use test_config here, except that:
1. We do not restore the config with test_unconfig, but by
setting it back to some prior value.
2. We are not always setting a config variable. Sometimes
the change to be undone is unsetting it entirely.
We could teach test_config to handle these cases, but it's
not worth the complexity for a single call-site.
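The save-and-restore pattern, for the case where the variable is
currently set, looks roughly like this (a sketch; as noted above, the
unset case needs slightly different handling):

    saved=$(git config --get sendemail.confirm) &&
    test_when_finished "git config sendemail.confirm $saved" &&
    git config sendemail.confirm auto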
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We can use test_must_fail and test_path_* to avoid some
hand-rolled if statements. This makes the code shorter, and
makes it more obvious when we are breaking the &&-chain.
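For example, a hand-rolled "if" around an expected failure plus file
checks becomes (names illustrative):

    test_must_fail git merge other &&
    test_path_is_file filea &&
    test_path_is_missing fileb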
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
These say roughly the same thing as the hand-rolled
messages. We do lose the "merge did not complete" debug
message, but merge and write-tree are perfectly capable of
writing useful error messages when they fail.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This test contains a lot of hand-rolled messages to show
when the test fails. We can omit most of these by using
"verbose" and "test_must_fail". A few of them are for
update-index, but we can assume it produces reasonable error
messages when it fails.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We can get rid of a lot of hand-rolled error messages by
using test_must_fail and test_expect_code. The existing code
was careful to use "|| return 1" when breaking the
&&-chain, but it did fool --chain-lint; the new code is more
idiomatic.
We also add some uses of test_when_finished, which is less
cryptic and more robust than putting code at the end of a
test. In two cases we run "git bisect reset" from a
subshell, which is a problem for test_when_finished (it
would not run). However, in both of these cases, we are
performing the tests in one-off sub-repos, so we do not need
to clean up at all (and in fact it is nicer not to if the
user wants to inspect the trash directory after a failure).
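The cleanup then becomes a one-liner at the top of the test, for
example (revisions illustrative):

    test_when_finished "git bisect reset" &&
    git bisect start HEAD HEAD~3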
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This script misses a trivial &&-chain in one of its tests,
but it also has a weird reverse: it includes an &&-chain
outside of any test_expect block! This "cat" should never
fail, but if it did, we would not notice, as it would cause
us to skip the follow-on test entirely (which does not
appear intentional; there are many later tests which rely on
this cat).
Let's instead move the setup into its own test_expect_success
block, which is the standard practice nowadays.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
One of these breakages is in setup, but one is more severe
and may miss a real test failure. These are pulled out from
the rest, though, because we also clean up a few other
anachronisms. The most interesting is the use of this
here-doc construct:
(cat >... <<EOF
...
EOF
) &&
It looks like an attempt to make the &&-chaining more
natural by letting it come at the end of the here-doc. But
the extra sub-shell is so non-idiomatic (plus the lack of
"<<-") that it ends up confusing.
Since these are just using a single line, we can accomplish
the same thing with a single printf (which also makes the
use of tab more obvious than the verbatim whitespace).
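For example, a one-line expected file can be written as simply as
(contents illustrative):

    printf "A\tfile1\n" >expect &&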
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
As with the earlier patch to fix "trivial" &&-chain
breakage, these missing "&&" operators are not a serious
problem (e.g., we do not expect "echo" to fail).
Ironically, however, inserting them shows that some of the
commands _do_ fail. Specifically, some of the tests start by
making sure we are at a commit with the string "content" in
the file "foo". However, running "git commit" may fail
because the previous test left us in that state already, and
there is nothing to commit.
We could remove these commands entirely, but they serve to
document the test's assumptions, as well as make it robust
when an earlier test has failed. We could use test_might_fail
to handle all cases, but that would miss an unrelated
failure to make the commit. Instead, we can just pass the
--allow-empty flag to git-commit, which means that it will
not complain if our setup is a noop.
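So the setup snippet becomes something along these lines:

    echo content >foo &&
    git add foo &&
    git commit --allow-empty -m "reset to known state" &&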
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The ":" is not a comment marker, but rather a noop command.
Using it as a comment like:
: do something
cmd1 &&
: something else
cmd2
breaks the &&-chain, and we would fail to notice if "cmd1"
failed in this instance. We can just use regular "#"
comments instead.
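That is, the snippet above becomes:

    # do something
    cmd1 &&
    # something else
    cmd2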
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If we are expecting a command to produce a particular exit
code, we can use test_expect_code. However, some cases are
more complicated, and want to accept one of a range of exit
codes. For these, we end up with something like:
cmd;
case "$?" in
...
That unfortunately breaks the &&-chain and fools
--chain-lint. Since these special cases are so few, we can
wrap them in a block, like this:
{ cmd; ret=$?; } &&
case "$ret" in
...
This accomplishes the same thing, and retains the &&-chain
(the exit status fed to the && is that of the assignment,
which should always be true). It's technically longer, but
it is probably a good thing for unusual code like this to
stand out.
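A fuller sketch of the pattern, with the case arms filled in with
illustrative values:

    { cmd; ret=$?; } &&
    case "$ret" in
    0|1)
        : acceptable
        ;;
    *)
        false
        ;;
    esac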
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This makes our output in the event of a failure slightly
nicer, and it means that we do not break the &&-chain.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Some tests run diff or grep to produce an output, and then
compare the output to an expected value. We know the exit
code we expect these processes to have (e.g., grep yields 0
if it produced output and 1 otherwise), so it would not make
the test wrong to look for it. But the difference between
their output and the expected output (e.g., shown by
test_cmp) is much more useful to somebody debugging the test
than the test just bailing out.
These tests break the &&-chain to skip the exit-code check
of the process. However, we can get the same effect by using
test_might_fail. Note that in some cases the test did use
"|| return 1", which meant the test was not wrong, but it
did fool --chain-lint.
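For example (names illustrative):

    test_might_fail git grep foo >actual &&
    test_cmp expect actual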
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Many tests have an initial setup step that might fail based
on whether earlier tests in the script have succeeded or
not. Using a trick like "|| true" breaks the &&-chain,
missing earlier failures (and fooling --chain-lint).
We can use test_might_fail in some cases, which is correct
and makes the intent more obvious. We can also use
test_unconfig for unsetting config (and which is more
robust, as well).
The case in t9500 is an oddball. It wants to run cmd1 _or_
cmd2, and does it like:
cmd1 || cmd2 &&
other_stuff
It's not wrong in this case, but it's a bad habit to get
into, because it breaks the &&-chain if used anywhere except
at the beginning of the test (and we use the correct
solution here, putting it inside a block for precedence).
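The block form keeps both the precedence and the chain intact:

    { cmd1 || cmd2; } &&
    other_stuff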
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
These test scripts likely predate test_must_fail, and can be
made simpler by using it (in addition to making them pass
--chain-lint).
The case in t6036 loses some verbosity in the failure case,
but it is so tied to a specific failure mode that it is not
worth keeping around (and the outcome of the test is not
affected at all).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Many tests that predate the "verbose" helper function use a
pattern like:
test ... || {
echo ...
false
}
to give more verbose output. Using the helper, we can do
this with a single line, and avoid a || which interacts
badly with &&-chaining (besides fooling --chain-lint, we hit
the error block no matter which command in the chain failed,
so we may often show useless results).
In most cases, the messages printed by "verbose" are equally
good (in some cases better; t6006 accidentally redirects the
message to a file!). The exception is t7001, whose output
suffers slightly. However, it's still enough to show the
user which part failed, given that we will have just printed
the test script to stderr.
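For example, a check wrapped in the helper reads (condition
illustrative):

    verbose test "$(cat actual)" = "expected content"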
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Some tests call test_cmp, and if it fails show the actual
output generated. This is mostly pointless, as test_cmp will
already show a diff between the expected and actual output.
It also fools --chain-lint by putting an "||" in the middle
of the chain, so we'd rather not use this construct.
Note that these cases actually show a pre-processed version
of the data, rather than exactly what test_cmp would show.
However, test_cmp's output is generally good for pointing
the user in the right direction, and they can then dig in
the trash directory themselves if they want to see more
details.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
These are tests which are missing a link in their &&-chain,
but during a setup phase. We may fail to notice failure in
commands that build the test environment, but these are
typically not expected to fail at all (still, it is good
to double-check that our test environment is what we
expect).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
These are tests which are missing a link in their &&-chain,
but in a way that probably does not affect the outcome of
the test. Most of these are of the form:
some_cmd >actual
test_cmp expect actual
The main point of the test is to verify the output, and a
failure in some_cmd would probably be noticed by bogus
output. But it is good for the tests to also confirm that
"some_cmd" does not die unexpectedly after producing its
output.
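With the chain repaired, the same snippet reads:

    some_cmd >actual &&
    test_cmp expect actual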
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
These are tests which are missing a link in their &&-chain,
in a location which causes a significant portion of the test
to be missed (e.g., the test effectively does nothing, or
consists of a long string of actions and output comparisons,
and we throw away the exit code of at least one part of the
string).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It's easy to miss an "&&"-chain in a test script, like:
test_expect_success 'check something important' '
cmd1 &&
cmd2
cmd3
'
The test harness will notice if cmd3 fails, but a failure of
cmd1 or cmd2 will go unnoticed, as their exit status is lost
after cmd3 runs.
The toy example above is easy to spot because the "cmds" are
all the same length, but real code is much more complicated.
It's also difficult to detect these situations by statically
analyzing the shell code with regexps (like the
check-non-portable-shell script does); there's too much
context required to know whether a &&-chain is appropriate
on a given line or not.
This patch instead lets the shell check each test by
sticking a command with a specific and unusual return code
at the top of each test, like:
(exit 117) &&
cmd1 &&
cmd2
cmd3
In a well-formed test, the non-zero exit from the first
command prevents any of the rest from being run, and the
test's exit code is 117. In a bad test (like the one above),
the 117 is lost, and cmd3 is run.
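Conceptually, the check amounts to something like the following
before each snippet is run for real (a sketch of the idea, not
necessarily the exact test-lib.sh code; $test_body stands for the
snippet):

    # prepend a sentinel; in a well-chained snippet nothing after it
    # runs, so its unusual exit code must survive to the end
    eval "(exit 117) && $test_body"
    if test "$?" != 117
    then
        echo >&2 "broken &&-chain in test script"
        exit 1
    fi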
When we encounter a failure of this check, we abort the test
script entirely. For one thing, we have no clue which subset
of the commands in the test snippet were actually run.
Running further tests would be pointless, because we're now
in an unknown state. And two, this is not a "test failure"
in the traditional sense. The test script is buggy, not the
code it is testing. We should be able to fix these problems
in the script once, and not have them come back later as a
regression in git's code.
After checking a test snippet for --chain-lint, we do still
run the test itself. We could actually have a pure-lint
mode which just checks each test, but there are a few
reasons not to. One, because the tests are executing
arbitrary code, which could impact the later environment
(e.g., that could impact which set of tests we run at all).
And two, because a pure-lint mode would still be expensive
to run, because a significant amount of code runs outside of
the test_expect_* blocks. Instead, this option is designed
to be used as part of a normal test suite run, where it adds
very little overhead.
Turning on this option detects quite a few problems in
existing tests, which will be fixed in subsequent patches.
However, there are a number of places it cannot reach:
- it cannot find a failure to break out of loops on error,
like:
cmd1 &&
for i in a b c; do
cmd2 $i
done &&
cmd3
which will not notice failures of "cmd2 a" or "cmd2 b".
- it cannot find a missing &&-chain inside a block or
subfunction, like:
foo () {
cmd1
cmd2
}
foo &&
bar
which will not notice a failure of cmd1.
- it only checks tests that you run; every platform will
have some tests skipped due to missing prerequisites,
so it's impossible to say from one run that the test
suite is free of broken &&-chains. However, all tests get
run by _somebody_, so eventually we will notice problems.
- it does not operate on test_when_finished or prerequisite
blocks. It could, but these tend to be much shorter and
less of a problem, so I punted on them in this patch.
This patch was inspired by an earlier patch by Jonathan
Nieder:
http://article.gmane.org/gmane.comp.version-control.git/235913
This implementation and all bugs are mine.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
rev-list --bisect is used by git bisect, but never together with
--first-parent. Because rev-list --bisect together with --first-parent
is not handled currently, and even leads to segfaults, refuse to use
both options together.
Because this is not supported, it makes little sense to use "git log
--bisect --first-parent" either, because refs/bisect/bad is not limited
to the first-parent chain.
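One of the new checks boils down to roughly (a sketch; the exact
revisions used may differ):

    test_must_fail git rev-list --bisect --first-parent HEAD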
Helped-by: Junio C. Hamano <gitster@pobox.com>
Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Kevin Daudt <me@ikke.info>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In everything_local(), we used to assign the current ref's value
found in ref->old_sha1 to ref->new_sha1 when we already have all the
necessary objects to complete the history leading to that
commit. This copying was broken at 49bb805e (Do not ask for
objects known to be complete., 2005-10-19) and ever since we
instead stuffed random bytes into ref->new_sha1 here. No
code complained or failed due to this breakage.
It turns out that no code path that comes after this
assignment even looks at ref->new_sha1 at all.
- The only caller of everything_local(), do_fetch_pack(),
returns this list of refs, whose element has bogus
new_sha1 values, to its caller. It does not look at the
elements itself, but does pass them to find_common, which
looks only at the name and old_sha1 fields.
- The only caller of do_fetch_pack(), fetch_pack(), returns this
list to its caller. It does not look at the elements nor act on
them.
- One of the two callers of fetch_pack() is cmd_fetch_pack(), the
top-level that implements "git fetch-pack". The only thing it
looks at in the elements of the returned ref list is the old_sha1
and name fields.
- The other caller of fetch_pack() is fetch_refs_via_pack() in the
transport layer, which is a helper that implements "git fetch".
It only cares about whether the returned list is empty (i.e.
failed to fetch anything).
Just drop the bogus assignment, which is not even necessary. The
remote-tracking refs are updated based on a different list and not
using the ref list being manipulated by this code path; the caller
do_fetch_pack() created a copy of that real ref list and passed the
copy down to this function, and modifying the elements here does not
affect anything.
Noticed-by: Kyle J. McKay <mackyle@gmail.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When fetch_refs_via_pack calls fetch_pack(), we pass a
list of refs to fetch, and the function returns either a
copy of that list, with the fetched items filled in, or
NULL. We check the return value to see whether the fetch was
successful, but do not otherwise look at the copy, and
simply leak it at the end of the function.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If the server supports allow_tip_sha1_in_want, we add any
unmatched raw-sha1 entries in our "sought" list of refs to
the list of refs we will ask the other side for. We do so by
inserting the original "struct ref" directly into our list,
rather than making a copy. This has several problems.
The most minor problem is that one cannot ever free the
resulting list; it contains structs that are copies of the
remote refs (made earlier by fetch_pack) along with sought
refs that are referenced elsewhere.
But more importantly, we set the ref->next pointer to
NULL, chopping off the remainder of any existing list that
the ref was a part of. We get the set of "sought" refs in
an array rather than a linked list, but that array is often
in turn generated from a list. The test modification in
t5516 demonstrates this. Rather than fetching just an exact
sha1, we fetch that sha1 plus another ref:
- we build a linked list of refs to fetch when do_fetch
calls get_ref_map; the exact sha1 is first, followed by
the named ref ("refs/heads/extra" in this case).
- we pass that linked list to transport_fetch_refs, which
squashes it into an array of pointers
- that array goes to fetch_pack, which calls filter_ref.
There we generate the want list from a mix of what the
remote side has advertised, and the "sought" entry for
the exact sha1. We set the sought entry's "next" pointer
to NULL.
- after we return from transport_fetch_refs, we then try
to update the refs by following the linked list. But our
list is now truncated, and we do not update
refs/heads/extra at all.
We can fix this by making a copy of the ref. There's nothing
that fetch_pack does to it that must be reflected in the
original "sought" list (and indeed, if that were the case we
would have a serious bug, because it is only exact-sha1
entries which are treated this way).
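The scenario the updated t5516 test exercises looks roughly like this
(a sketch; refnames are illustrative, and the server side must allow
fetching the exact sha1):

    git fetch origin $exact_sha1 refs/heads/extra:refs/heads/extra &&
    git rev-parse --verify refs/heads/extra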
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If the server supports allow_tip_sha1_in_want, then
fetch-pack's filter_refs function tries to check whether a
ref is a request for a straight sha1 by running:
if (get_sha1_hex(ref->name, ref->old_sha1))
...
I.e., we are using get_sha1_hex to ask "is this ref name a
sha1?". If it is true, then the contents of ref->old_sha1
will end up unchanged. But if it is false, then get_sha1_hex
makes no guarantees about what it has written. With a ref
name like "abcdefoo", we would overwrite 3 bytes of
ref->old_sha1 before realizing that it was not a sha1.
This is likely not a problem in practice, as anything in
ref->name (besides a sha1) will start with "refs/", meaning
that we would notice on the first character that there is a
problem. Still, we are making assumptions about the state
left in the output when get_sha1_hex returns an error (e.g.,
it could parse starting from the end of the string, or check
the values for errors only after placing them in the output). It's
better to be defensive.
We could just check that we have exactly 40 characters of
sha1. But let's be even more careful and make sure that we
have a 40-char hex refname that matches what is in old_sha1.
This is perhaps overly defensive, but spells out our
assumptions clearly.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We do not usually end our errors with a full stop, but it
looks especially bad when you use die_errno, which adds a
colon, like:
fatal: could not create work tree dir 'foo'.: No such file or directory
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If clone fails, we generally try to clean up any directories
we've created. We do this by installing an atexit handler,
so that we don't have to manually trigger cleanup. However,
since we install this after touching the filesystem, any
errors between our initial mkdir() and our atexit() call
will result in us leaving a crufty directory around.
We can fix this by moving our atexit() call earlier. It's OK
to do it before the junk_work_tree variable is set, because
remove_junk makes sure the variable is initialized. This
means we "activate" the handler by assigning to the
junk_work_tree variable, which we now bump down to just
after we call mkdir(). We probably do not want to do it
before, because a plausible reason for mkdir() to fail is
EEXIST (i.e., we are racing with another "git init"), and we
would not want to remove their work.
OTOH, this is probably not that big a deal; we will allow
cloning into an empty directory (and skip the mkdir), which
is already racy (i.e., one clone may see the other's empty
dir and start writing into it). Still, it does not hurt to
err on the side of caution here.
Note that writing into junk_work_tree and junk_git_dir after
installing the handler is also technically racy, as we call
our handler on an async signal. Depending on the platform,
we could see a sheared write to the variables. Traditionally
we have not worried about this, and indeed we already do
this later in the function. If we want to address that, it
can come as a separate topic.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Right now we return a NULL "struct sha1file" if we encounter
an error. However, the sole caller (write_idx_file) does not
check the return value, and will segfault if we hit this
case.
One option would be to handle the error in the caller.
However, there's really nothing for it to do but die. This
code path is hit during "git index-pack --verify"; after we
verify the packfile, we check that the ".idx" we would
generate from it is byte-wise identical to what is on disk.
We hit the error (and segfault) if we can't open the .idx
file (a likely cause of this is that somebody else ran "git
repack -ad" while we were verifying). Since we can't
complete the requested verification, we really have no
choice but to die.
Furthermore, the rest of the sha1fd_* functions simply die
on errors. So if we were to open the file successfully, for
example, and then hit a read error, sha1write would call
die() for us. So pushing the die() down into sha1fd_check
keeps the interface consistent.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When grep is called with the --quiet option, the pager is initialized
despite not being used. When the pager is "less", anything output by
previous commands and not ended with a newline is overwritten:
$ echo -n aaa; echo bbb
aaabbb
$ echo -n aaa; git grep -q foo; echo bbb
bbb
This can be worked around, for example, by making sure STDOUT is not a
TTY or more directly by setting git's pager to "cat":
$ echo -n aaa; git grep -q foo > /dev/null; echo bbb
aaabbb
$ echo -n aaa; PAGER=cat git grep -q foo; echo bbb
aaabbb
Instead, prevent the pager from being started in the first place,
which also saves an unnecessary fork().
Signed-off-by: Wilhelm Schuermann <wimschuermann@googlemail.com>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Because "--graph" is about connected history while --no-walk is
about discrete points, it does not make sense to allow these two
options at the same time. [1]
This change makes a few calls to "show --graph" fail in t4052, but
asking to show one commit with a graph is a nonsensical thing to do.
Thus, the "show --graph" tests in t4052 have been removed [2,3]; the
same "show" tests without the --graph option are already covered
there.
Three test cases have been added to exercise this patch.
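One of them amounts to roughly (a sketch; the exact revisions used
may differ):

    test_must_fail git log --graph --no-walk HEAD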
[1]: http://article.gmane.org/gmane.comp.version-control.git/216083
[2]: http://article.gmane.org/gmane.comp.version-control.git/264950
[3]: http://article.gmane.org/gmane.comp.version-control.git/265107
Helped-By: Eric Sunshine <sunshine@sunshineco.com>
Helped-By: René Scharfe <l.s.r@web.de>
Helped-By: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Dongcan Jiang <dongcan.jiang@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git status" now allows the "-v" to be given twice to show the
differences that are left in the working tree not to be committed.
* mg/status-v-v:
commit/status: show the index-worktree diff with -v -v
t7508: test git status -v
t7508: .gitignore 'expect' and 'output' files
"git cherry-pick" used to clean-up the log message even when it is
merely replaying an existing commit. It now replays the message
verbatim unless you are editing the message of resulting commits.
* mg/sequencer-commit-messages-always-verbatim:
sequencer: preserve commit messages
"git rebase -i" recently started to include the number of
commits in the insn sheet to be processed, but on a platform
that prepends leading whitespace to "wc -l" output, the numbers
are shown with unnecessary extra whitespace.
* es/rebase-i-count-todo:
rebase-interactive: re-word "item count" comment
rebase-interactive: suppress whitespace preceding item count
In step "(4) Sending your patches", we instruct users to do an
inline patch, avoid whitespace breakage, avoid attachments, use
[PATCH v2] for the second round, etc., all of which the format-patch
and send-email combo knows how to do well.
The need was identified by, and the text is based on the work by
Cody Taylor.
Suggested-by: Cody Taylor <cody.taylor@maternityneighborhood.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The __git_ps1() prompt function would not show an untracked state
when all the untracked files are outside the current working
directory.
Signed-off-by: Cody A Taylor <codemister99@yahoo.com>
Helped-by: SZEDER Gábor <szeder@ira.uka.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>