In many test scripts, there are bespoke definitions of the single quote
that are some variation of this:
SQ="'"
Define a common $SQ variable in test-lib.sh and replace all usages of
these bespoke variables with the common one.
This change was done by running `git grep =\"\'\" t/` and
`git grep =\\\\\'` and manually changing the resulting definitions and
corresponding usages.
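For illustration, a test script can now rely on the shared variable
without defining it locally; a contrived but self-contained example,
not a hunk from the actual change:
    test_expect_success 'single quotes via $SQ' '
        echo "a${SQ}b" >expect &&
        printf "a%sb\n" "$SQ" >actual &&
        test_cmp expect actual
    '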
Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In files_transaction_prepare(), if we have to delete some refs, we use a
subordinate packed_transaction to do so. It's rare for that
sub-transaction's prepare step to fail, since we hold the packed-refs
lock. But if it does, we trigger a BUG() due to these steps:
- we've attached the packed transaction to the files transaction as
backend_data->packed_transaction
- when the prepare step fails, the packed transaction cleans itself
up, putting itself into the CLOSED state
- the error value from preparing the packed transaction lets us know
in files_transaction_prepare() that we should also clean up and
return an error. We call files_transaction_cleanup(), which tries to
abort backend_data->packed_transaction. Since it's already CLOSED,
that triggers an assertion in ref_transaction_abort().
We can fix that by disconnecting the packed transaction from the outer
files transaction, and then free-ing (not aborting!) it ourselves.
A few other options/alternatives I considered:
- we could just make it a noop to abort a CLOSED transaction. But that
seems less safe, since clearly this code expects (and enforces) a
particular set of state transitions.
- we could have files_transaction_cleanup() selectively call abort()
vs free() based on the state of the packed transaction.
That's basically a more restricted version of the above, but also
potentially unsafe.
- instead of disconnecting backend_data->packed_transaction on error,
we could wait to install it until we successfully prepare. That
might make the flow a little simpler, but it introduces a hassle.
Earlier parts of files_transaction_prepare() that encounter an error
will jump to the cleanup label, and expect that cleaning up the
outer transaction will clean up the packed transaction, too. We'd
have to adjust those sites to clean up the packed transaction.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Even in the C locale, it is wrong to expect that the exact phrasing
"File exists" is used to show EEXIST.
Reported-by: Randall S. Becker <rsbecker@nexbridge.com>
Helped-by: Duy Nguyen <pclouds@gmail.com>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The test 'no bogus intermediate values during delete' in
't1404-update-ref-errors.sh', added in 6a2a7736d8 (t1404: demonstrate
two problems with reference transactions, 2017-09-08), tries to catch
undesirable side effects of deleting a ref, both loose and packed, in
a transaction. To do so it holds the packed-refs file locked when it
starts 'git update-ref -d' in the background with a 3-second
'core.packedRefsTimeout' value. After performing a few checks it is
then supposed to unlock the packed-refs file before the background
'git update-ref's attempt to acquire the lock times out.
While a 3-second timeout seems plenty, and indeed is sufficient in most
cases, on rare occasions it's just not quite enough: I saw this test
fail in Travis CI build jobs two, maybe three times because 'git
update-ref' timed out.
Increase that timeout by an order of magnitude to 30s to make such an
occasional failure even more improbable. This won't make the test run
any longer under normal circumstances, because 'git update-ref' will
acquire the lock and resume execution as soon as it can. And if it
turns out that even this increased timeout is still not enough, then
there are most likely bigger problems, e.g. the Travis CI build job
will exceed its time limit anyway, or the lockfile module is broken.
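To recap how the pieces fit together, the arrangement looks roughly
like this (a simplified sketch, not the literal test code):
    >.git/packed-refs.lock         # hold the packed-refs lock ourselves
    # the backgrounded deletion must wait for that lock, for up to 30s
    git -c core.packedRefsTimeout=30000 update-ref -d refs/heads/foo &
    # ...check the intermediate state while the deletion is blocked...
    rm -f .git/packed-refs.lock    # release the lock before the timeout
    wait                           # let the backgrounded deletion finish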
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since dc39e09942 (files_ref_store: use a transaction to update packed
refs, 2017-09-08), failure to lock a reference has been handled
incorrectly by `files_transaction_prepare()`. If
`lock_ref_for_update()` fails in the lock-acquisition loop of that
function, it sets `ret` then breaks out of that loop. Prior to
dc39e09942, that was OK, because the only thing following the loop was
the cleanup code. But dc39e09942 added another blurb of code between
the loop and the cleanup. That blurb sometimes resets `ret` to zero,
making the cleanup code think that the locking was successful.
Specifically, whenever
* One or more reference deletions have been processed successfully in
the lock-acquisition loop. (Processing the first such reference
causes a packed-ref transaction to be initialized.)
* Then `lock_ref_for_update()` fails for a subsequent reference. Such
a failure can happen for a number of reasons, such as the old SHA-1
not being correct, lock contention, etc. This causes a `break` out
of the lock-acquisition loop.
* The `packed-refs` lock is acquired successfully and
`ref_transaction_prepare()` succeeds for the packed-ref transaction.
This has the effect of resetting `ret` back to 0, and making the
cleanup code think that lock acquisition was successful.
In that case, any reference updates that were processed prior to
breaking out of the loop would be carried out (loose and packed), but
the reference that couldn't be locked and any subsequent references
would silently be ignored.
This can easily cause data loss if, for example, the user was trying
to push a new name for an existing branch while deleting the old name.
After the push, the branch could be left unreachable, and could even
subsequently be garbage-collected.
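For instance, a branch rename pushed as one atomic transaction
(illustrative command, not taken from an actual report):
    # create the new name and delete the old one in a single push; with
    # --atomic both updates are applied in one transaction on the receiver
    git push --atomic origin \
        refs/heads/topic:refs/heads/renamed :refs/heads/topic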
This problem was noticed in the context of deleting one reference and
creating another in a single transaction, when the two references D/F
conflict with each other, like
    git update-ref --stdin <<EOF
    delete refs/foo
    create refs/foo/bar HEAD
    EOF
This triggers the above bug because the deletion is processed
successfully for `refs/foo`, then the D/F conflict causes
`lock_ref_for_update()` to fail when `refs/foo/bar` is processed. In
this case the transaction *should* fail, but instead it causes
`refs/foo` to be deleted without creating `refs/foo/bar`. This could
easily result in data loss.
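With the bug, the aftermath of the transaction above can be observed
directly (hypothetical check against a pre-fix git):
    git rev-parse --verify refs/foo        # fails: the ref was deleted
    git rev-parse --verify refs/foo/bar    # fails: it was never created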
The fix is simple: instead of just breaking out of the loop, jump
directly to the cleanup code. This fixes some tests in t1404 that were
added in the previous commit.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It is currently not allowed, in a single transaction, to add one
reference and delete another reference if the two reference names D/F
conflict with each other (e.g., like `refs/foo/bar` and `refs/foo`).
The reason is that the code would need to take locks
    $GIT_DIR/refs/foo.lock
    $GIT_DIR/refs/foo/bar.lock
But the latter lock couldn't coexist with the loose reference file
    $GIT_DIR/refs/foo
, because `$GIT_DIR/refs/foo` cannot be both a directory and a file at
the same time (hence the name "D/F conflict").
Add a bunch of tests that we cleanly reject such transactions.
In fact, many of the new tests currently fail. They will be fixed in
the next commit along with an explanation.
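The new tests follow this general shape (a simplified sketch, not the
literal test code):
    test_expect_success 'delete + add with D/F conflict is rejected' '
        git update-ref refs/heads/df-test HEAD &&
        {
            echo "delete refs/heads/df-test" &&
            echo "create refs/heads/df-test/nested HEAD"
        } >stdin &&
        test_must_fail git update-ref --stdin <stdin
    '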
Reported-by: Jeff King <peff@peff.net>
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When processing a `files_ref_store` transaction, it is sometimes
necessary to delete some references from the "packed-refs" file. Do
that using a reference transaction conducted against the
`packed_ref_store`.
This change further decouples `files_ref_store` from
`packed_ref_store`. It also fixes multiple problems, including the two
revealed by test cases added in the previous commit.
First, the old code didn't obtain the `packed-refs` lock until
`files_transaction_finish()`. This means that a failure to acquire the
`packed-refs` lock (e.g., due to contention with another process)
wasn't detected until it was too late (problems like this are supposed
to be detected in the "prepare" phase). The new code acquires the
`packed-refs` lock in `files_transaction_prepare()`, the same stage of
the processing when the loose reference locks are being acquired,
removing another reason why the "prepare" phase might succeed and the
"finish" phase might nevertheless fail.
Second, the old code deleted the loose version of a reference before
deleting any packed version of the same reference. This left a moment
when another process might think that the packed version of the
reference is current, which is incorrect. (Even worse, the packed
version of the reference can be arbitrarily old, and might even point
at an object that has since been garbage-collected.)
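A concrete way to picture that window (hypothetical sequence; $A and
$B stand for two different commits):
    git update-ref refs/heads/topic $A   # some old value
    git pack-refs --all                  # $A lands in packed-refs
    git update-ref refs/heads/topic $B   # loose file recreated with $B
    # Old order: "git update-ref -d refs/heads/topic" removed the loose
    # file before touching packed-refs, so a reader racing in between
    # resolved the ref to the stale packed value $A.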
Third, if a reference deletion fails to acquire the `packed-refs` lock
altogether, then the old code might leave the repository in the
incorrect state (possibly corrupt) described in the previous
paragraph.
Now we activate the new "packed-refs" file (sans any references that
are being deleted) *before* deleting the corresponding loose
references. But we hold the "packed-refs" lock until after the loose
references have been finalized, thus preventing a simultaneous
"pack-refs" process from packing the loose version of the reference in
the time gap, which would otherwise defeat our attempt to delete it.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Currently, a loose reference is deleted even before locking the
`packed-refs` file, let alone deleting any packed version of the
reference. This leads to two problems, demonstrated by two new tests:
* While a reference is being deleted, other processes might see the
old, packed value of the reference for a moment before the packed
version is deleted. Normally this would be hard to observe, but we
can prolong the window by locking the `packed-refs` file externally
before running `update-ref`, then unlocking it before `update-ref`'s
attempt to acquire the lock times out.
* If the `packed-refs` file is locked so long that `update-ref` fails
to lock it, then the reference can be left permanently in the
incorrect state described in the previous point.
In a moment, both problems will be fixed.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
To aid the effort, extract a new function, check_old_oid(), and use it
in the two places where the read value of the reference has to be
checked against update->old_sha1.
Update tests to reflect the improvements.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Some of the error messages will be improved in subsequent commits.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The tests already set a variable called prefix and passed its value as
the first argument to this function. The old argument handling was
overwriting the global variable with the same value rather than creating
a local variable.
So change test_update_rejected to refer to the global variable rather
than taking the prefix as an argument.
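In other words, the helper now begins roughly like this (hypothetical
sketch; apart from "prefix", the names are illustrative):
    test_update_rejected () {
        before="$1" &&
        ...
    }
and the callers simply drop the now-redundant first argument; the body
refers to the caller's $prefix directly.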
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
I want to broaden the scope of this test file, so rename it accordingly.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>