Since efb779f (merge, pull: add '--(no-)log' command line option,
2008-04-06), git-pull has supported the --(no-)log switch, passing it
on to git-merge.
96e9420 (merge: Make '--log' an integer option for number of shortlog
entries, 2010-09-08) implemented support for the --log=<n> switch, which
would explicitly set the number of shortlog entries. However, git-pull
does not recognize this option, and will instead pass it to git-fetch,
leading to "unknown option" errors.
Fix this by matching --log=* in addition to --log and --no-log.
Implement a test for this use case.
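The intended matching boils down to the predicate below. git-pull was
still a shell script at the time, so the actual change is an extra
"--log=*" arm in its option-handling case statement; this C sketch
(with a made-up helper name) only mirrors the logic:

    #include <string.h>

    /* accept --log, --no-log, and --log=<n> */
    static int is_log_option(const char *arg)
    {
            return !strcmp(arg, "--log") ||
                   !strcmp(arg, "--no-log") ||
                   !strncmp(arg, "--log=", 6);
    }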
Signed-off-by: Paul Tan <pyokagan@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
`git add` of an empty file with a filter pops complaints from
`copy_fd` about a bad file descriptor.
This traces back to these lines in sha1_file.c:index_core:
    if (!size) {
            ret = index_mem(sha1, NULL, size, type, path, flags);
The problem here is that content to be added to the index can be
supplied from an fd, or from a memory buffer, or from a pathname. This
call is supplying a NULL buffer pointer and a zero size.
Downstream logic takes the complete absence of a buffer to mean the
data is to be found elsewhere -- for instance, these, from convert.c:
    if (params->src) {
            write_err = (write_in_full(child_process.in, params->src, params->size) < 0);
    } else {
            write_err = copy_fd(params->fd, child_process.in);
    }
If there's a buffer, write from that; otherwise the data must be coming
from an open fd. That's perfectly reasonable logic in a routine that's
going to write from either a buffer or an fd.
So change `index_core` to supply an empty buffer when indexing an empty
file.
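A minimal sketch of the change, mirroring the fragment quoted above
(illustrative, not the exact upstream diff):

    if (!size) {
            /* a valid, empty buffer instead of a NULL pointer */
            ret = index_mem(sha1, "", size, type, path, flags);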
There's a patch out there that instead changes the logic quoted above to
take a `-1` fd to mean "use the buffer", but it seems to me that the
distinction between a missing buffer and an empty one carries intrinsic
semantics, where the logic change is adapting the code to handle
incorrect arguments.
Signed-off-by: Jim Hill <gjthill@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we die() in http-backend, we call a custom handler that
writes an HTTP 500 response to stdout, then reports the
error to stderr. Our routines for writing out the HTTP
response may themselves die, leading to us entering die()
again.
When it was originally written, that was OK; our custom
handler keeps a variable to notice this and does not
recurse. However, since cd163d4 (usage.c: detect recursion
in die routines and bail out immediately, 2012-11-14), the
main die() implementation detects recursion before we even
get to our custom handler, and bails without printing
anything useful.
We can handle this case by doing two things:
1. Installing a custom die_is_recursing handler that
allows us to enter up to one level of recursion (see
the sketch below). Only
the first call to our custom handler will try to write
out the error response. So if we die again, that is OK.
If we end up dying more than that, it is a sign that we
are in an infinite recursion.
2. Reporting the error to stderr before trying to write
out the HTTP response. In the current code, if we do
die() trying to write out the response, we'll exit
immediately from this second die(), and never get a
chance to output the original error (which is almost
certainly the more interesting one; the second die is
just going to be along the lines of "I tried to write
to stdout but it was closed").
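A sketch of the handler described in (1); set_die_is_recursing_routine()
is the existing hook in usage.c, while the function and counter names
below are illustrative:

    static int dying;

    static int die_webcgi_recursing(void)
    {
            /*
             * Allow one level of recursion: the original die() may die
             * again while writing the HTTP 500 response, but a third
             * call means we are looping.
             */
            return dying++ > 1;
    }

    /* at startup: */
    set_die_is_recursing_routine(die_webcgi_recursing);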
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If you run "git rerere forget foo" in a repository that does
not have rerere enabled, git hits an internal error:
    $ git init -q
    $ git rerere forget foo
    fatal: BUG: attempt to commit unlocked object
The problem is that setup_rerere() will not actually take
the lock if the rerere system is disabled. We should notice
this and return early. We can return with a success code
here, because we know there is nothing to forget.
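A sketch of the early return; setup_rerere() and RERERE_NOAUTOUPDATE
are the existing rerere.[ch] interfaces, but the surrounding details
are illustrative:

    fd = setup_rerere(&merge_rr, RERERE_NOAUTOUPDATE);
    if (fd < 0)
            return 0;       /* rerere is disabled; nothing to forget */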
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since 441ed41 ("git pull --tags": error out with a better message.,
2007-12-28), git pull --tags would print a different error message if
git-fetch did not return any merge candidates:
    It doesn't make sense to pull all tags; you probably meant:
            git fetch --tags
This is because at that time, git-fetch --tags would override any
configured refspecs, and thus there would be no merge candidates. The
error message was thus introduced to prevent confusion.
However, since c5a84e9 (fetch --tags: fetch tags *in addition to*
other stuff, 2013-10-30), git fetch --tags would fetch tags in addition
to any configured refspecs. Hence, if a "no merge candidates" situation
occurs, it is not because --tags was set. As such, this special error
message is now irrelevant.
To prevent confusion, remove this error message.
Signed-off-by: Paul Tan <pyokagan@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The former seems to just be syntactic sugar for the latter.
And as it's sugar that AsciiDoctor doesn't understand, it
would be nice to avoid it. Since there are only two spots,
and the resulting source is not significantly harder to
read, it's worth doing.
Note that this does slightly affect the generated HTML (it
has an extra newline), but the rendered result for both HTML
and docbook should be the same (since the newline is not
syntactically significant there).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git cat-file bl $blob" failed to barf even though there is no
object type that is "bl".
* jk/type-from-string-gently:
type_from_string_gently: make sure length matches
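The fix amounts to insisting on an exact length match instead of a
prefix match; a sketch of the kind of check involved (names here are
only illustrative):

    if (strlen(object_type_strings[i]) == len &&
        !memcmp(str, object_type_strings[i], len))
            return i;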
Teach the codepaths that read .gitignore and .gitattributes files
that these files, when encoded in UTF-8, may begin with a UTF-8 BOM;
this brings them in line with what we already do for configuration
files.
* cn/bom-in-gitignore:
attr: skip UTF8 BOM at the beginning of the input file
config: use utf8_bom[] from utf.[ch] in git_parse_source()
utf8-bom: introduce skip_utf8_bom() helper
add_excludes_from_file: clarify the bom skipping logic
dir: allow a BOM at the beginning of exclude files
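Conceptually the new helper is a small prefix check; a sketch (the real
helper lives in utf8.[ch], and the exact signature here should be taken
as an assumption):

    #include <string.h>

    static const char utf8_bom[] = "\xef\xbb\xbf";

    int skip_utf8_bom(char **text, size_t len)
    {
            if (len < strlen(utf8_bom) ||
                memcmp(*text, utf8_bom, strlen(utf8_bom)))
                    return 0;
            *text += strlen(utf8_bom);
            return 1;
    }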
Access to objects in repositories that borrow from another one on a
slow NFS server got unnecessarily more expensive, because recent code
became more cautious, in a naive way, about not losing objects to
pruning.
* jk/prune-mtime:
sha1_file: only freshen packs once per run
sha1_file: freshen pack objects before loose
reachable: only mark local objects as recent
We avoid setting core.worktree when the repository location is the
".git" directory directly at the top level of the working tree, but
the code misdetected the case in which the working tree is at the
root level of the filesystem (which arguably is a silly thing to
do, but still valid).
* jk/init-core-worktree-at-root:
init: don't set core.worktree when initializing /.git
The DECORATE_SHORT_REFS option given to load_ref_decorations()
affects the way a copy of the refname is stored for each decorated
commit, and this forces later steps like current_pointed_by_HEAD()
to adjust their behaviour based on that initial setting.
Instead, we can always store the full refname and then shorten it
when producing the output.
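What "shorten only at output time" looks like, roughly (prettify_refname()
is the sort of helper involved; the surrounding variables are illustrative):

    const char *name = decoration->name;    /* always the full refname */

    if (!decorate_full)
            name = prettify_refname(name);  /* refs/heads/master -> master */
    strbuf_addstr(&sb, name);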
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The previous step to teach "log --decorate" to show "HEAD -> master"
instead of "HEAD, master" when showing the commit at the tip of the
'master' branch, when the 'master' branch is checked out, did not
work for "log --decorate=full".
The commands in the "log" family prepare commit decorations for all
refs upfront, and the actual string used in a decoration depends on
how load_ref_decorations() is called very early in the process. By
default, "git log --decorate" stores names with common prefixes such
as "refs/heads" stripped; "git log --decorate=full" stores the full
refnames.
When the current_pointed_by_HEAD() function has to decide if "HEAD"
points at the branch a decoration describes, however, what was
passed to load_ref_decorations() to decide to strip (or keep) such a
common prefix is long lost. This makes it impossible to reliably
tell if a decoration that stores "refs/heads/master", for example,
is the 'master' branch (under "--decorate" with prefix omitted) or
'refs/heads/master' branch (under "--decorate=full").
Keep what was passed to load_ref_decorations() in a global next to
the global variable name_decoration, and use that to decide how to
match what was read from "HEAD" and what is in a decoration.
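A sketch of the approach (the variable name is illustrative):

    static int decoration_flags;    /* what load_ref_decorations() was told */

    void load_ref_decorations(int flags)
    {
            decoration_flags = flags;
            ...
    }

    /*
     * Later, current_pointed_by_HEAD() can look at decoration_flags to
     * tell whether a stored name like "refs/heads/master" was kept in
     * full or was shortened, and compare it against HEAD accordingly.
     */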
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This makes sure that they do not get turned into links. Regular
AsciiDoc does not catch these cases, but AsciiDoctor does treat them
as links.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Text like "{foo}" triggers an AsciiDoc attribute; we have to
write "\{foo}" to suppress this. But when the "foo" is not a
syntactically valid attribute, we can skip the quoting. This
makes the source nicer to read, and looks better under
Asciidoctor. With AsciiDoc itself, this patch produces no
changes.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Older versions of AsciiDoc would convert the "--" in
"--option" into an emdash. According to 565e135
(Documentation: quote double-dash for AsciiDoc, 2011-06-29),
this is fixed in AsciiDoc 8.3.0. According to bf17126, we
don't support anything older than 8.4.1 anyway, so we no
longer need to worry about quoting.
Even though this does not change the output at all, there
are a few good reasons to drop the quoting:
1. It makes the source prettier to read.
2. We don't quote consistently, which may be confusing when
reading the source.
3. Asciidoctor does not like the quoting, and renders a
literal backslash.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
All of the other options in the list present their short and long
forms as two separate headings.
We can also drop the backslashing of `--`. It isn't used
elsewhere and is unnecessary for modern asciidoc (plus it
confuses asciidoctor).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In AsciiDoc, it is OK to say:
this is my title
-------------------------
but AsciiDoctor is more strict. Let's match the underline to
the title (which also makes the source prettier to read).
The output from AsciiDoc is the same either way.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In list content that wants to continue to a second
paragraph, the "+" continuation and subsequent paragraph
need to be left-aligned. Otherwise AsciiDoc seems to insert
only a linebreak.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Curly braces open an "attribute" in AsciiDoc; if there's no
such attribute, strange things may happen. In this case, the
unquoted "{type}" causes AsciiDoc to omit an entire line of
text from the output. We can fix it by putting the whole
phrase inside literal backticks (which also lets us get rid
of ugly backslash escaping).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
AsciiDoc misparses some text that contains a `literal`
word followed by a fancy `single quote' word, and treats
everything from the start of the literal to the end of the
quote as a single-quoted phrase.
We can work around this by switching the latter to be a
literal, as well. In the first case, this is perhaps what
was intended anyway, as it makes us consistent with the
earlier literals in the same paragraph. In the second, the
output is arguably better, as we will format our commit
references as <code> blocks.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The asciidoctor renderer is more picky than classic asciidoc,
and insists that the start and end of a code fence be the
same size.
Found with this hacky perl script:
    foreach my $fn (@ARGV) {
        open(my $fh, '<', $fn);
        my ($fence, $fence_lineno, $prev);
        while (<$fh>) {
            chomp;
            if (/^----+$/) {
                if ($fence_lineno) {
                    if ($_ ne $fence) {
                        print "$fn:$fence_lineno:mismatched fence: ",
                              length($fence), " != ", length($_), "\n";
                    }
                    $fence_lineno = undef;
                }
                # hacky check to avoid title-underlining
                elsif ($prev eq '' || $prev eq '+') {
                    $fence = $_;
                    $fence_lineno = $.;
                }
            }
            $prev = $_;
        }
    }
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* mh/write-refs-sooner-2.3:
ref_transaction_commit(): fix atomicity and avoid fd exhaustion
ref_transaction_commit(): remove the local flags variable
ref_transaction_commit(): inline call to write_ref_sha1()
rename_ref(): inline calls to write_ref_sha1() from this function
commit_ref_update(): new function, extracted from write_ref_sha1()
write_ref_to_lockfile(): new function, extracted from write_ref_sha1()
t7004: rename ULIMIT test prerequisite to ULIMIT_STACK_SIZE
update-ref: test handling large transactions properly
The old code was roughly
    for update in updates:
        acquire locks and check old_sha
    for update in updates:
        if changing value:
            write_ref_to_lockfile()
            commit_ref_update()
    for update in updates:
        if deleting value:
            unlink()
    rewrite packed-refs file
    for update in updates:
        if reference still locked:
            unlock_ref()
This has two problems.
Non-atomic updates
==================
The atomicity of the reference transaction depends on all pre-checks
being done in the first loop, before any changes have started being
committed in the second loop. The problem is that
write_ref_to_lockfile() (previously part of write_ref_sha1()), which
is called from the second loop, contains two more checks:
* It verifies that new_sha1 is a valid object
* If the reference being updated is a branch, it verifies that
new_sha1 points at a commit object (as opposed to a tag, tree, or
blob).
If either of these checks fails, the "transaction" is aborted during
the second loop. But this might happen after some reference updates
have already been permanently committed. In other words, the
all-or-nothing promise of "git update-ref --stdin" could be violated.
So these checks have to be moved to the first loop.
File descriptor exhaustion
==========================
The old code locked all of the references in the first loop, leaving
all of the lockfiles open until later loops. Since we might be
updating a lot of references, this could result in file descriptor
exhaustion.
The solution
============
After this patch, the code looks like
    for update in updates:
        acquire locks and check old_sha
        if changing value:
            write_ref_to_lockfile()
        else:
            close_ref()
    for update in updates:
        if changing value:
            commit_ref_update()
    for update in updates:
        if deleting value:
            unlink()
    rewrite packed-refs file
    for update in updates:
        if reference still locked:
            unlock_ref()
This fixes both problems:
1. The pre-checks in write_ref_to_lockfile() are now done in the first
loop, before any changes have been committed. If any of the checks
fails, the whole transaction can now be rolled back correctly.
2. All lockfiles are closed in the first loop immediately after they
are created (either by write_ref_to_lockfile() or by close_ref()).
This means that there is never more than one open lockfile at a
time, preventing file descriptor exhaustion.
To simplify the bookkeeping across loops, add a new REF_NEEDS_COMMIT
bit to update->flags, which keeps track of whether the corresponding
lockfile needs to be committed, as opposed to just unlocked. (Since
"struct ref_update" is internal to the refs module, this change is not
visible to external callers.)
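A rough sketch of how the new bit ties the loops together (field names
and function signatures are simplified and should be treated as
illustrative):

    /* first loop: nothing has been committed yet, so failure is safe */
    if (!(update->flags & REF_DELETING)) {
            if (write_ref_to_lockfile(update->lock, update->new_sha1))
                    goto cleanup;           /* roll the whole transaction back */
            update->flags |= REF_NEEDS_COMMIT;
    } else {
            close_ref(update->lock);        /* keep the lock, release the fd */
    }

    /* second loop: make the prepared updates visible */
    if (update->flags & REF_NEEDS_COMMIT)
            commit_ref_update(update->lock, update->new_sha1, update->msg);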
This change fixes two tests in t1400.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Instead, work directly with update->flags. This has the advantage that
the REF_DELETING bit, set in the first loop, can be read in the second
loop instead of having to be recomputed. Plus, it was potentially
confusing having both update->flags and flags, which sometimes had
different values.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
That was the last caller, so delete the function write_ref_sha1().
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Most of what it does is not needed at these call sites.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is the first step towards separating the checking and writing of
the new reference value from committing the change.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
During creation of the patch series, our discussion revealed that we
could have a more descriptive name for the prerequisite for the test,
so it stays unique when other limits of ulimit are introduced.
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* mh/write-refs-sooner-2.2:
ref_transaction_commit(): fix atomicity and avoid fd exhaustion
ref_transaction_commit(): remove the local flags variable
ref_transaction_commit(): inline call to write_ref_sha1()
rename_ref(): inline calls to write_ref_sha1() from this function
commit_ref_update(): new function, extracted from write_ref_sha1()
write_ref_to_lockfile(): new function, extracted from write_ref_sha1()
t7004: rename ULIMIT test prerequisite to ULIMIT_STACK_SIZE
update-ref: test handling large transactions properly
The old code was roughly
    for update in updates:
        acquire locks and check old_sha
    for update in updates:
        if changing value:
            write_ref_to_lockfile()
            commit_ref_update()
    for update in updates:
        if deleting value:
            unlink()
    rewrite packed-refs file
    for update in updates:
        if reference still locked:
            unlock_ref()
This has two problems.
Non-atomic updates
==================
The atomicity of the reference transaction depends on all pre-checks
being done in the first loop, before any changes have started being
committed in the second loop. The problem is that
write_ref_to_lockfile() (previously part of write_ref_sha1()), which
is called from the second loop, contains two more checks:
* It verifies that new_sha1 is a valid object
* If the reference being updated is a branch, it verifies that
new_sha1 points at a commit object (as opposed to a tag, tree, or
blob).
If either of these checks fails, the "transaction" is aborted during
the second loop. But this might happen after some reference updates
have already been permanently committed. In other words, the
all-or-nothing promise of "git update-ref --stdin" could be violated.
So these checks have to be moved to the first loop.
File descriptor exhaustion
==========================
The old code locked all of the references in the first loop, leaving
all of the lockfiles open until later loops. Since we might be
updating a lot of references, this could result in file descriptor
exhaustion.
The solution
============
After this patch, the code looks like
    for update in updates:
        acquire locks and check old_sha
        if changing value:
            write_ref_to_lockfile()
        else:
            close_ref()
    for update in updates:
        if changing value:
            commit_ref_update()
    for update in updates:
        if deleting value:
            unlink()
    rewrite packed-refs file
    for update in updates:
        if reference still locked:
            unlock_ref()
This fixes both problems:
1. The pre-checks in write_ref_to_lockfile() are now done in the first
loop, before any changes have been committed. If any of the checks
fails, the whole transaction can now be rolled back correctly.
2. All lockfiles are closed in the first loop immediately after they
are created (either by write_ref_to_lockfile() or by close_ref()).
This means that there is never more than one open lockfile at a
time, preventing file descriptor exhaustion.
To simplify the bookkeeping across loops, add a new REF_NEEDS_COMMIT
bit to update->flags, which keeps track of whether the corresponding
lockfile needs to be committed, as opposed to just unlocked. (Since
"struct ref_update" is internal to the refs module, this change is not
visible to external callers.)
This change fixes two tests in t1400.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Instead, work directly with update->flags. This has the advantage that
the REF_DELETING bit, set in the first loop, can be read in the second
loop instead of having to be recomputed. Plus, it was potentially
confusing having both update->flags and flags, which sometimes had
different values.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
And remove the function write_ref_sha1(), as it is no longer used.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Most of what it does is not needed at these call sites.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is the first step towards separating the checking and writing of
the new reference value from committing the change.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
During creation of the patch series, our discussion revealed that
we could have a more descriptive name for the prerequisite for the
test so it stays unique when other limits of ulimit are introduced.
Let's rename the existing prerequisite, which is about the stack size
limit, to the more explicit ULIMIT_STACK_SIZE.
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When running "add -e", if launching the editor fails, we do
not notice and continue as if the output is what the user
asked for. The likely case is that the editor did not touch
the contents at all, and we end up adding everything.
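The fix boils down to checking the editor's exit status instead of
ignoring it; a sketch (launch_editor() is the existing helper, but the
exact error message is illustrative):

    if (launch_editor(file, NULL, NULL))
            die(_("editing patch failed"));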
Reported-by: Russ Cox <rsc@golang.org>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>