core.excludesfile (defaulting to $XDG_CONFIG_HOME/git/ignore) is
supposed to be overridden by the repository-specific .git/info/exclude
file, but the order was swapped from the beginning. This belatedly
fixes it.
* jc/gitignore-precedence:
ignore: info/exclude should trump core.excludesfile
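For illustration, a minimal sketch of the fixed precedence (the global
ignore file path is the XDG default; file names are arbitrary):
    echo '*.log'     >>"$HOME/.config/git/ignore"   # core.excludesfile default
    echo '!keep.log' >>.git/info/exclude            # repository-specific exception
    >keep.log
    git status --short     # with the fix, keep.log is listed as untracked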
After "git add -N", the path appeared in output of "git diff HEAD"
and "git diff --cached HEAD", leading "git status" to classify it
as "Changes to be committed". Such a path, however, is not yet to
be scheduled to be committed. "git diff" showed the change to the
path as modification, not as a "new file", in the header of its
output.
Treat such paths as "yet to be added to the index but Git already
knows about them"; "git diff HEAD" and "git diff --cached HEAD"
should not talk about them, and "git diff" should show them as new
files yet to be added to the index.
* nd/diff-i-t-a:
diff-lib.c: adjust position of i-t-a entries in diff
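For example (a sketch; the file name is arbitrary):
    >newfile
    git add -N newfile
    git diff --cached HEAD    # with the fix: says nothing about newfile
    git diff                  # with the fix: shows newfile as a new file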
progress: treat "no terminal" as being in the foreground
Commit 85cb890 (progress: no progress in background,
2015-04-13) avoids sending progress from background
processes by checking that the process group id of the
current process is the same as that of the controlling
terminal.
If we don't have a terminal, however, this check never
succeeds, and we print no progress at all (until the final
"done" message). This can be seen when cloning a large
repository; instead of getting progress updates for
"counting objects", it will appear to hang then print the
final count.
We can fix this by treating an error return from tcgetpgrp()
as a signal to show the progress.
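For illustration (repository paths hypothetical): with stderr
redirected to a file, tcgetpgrp() on it fails, so before this fix even
explicitly requested progress was suppressed until the very end.
    git clone --no-local --progress /path/to/big-repo.git copy 2>clone.log
    grep 'Receiving objects' clone.log   # with the fix, intermediate updates
                                         # appear here, not just the final line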
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Use an automatic variable for recent_bitmaps, an array of pointers.
This way we don't allocate too much and don't have to free the memory
at the end. The old code over-allocated because it reserved enough
memory to store all of the structs even though it only points to them,
and it never freed that memory. 160 64-bit pointers take up 1280
bytes, which is not too
much to be placed on the stack.
MAX_XOR_OFFSET is turned into a preprocessor constant to make it
constant enough for use in a non-variable array declaration.
Noticed-by: Stefan Beller <stefanbeller@gmail.com>
Suggested-by: Jeff King <peff@peff.net>
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit 2879bc3 made the progress and verbosity options be sent to the
remote helper earlier than they previously were, but nothing afterwards
sends updates if the values are changed later with
transport_set_verbosity.
While transport_set_verbosity is the first thing done after creating
the transport for fetch and push, that was not the case for clone. So
commit 2879bc3 broke changing progress and verbosity for clone, but
only for URLs that require a remote helper (so not git:// URLs, for
instance).
Moving transport_set_verbosity to just after the transport is created
works around the issue.
Signed-off-by: Mike Hommey <mh@glandium.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Back when these tests were written, we wanted to make sure that Git
notices it is in a bare repository and "git show -s HEAD" would
refrain from complaining that HEAD might mean a file it sees in its
current working directory (because it does not). But the version of
Git back then did not behave well unless it was (redundantly) told
that it is inside a bare repository by exporting "GIT_DIR=.". The form
of the test we originally wanted to have was left commented out as
a reminder.
Nowadays the test as originally intended works, so add it to the
test suite. We'll keep the old test that explicitly sets GIT_DIR=.
to make sure that use case will not regress.
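A sketch of the shape of the re-enabled test (the test name and setup
are illustrative, not the exact script):
    test_expect_success 'no file/rev ambiguity check inside a bare repo' '
        git clone -s --bare .git foo.git &&
        (
            cd foo.git &&
            git show -s HEAD    # no GIT_DIR=. needed anymore
        )
    '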
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Even though "git clean" takes a pathspec to limit the part of the
working tree to be cleaned, it checked the paths it encountered
during its directory traversal with lstat(2) before checking whether
each path is within the pathspec.
Ignore paths outside the pathspec and proceed without checking them
with lstat(2). Even if such a path is unreadable due to e.g. EPERM,
"git clean" should not care.
Signed-off-by: David Turner <dturner@twopensource.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
There are several "raw formats", and describing --raw as "Generate the
raw format" in the documentation for git-log seems to imply that it
generates the raw *log* format.
Clarify the wording by saying "raw diff format" explicitly, and make a
special case for "git log": "git log --raw" does not just change the
format, it shows something which is not shown by default.
Signed-off-by: Matthieu Moy <Matthieu.Moy@imag.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since b814da8 (pull: add pull.ff configuration, 2014-01-15), git-pull
has supported setting --(no-)ff via the pull.ff configuration value.
However, as it only matches the string values "true" and "false", it
does not support other boolean aliases such as "on", "off", "1", and "0".
This is inconsistent with the merge.ff setting, which supports these
aliases.
Fix this by using the bool_or_string_config function to retrieve the
value of pull.ff.
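For example, after this change these two invocations behave the same
(previously the second one silently ignored the setting):
    git -c pull.ff=false pull
    git -c pull.ff=off pull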
Signed-off-by: Paul Tan <pyokagan@gmail.com>
Reviewed-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since b814da8 (pull: add pull.ff configuration, 2014-01-15), running
git-pull with the configuration pull.ff=false or pull.ff=only is
equivalent to passing --no-ff or --ff-only to git-merge, respectively.
However, if
pull.ff=true, no switch is passed to git-merge. This leads to the
confusing behavior where pull.ff=false or pull.ff=only is able to
override merge.ff, while pull.ff=true is unable to.
Fix this by adding the --ff switch if pull.ff=true, and add a test to
catch future regressions.
Furthermore, clarify in the documentation that pull.ff overrides
merge.ff.
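For example (a sketch):
    git config merge.ff false
    git config pull.ff true
    git pull    # with the fix, --ff is passed to git merge, so merge.ff=false
                # no longer forces a merge commit when fast-forwarding is possible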
Signed-off-by: Paul Tan <pyokagan@gmail.com>
Reviewed-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since efb779f (merge, pull: add '--(no-)log' command line option,
2008-04-06) git-pull supported the (--no-)log switch and would pass it
to git-merge.
96e9420 (merge: Make '--log' an integer option for number of shortlog
entries, 2010-09-08) implemented support for the --log=<n> switch, which
would explicitly set the number of shortlog entries. However, git-pull
does not recognize this option, and will instead pass it to git-fetch,
leading to "unknown option" errors.
Fix this by matching --log=* in addition to --log and --no-log.
Implement a test for this use case.
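For example, the following no longer aborts with an "unknown option"
error from git-fetch; the option is forwarded to git-merge instead
(branch name illustrative):
    git pull --log=5 origin master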
Signed-off-by: Paul Tan <pyokagan@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
a8c9bef (pull: improve advice for unconfigured error case, 2009-10-05)
fully established the current advice given by git-pull for the
different cases where git-fetch will not have anything marked for merge:
1. We fetched from a specific remote, and a refspec was given, but it
   ended up not fetching anything. This is usually because the user
   provided a wildcard refspec which had no matches on the remote end.
2. We fetched from a non-default remote, but didn't specify a branch to
   merge. We can't use the configured one because it applies to the
   default remote, and thus the user must specify the branches to merge.
3. We fetched from the branch's or repo's default remote, but:
   a. We are not on a branch, so there will never be a configured branch
      to merge with.
   b. We are on a branch, but there is no configured branch to merge
      with.
4. We fetched from the branch's or repo's default remote, but the
   configured branch to merge didn't get fetched (either it doesn't
   exist, or wasn't part of the configured fetch refspec)
Implement tests for the above 5 cases to ensure that the correct code
path is triggered for each of them.
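A sketch of the kind of test added for case 2 (the remote name is
arbitrary and the advice text is quoted loosely):
    test_expect_success 'fail if no branch specified for non-default remote' '
        git remote add test_remote . &&
        test_must_fail git pull test_remote 2>err &&
        test_i18ngrep "specify a branch on the command line" err
    '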
Signed-off-by: Paul Tan <pyokagan@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Many tests in t5520 used the following to test the contents of files:
    test `cat file` = expected
or
    test $(cat file) = expected
These 2 forms, however, will be affected by field splitting and,
depending on the value of $IFS, may be split into multiple arguments,
making the test fail in mysterious ways.
Replace the above 2 forms with:
test "$(cat file)" = expected
as quoting the command substitution will prevent field splitting.
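For example, with the default IFS:
    echo 'a  b' >file
    test $(cat file) = 'a  b'     # expands to: test a b = 'a  b' (an error)
    test "$(cat file)" = 'a  b'   # compares the whole line, as intended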
Signed-off-by: Paul Tan <pyokagan@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
`git add` of an empty file with a filter pops complaints from
`copy_fd` about a bad file descriptor.
This traces back to these lines in sha1_file.c:index_core:
    if (!size) {
        ret = index_mem(sha1, NULL, size, type, path, flags);
The problem here is that content to be added to the index can be
supplied from an fd, or from a memory buffer, or from a pathname. This
call is supplying a NULL buffer pointer and a zero size.
Downstream logic takes the complete absence of a buffer to mean the
data is to be found elsewhere -- for instance, these, from convert.c:
    if (params->src) {
        write_err = (write_in_full(child_process.in, params->src, params->size) < 0);
    } else {
        write_err = copy_fd(params->fd, child_process.in);
    }
"If there's a buffer, write from that, otherwise the data must be coming
from an open fd."
Perfectly reasonable logic in a routine that's going to write from
either a buffer or an fd.
So change `index_core` to supply an empty buffer when indexing an empty
file.
There's a patch out there that instead changes the logic quoted above to
take a `-1` fd to mean "use the buffer", but it seems to me that the
distinction between a missing buffer and an empty one carries intrinsic
semantics, whereas the logic change would be adapting the code to handle
incorrect arguments.
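A minimal reproduction of the original complaint (the filter name and
its command are arbitrary):
    git init -q
    echo '* filter=demo' >.gitattributes
    git config filter.demo.clean cat
    >empty
    git add empty    # before the fix: copy_fd complains about a bad file descriptor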
Signed-off-by: Jim Hill <gjthill@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we die() in http-backend, we call a custom handler that
writes an HTTP 500 response to stdout, then reports the
error to stderr. Our routines for writing out the HTTP
response may themselves die, leading to us entering die()
again.
When it was originally written, that was OK; our custom
handler keeps a variable to notice this and does not
recurse. However, since cd163d4 (usage.c: detect recursion
in die routines and bail out immediately, 2012-11-14), the
main die() implementation detects recursion before we even
get to our custom handler, and bails without printing
anything useful.
We can handle this case by doing two things:
1. Installing a custom die_is_recursing handler that
   allows us to enter up to one level of recursion. Only
   the first call to our custom handler will try to write
   out the error response. So if we die again, that is OK.
   If we end up dying more than that, it is a sign that we
   are in an infinite recursion.
2. Reporting the error to stderr before trying to write
   out the HTTP response. In the current code, if we do
   die() trying to write out the response, we'll exit
   immediately from this second die(), and never get a
   chance to output the original error (which is almost
   certainly the more interesting one; the second die is
   just going to be along the lines of "I tried to write
   to stdout but it was closed").
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Currently, there is only one attempt to acquire any lockfile, and if
the lock is held by another process, the locking attempt fails
immediately.
This is not such a limitation for loose reference files. First, they
don't take long to rewrite. Second, most reference updates have a
known "old" value, so if another process is updating a reference at
the same moment that we are trying to lock it, then probably the
expected "old" value will not longer be valid, and the update will
fail anyway.
But these arguments do not hold for packed-refs:
* The packed-refs file can be large and take significant time to
  rewrite.
* Many references are stored in a single packed-refs file, so it could
  be that the other process was changing a different reference than
  the one that we are interested in.
Therefore, it is much more likely for there to be spurious lock
conflicts in connection with the packed-refs file, resulting in
unnecessary command failures.
So, if the first attempt to lock the packed-refs file fails, continue
retrying for a configurable length of time before giving up. The
default timeout is 1 second.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Currently, there is only one attempt to lock a file. If it fails, the
whole operation fails.
But it might sometimes be advantageous to try acquiring a file lock a
few times before giving up. So add a new function,
hold_lock_file_for_update_timeout(), that allows a timeout to be
specified. Make hold_lock_file_for_update() a thin wrapper around the
new function.
If timeout_ms is positive, then retry for at least that many
milliseconds to acquire the lock. On each failed attempt, use select()
to wait for a backoff time that increases quadratically (capped at 1
second) and has a random component to prevent two processes from
getting synchronized. If timeout_ms is negative, retry indefinitely.
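An illustrative shell sketch of that retry strategy (using mkdir as a
stand-in for creating the lockfile; this is not the actual C code in
lockfile.c, and $RANDOM is a bashism):
    attempt=0
    until mkdir target.lock 2>/dev/null
    do
        attempt=$((attempt + 1))
        backoff_ms=$((attempt * attempt * 25))              # grows quadratically
        test "$backoff_ms" -gt 1000 && backoff_ms=1000      # capped at 1 second
        wait_ms=$((backoff_ms / 2 + RANDOM % backoff_ms))   # random component
        sleep "$(printf '%d.%03d' $((wait_ms / 1000)) $((wait_ms % 1000)))"
    done
    rmdir target.lock    # release the "lock" when done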
In a moment we will switch to using the new function when locking
packed-refs.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If you run "git rerere forget foo" in a repository that does
not have rerere enabled, git hits an internal error:
    $ git init -q
    $ git rerere forget foo
    fatal: BUG: attempt to commit unlocked object
The problem is that setup_rerere() will not actually take
the lock if the rerere system is disabled. We should notice
this and return early. We can return with a success code
here, because we know there is nothing to forget.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since 441ed41 ("git pull --tags": error out with a better message.,
2007-12-28), git pull --tags would print a different error message if
git-fetch did not return any merge candidates:
    It doesn't make sense to pull all tags; you probably meant:
            git fetch --tags
This is because at that time, git-fetch --tags would override any
configured refspecs, and thus there would be no merge candidates. The
error message was thus introduced to prevent confusion.
However, since c5a84e9 (fetch --tags: fetch tags *in addition to*
other stuff, 2013-10-30), git fetch --tags would fetch tags in addition
to any configured refspecs. Hence, if a no-merge-candidates situation
occurs, it is not because --tags was set. As such, this special error
message is now irrelevant.
To prevent confusion, remove this error message.
Signed-off-by: Paul Tan <pyokagan@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The former seems to just be syntactic sugar for the latter.
And as it's sugar that AsciiDoctor doesn't understand, it
would be nice to avoid it. Since there are only two spots,
and the resulting source is not significantly harder to
read, it's worth doing.
Note that this does slightly affect the generated HTML (it
has an extra newline), but the rendered result for both HTML
and docbook should be the same (since the newline is not
syntactically significant there).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git cat-file bl $blob" failed to barf even though there is no
object type that is "bl".
* jk/type-from-string-gently:
type_from_string_gently: make sure length matches
Teach the codepaths that read .gitignore and .gitattributes files
that these files, when encoded in UTF-8, may have a UTF-8 BOM marker
at the beginning; this brings them in line with what we already do for
configuration files.
* cn/bom-in-gitignore:
attr: skip UTF8 BOM at the beginning of the input file
config: use utf8_bom[] from utf.[ch] in git_parse_source()
utf8-bom: introduce skip_utf8_bom() helper
add_excludes_from_file: clarify the bom skipping logic
dir: allow a BOM at the beginning of exclude files
Access to objects in repositories that borrow from another one on a
slow NFS server became unnecessarily more expensive because recent
code became more cautious, in a naive way, about not losing objects
to pruning.
* jk/prune-mtime:
sha1_file: only freshen packs once per run
sha1_file: freshen pack objects before loose
reachable: only mark local objects as recent
We avoid setting core.worktree when the repository location is the
".git" directory directly at the top level of the working tree, but
the code misdetected the case in which the working tree is at the
root level of the filesystem (which arguably is a silly thing to
do, but still valid).
* jk/init-core-worktree-at-root:
init: don't set core.worktree when initializing /.git
The DECORATE_SHORT_REFS option given to load_ref_decorations()
affects the way a copy of the refname is stored for each decorated
commit, and this forces later steps like current_pointed_by_HEAD()
to adjust their behaviour based on this initial setting.
Instead, we can always store the full refname and then shorten it
when producing the output.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The previous step to teach "log --decorate" to show "HEAD -> master"
instead of "HEAD, master" when showing the commit at the tip of the
'master' branch, when the 'master' branch is checked out, did not
work for "log --decorate=full".
The commands in the "log" family prepare commit decorations for all
refs upfront, and the actual string used in a decoration depends on
how load_ref_decorations() is called very early in the process. By
default, "git log --decorate" stores names with common prefixes such
as "refs/heads" stripped; "git log --decorate=full" stores the full
refnames.
When the current_pointed_by_HEAD() function has to decide if "HEAD"
points at the branch a decoration describes, however, what was
passed to load_ref_decorations() to decide whether to strip (or keep)
such a common prefix is long lost. This makes it impossible to reliably
tell if a decoration that stores "refs/heads/master", for example,
is the 'master' branch (under "--decorate" with prefix omitted) or
'refs/heads/master' branch (under "--decorate=full").
Keep what was passed to load_ref_decorations() in a global next to
the global variable name_decoration, and use that to decide how to
match what was read from "HEAD" and what is in a decoration.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This makes sure that AsciiDoc does not turn them into links.
Regular AsciiDoc does not catch these cases, but AsciiDoctor
does treat them as links.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Text like "{foo}" triggers an AsciiDoc attribute; we have to
write "\{foo}" to suppress this. But when the "foo" is not a
syntactically valid attribute, we can skip the quoting. This
makes the source nicer to read, and looks better under
Asciidoctor. With AsciiDoc itself, this patch produces no
changes.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Older versions of AsciiDoc would convert the "--" in
"--option" into an emdash. According to 565e135
(Documentation: quote double-dash for AsciiDoc, 2011-06-29),
this is fixed in AsciiDoc 8.3.0. According to bf17126, we
don't support anything older than 8.4.1 anyway, so we no
longer need to worry about quoting.
Even though this does not change the output at all, there
are a few good reasons to drop the quoting:
1. It makes the source prettier to read.
2. We don't quote consistently, which may be confusing when
   reading the source.
3. Asciidoctor does not like the quoting, and renders a
   literal backslash.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
All of the other options in the list put the short and long forms as
two separate headings.
We can also drop the backslashing of `--`. It isn't used
elsewhere and is unnecessary for modern AsciiDoc (plus it
confuses Asciidoctor).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In AsciiDoc, it is OK to say:
    this is my title
    -------------------------
but AsciiDoctor is more strict. Let's match the underline to
the title (which also makes the source prettier to read).
The output from AsciiDoc is the same either way.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In list content that wants to continue to a second
paragraph, the "+" continuation and subsequent paragraph
need to be left-aligned. Otherwise AsciiDoc seems to insert
only a linebreak.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Curly braces open an "attribute" in AsciiDoc; if there's no
such attribute, strange things may happen. In this case, the
unquoted "{type}" causes AsciiDoc to omit an entire line of
text from the output. We can fix it by putting the whole
phrase inside literal backticks (which also lets us get rid
of ugly backslash escaping).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
AsciiDoc misparses some text that contains a `literal`
word followed by a fancy `single quote' word, and treats
everything from the start of the literal to the end of the
quote as a single-quoted phrase.
We can work around this by switching the latter to be a
literal, as well. In the first case, this is perhaps what
was intended anyway, as it makes us consistent with the
earlier literals in the same paragraph. In the second, the
output is arguably better, as we will format our commit
references as <code> blocks.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The Asciidoctor renderer is pickier than classic AsciiDoc,
and insists that the opening and closing lines of a code fence
be the same length.
Found with this hacky perl script:
    foreach my $fn (@ARGV) {
        open(my $fh, '<', $fn);
        my ($fence, $fence_lineno, $prev);
        while (<$fh>) {
            chomp;
            if (/^----+$/) {
                if ($fence_lineno) {
                    if ($_ ne $fence) {
                        print "$fn:$fence_lineno:mismatched fence: ",
                              length($fence), " != ", length($_), "\n";
                    }
                    $fence_lineno = undef;
                }
                # hacky check to avoid title-underlining
                elsif ($prev eq '' || $prev eq '+') {
                    $fence = $_;
                    $fence_lineno = $.;
                }
            }
            $prev = $_;
        }
    }
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* mh/write-refs-sooner-2.3:
ref_transaction_commit(): fix atomicity and avoid fd exhaustion
ref_transaction_commit(): remove the local flags variable
ref_transaction_commit(): inline call to write_ref_sha1()
rename_ref(): inline calls to write_ref_sha1() from this function
commit_ref_update(): new function, extracted from write_ref_sha1()
write_ref_to_lockfile(): new function, extracted from write_ref_sha1()
t7004: rename ULIMIT test prerequisite to ULIMIT_STACK_SIZE
update-ref: test handling large transactions properly
The old code was roughly
    for update in updates:
        acquire locks and check old_sha
    for update in updates:
        if changing value:
            write_ref_to_lockfile()
            commit_ref_update()
    for update in updates:
        if deleting value:
            unlink()
    rewrite packed-refs file
    for update in updates:
        if reference still locked:
            unlock_ref()
This has two problems.
Non-atomic updates
==================
The atomicity of the reference transaction depends on all pre-checks
being done in the first loop, before any changes have started being
committed in the second loop. The problem is that
write_ref_to_lockfile() (previously part of write_ref_sha1()), which
is called from the second loop, contains two more checks:
* It verifies that new_sha1 is a valid object
* If the reference being updated is a branch, it verifies that
  new_sha1 points at a commit object (as opposed to a tag, tree, or
  blob).
If either of these checks fails, the "transaction" is aborted during
the second loop. But this might happen after some reference updates
have already been permanently committed. In other words, the
all-or-nothing promise of "git update-ref --stdin" could be violated.
So these checks have to be moved to the first loop.
File descriptor exhaustion
==========================
The old code locked all of the references in the first loop, leaving
all of the lockfiles open until later loops. Since we might be
updating a lot of references, this could result in file descriptor
exhaustion.
The solution
============
After this patch, the code looks like
    for update in updates:
        acquire locks and check old_sha
        if changing value:
            write_ref_to_lockfile()
        else:
            close_ref()
    for update in updates:
        if changing value:
            commit_ref_update()
    for update in updates:
        if deleting value:
            unlink()
    rewrite packed-refs file
    for update in updates:
        if reference still locked:
            unlock_ref()
This fixes both problems:
1. The pre-checks in write_ref_to_lockfile() are now done in the first
   loop, before any changes have been committed. If any of the checks
   fails, the whole transaction can now be rolled back correctly.
2. All lockfiles are closed in the first loop immediately after they
   are created (either by write_ref_to_lockfile() or by close_ref()).
   This means that there is never more than one open lockfile at a
   time, preventing file descriptor exhaustion.
To simplify the bookkeeping across loops, add a new REF_NEEDS_COMMIT
bit to update->flags, which keeps track of whether the corresponding
lockfile needs to be committed, as opposed to just unlocked. (Since
"struct ref_update" is internal to the refs module, this change is not
visible to external callers.)
This change fixes two tests in t1400.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Instead, work directly with update->flags. This has the advantage that
the REF_DELETING bit, set in the first loop, can be read in the second
loop instead of having to be recomputed. Plus, it was potentially
confusing having both update->flags and flags, which sometimes had
different values.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
That was the last caller, so delete function write_ref_sha1().
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Most of what it does is unneeded from these call sites.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>