Remove an additional blank line between the
headline and the list of conflicted files after
doing a recursive merge.
Signed-off-by: Ralf Thielow <ralf.thielow@googlemail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is used by "git pull" to construct a merge message from a list of
remote refs. When pulling a redundant set of refs, however, it did not
filter them out, even though the merge itself discards them as unnecessary.
Teach the command to do the same for consistency.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Instead of waiting until we record the parents of the resulting merge, reduce
redundant parents (including our HEAD) immediately after reading them.
The change to t7602 illustrates the essence of the effect of this change.
The octopus merge strategy used to be fed with redundant commits only to
discard them as "up-to-date", but we no longer feed such redundant commits
to it and the affected test degenerates to a regular two-head merge.
And obviously the known-to-be-broken test in t6028 is now fixed.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Move the code around to populate the remoteheads list early in the process
before any decision regarding twohead vs octopus and fast-forwardness is
made.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This happens when git merge is run to merge multiple commits that are
descendants of the current HEAD (or are HEAD). We hit this while
updating master to origin/master, but accidentally called (while being
on master):
$ git merge master origin/master
Here is a minimal testcase:
$ git init a && cd a
$ echo a >a && git add a
$ git commit -minitial
$ echo b >a && git add a
$ git commit -msecond
$ git checkout master^
$ git merge master master
Fast-forwarding to: master
Already up-to-date with master
Merge made by the 'octopus' strategy.
a | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
$ git cat-file commit HEAD
tree eebfed94e75e7760540d1485c740902590a00332
parent bd679e85202280b263e20a57639a142fa14c2c64
author Michał Kiedrowicz <michal.kiedrowicz@gmail.com> 1329132996 +0100
committer Michał Kiedrowicz <michal.kiedrowicz@gmail.com> 1329132996 +0100
Merge branches 'master' and 'master' into HEAD
Signed-off-by: Michał Kiedrowicz <michal.kiedrowicz@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
update_local_ref() used to say "[new branch]" when we stored a new ref
outside the refs/tags/ hierarchy, but the message is more about what we
fetched, so use the refname at the origin to make that decision.
Also, only call a new ref a "branch" if it's under refs/heads/.
Signed-off-by: Marc Branchaud <marcnarc@xiplink.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This way, the function can look at the remote side to adjust the
informational message it gives.
Signed-off-by: Marc Branchaud <marcnarc@xiplink.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Due to the use of strncpy without explicit NUL termination,
we could end up passing names n1 or n2 that are not NUL-terminated
to queue_diff, which requires NUL-terminated strings.
Ensure that each is NUL terminated.
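For illustration, a minimal sketch of the pattern (names are
illustrative, not the actual diff code):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *name = "a/rather/long/path/name";
        char n1[8];

        /* strncpy does not NUL-terminate when the source fills the buffer */
        strncpy(n1, name, sizeof(n1) - 1);
        n1[sizeof(n1) - 1] = '\0';      /* terminate explicitly before use */

        printf("%s\n", n1);             /* now safe to treat n1 as a string */
        return 0;
    }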
Signed-off-by: Jim Meyering <meyering@redhat.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If a commit-ish passed to cherry-pick or revert happens to have a file
of the same name, git complains that the argument is ambiguous and
suggests using '--'. To make things worse, the '--' argument is removed
by parse_options, and so passing '--' has no effect.
Instead, always interpret cherry-pick/revert arguments as revisions.
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Various failure modes in the repository detection code path currently
quote the wrong directory in their error message. The working directory
is changed iteratively to the parent directory until a git repository is
found. If the working directory cannot be changed to the parent
directory for some reason, the detection gives up and prints an error
message. The error message should report the current working directory.
Instead of continually updating the 'cwd' variable, which is actually
used to remember the original working directory, the 'offset' variable
is used to keep track of the current working directory. At the point
where the affected error handling code is called, 'offset' already
points to the end of the parent of the working directory, rather than
the current working directory.
Fix this by explicitly using a variable 'offset_parent' and updating
'offset' concurrently with the call to chdir.
In a similar fashion, the function get_device_or_die() would print the
original working directory in case of a failure, rather than the current
working directory. Fix this as well by making use of the 'offset'
variable.
Lastly, replace the phrase 'mount parent' with 'mount point'. The former
appears to be a typo.
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Otherwise, passing an invalid option, git stash -v, gave:
git-stash: line 204: $'error: unknown option for \'stash save\':
$option\n To provide a message, use git stash save -- \'$option\'':
command not found
Signed-off-by: Ross Lagerwall <rosslagerwall@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since 88a21979c (fetch/pull: recurse into submodules when necessary) all
fetched commits are examined to see if they contain submodule changes (unless
configuration or command line options inhibit that). If a newly recorded
submodule commit is not present in the submodule, a fetch is run inside
it to download that commit.
Checking new refs was done in an else branch where it wasn't executed for
tags. This normally isn't a problem because tags are only fetched with
the branches they live on, so checking the new commits in the fetched
branches for submodule commits will also process all tags. But when a
specific tag is fetched (or the refspec contains refs/tags/), commits only
reachable by tags won't be searched for submodule commits, which is a bug.
Fix that by moving the code outside the if/else construct to handle new
tags just like any other ref. The performance impact of adding tags that
most of the time lie on a branch which is checked anyway for new submodule
commits should be minimal, as since 6859de4 (fetch: avoid quadratic loop
checking for updated submodules) all ref-tips are collected first and then
fed to a single rev-list.
Spotted-by: Jeff King <peff@peff.net>
Signed-off-by: Jens Lehmann <Jens.Lehmann@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We give the username and password to curl by sticking them
in a buffer of the form "user:pass" and handing the result
to CURLOPT_USERPWD. Since curl 7.19.1, there is a split
mechanism, where you can specify each element individually.
This has the advantage that a username can contain a ":"
character. It also is less code for us, since we can hand
our strings over to curl directly. And since curl 7.17.0 and
higher promise to copy the strings for us, we don't even
have to worry about memory ownership issues.
Unfortunately, we have to keep the ugly code for old curl
around, but as it is now nicely #if'd out, we can easily get
rid of it when we decide that 7.19.1 is "old enough".
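For illustration, the shape of the split (a sketch with a hypothetical
helper; 0x071301 is the version number of curl 7.19.1):

    #include <stdio.h>
    #include <curl/curl.h>

    static void set_curl_auth(CURL *curl, const char *user, const char *pass)
    {
    #if LIBCURL_VERSION_NUM >= 0x071301
        /* curl copies the strings, and user may even contain ':' */
        curl_easy_setopt(curl, CURLOPT_USERNAME, user);
        curl_easy_setopt(curl, CURLOPT_PASSWORD, pass);
    #else
        /* old curl: build "user:pass" ourselves and keep it valid */
        static char userpass[1024];
        snprintf(userpass, sizeof(userpass), "%s:%s", user, pass);
        curl_easy_setopt(curl, CURLOPT_USERPWD, userpass);
    #endif
    }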
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we have a credential to give to curl, we must copy it
into a "user:pass" buffer and then hand the buffer to curl.
Old versions of curl did not copy the buffer, and we were
expected to keep it valid. Newer versions of curl will copy
the buffer.
Our solution was to use a strbuf and detach it, giving
ownership of the resulting buffer to curl. However, this
meant that we were leaking the buffer on newer versions of
curl, since curl was just copying it and throwing away the
string we passed. Furthermore, when we replaced a
credential (e.g., because our original one was rejected), we
were also leaking on both old and new versions of curl.
This got even worse in the last patch, which started
replacing the credential (and thus leaking) on every http
request.
Instead, let's use a static buffer to make the ownership
more clear and less leaky. We already keep a static "struct
credential", so we are only handling a single credential at
a time, anyway.
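A minimal sketch of the idea (names are illustrative; git's strbuf API
is assumed):

    static struct strbuf up = STRBUF_INIT;  /* the single static buffer */

    static const char *user_pass(const char *user, const char *pass)
    {
        strbuf_reset(&up);                  /* reuse instead of leaking */
        strbuf_addf(&up, "%s:%s", user, pass);
        return up.buf;                      /* valid until the next call */
    }

Whether curl copies the string (new curl) or keeps pointing at it (old
curl), nothing leaks: the one buffer is simply overwritten when the
credential changes.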
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Between v1.7.1 and v1.7.2, 582aa00bdf switched the default "diff"
invocation not to use XDF_NEED_MINIMAL, but this breaks "git blame"
rather badly.
Allow the command line option to ask for an extra careful matching.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is the main test case for the original problem that triggered this
patch series. We create a repo with 50k tags and then test whether
git-clone over the smart HTTP protocol succeeds.
Note that we construct the repo in a slightly different way than the
original script used to reproduce the problem. This is because the
original script just created 50k tags all pointing to the same commit,
so if there was a bug where remote-curl.c was not passing all the refs
to fetch-pack we wouldn't know. The clone would succeed even if only one
tag was passed, because all the other tags were pointing at the same SHA
and would be considered present.
Instead we create a repo with 50k independent (dangling) commits and
then tag each of those commits with a unique tag. This way if one of the
tags is not given to fetch-pack, later stages of the clone would
complain about it.
This allows us to test both that the command line overflow was fixed and
that it was fixed in a way that doesn't leave out any of the
refs.
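For illustration, a sketch of that construction (not the actual test
code; it reuses the packed-refs trick shown elsewhere in this series):

    # sketch, assuming an initial commit already exists
    for i in `seq 50000`; do
        # unique message => unique SHA; no parent => dangling commit
        commit=$(echo $i | git commit-tree HEAD^{tree})
        echo $commit refs/tags/tag-$i >> .git/packed-refs
    done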
Signed-off-by: Ivan Todoroski <grnch@gmx.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
These test cases focus only on testing the parsing of refs on stdin,
without bothering with the rest of the fetch-pack machinery. We pass in
the refs using different combinations of command line and stdin and then
we watch fetch-pack's stdout to see whether it prints all the refs we
specified (but we ignore their order).
Signed-off-by: Ivan Todoroski <grnch@gmx.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Now that we can throw an arbitrary number of refs at fetch-pack using
its --stdin option, we use it in the remote-curl helper to bypass the
OS command line length limit.
Signed-off-by: Ivan Todoroski <grnch@gmx.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The syntax for mark references in fast-import demands either a SP
(space) or LF (end-of-line) after the reference. In some cases,
however, fast-import does not complain when garbage appears after a
mark reference.
Factor out parsing of mark references and complain if
errant characters are found. Also be a little more careful
when parsing "inline" and SHA1s, complaining if extra
characters appear or if the form of the dataref is unrecognized.
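A sketch of the stricter parsing (the helper name follows the commit's
intent; details of the real fast-import code are elided):

    #include <inttypes.h>   /* strtoumax */

    static uintmax_t parse_mark_ref(const char *p, char **endptr)
    {
        uintmax_t mark;

        if (*p != ':')
            die("missing :mark in dataref: %s", p);
        mark = strtoumax(p + 1, endptr, 10);
        if (*endptr == p + 1)
            die("missing mark number after ':': %s", p);
        return mark;
    }

Callers then insist that *endptr points at SP or LF and die otherwise,
so a dataref like ":1M" is reported instead of being silently misread.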
Buggy input can cause fast-import to produce the wrong output,
silently, without error. This makes it difficult to track
down buggy generators of fast-import streams. An example is
seen in the last line of this commit command:
commit refs/heads/S2
committer Name <name@example.com> 1112912893 -0400
data <<COMMIT
commit message
COMMIT
from :1M 100644 :103 hello.c
It is missing a newline and should be:
[...]
from :1
M 100644 :103 hello.c
What fast-import does is to produce a commit with the same
contents for hello.c as in refs/heads/S2^. What the buggy
program was expecting was the contents of blob :103. While
the resulting commit graph looked correct, the contents in
some commits were wrong.
Signed-off-by: Pete Wyckoff <pw@padd.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Check if we even have a parameter before checking its value. Running
this command without any arguments may not make a lot of sense, but
reacting with a segmentation fault is unduly harsh.
While we're at it, avoid casting argv by declaring it const right away.
Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add void to make it match its definition in submodule.c.
Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
HTTP authentication is currently handled by get_refs and fetch_ref, but
not by fetch_object, fetch_pack or fetch_alternates. In the
single-threaded case, this is not an issue, since get_refs is always
called first. It recognizes the 401 and prompts the user for
credentials, which will then be used subsequently.
If the curl multi interface is used, however, only the multi handle used
by get_refs will have credentials configured. Requests made by other
handles fail with an authentication error.
Fix this by setting CURLOPT_USERPWD whenever a slot is requested.
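A sketch of the fix's shape (acquire_slot and the credential variables
are illustrative stand-ins for the real http-walker code):

    static struct active_request_slot *get_active_slot(void)
    {
        struct active_request_slot *slot = acquire_slot();

        /* configure auth on every handle, not only the one get_refs used */
        if (user_name && user_pass)
            curl_easy_setopt(slot->curl, CURLOPT_USERPWD, user_pass);
        return slot;
    }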
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Create a repo with multiple loose objects in order to demonstrate http
authentication breakage.
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When git-rebase--interactive stops due to a conflict and the only change
to be committed is in a submodule, the test for whether there is
anything to be committed ignores the staged submodule change. This
leads rebase to skip creating the commit for the change.
While unstaged submodule changes should be ignored to avoid needing to
update submodules during a rebase, it is safe to remove the
--ignore-submodules option to diff-index because --cached ensures that
it is only checking the index. This was discussed in [1] and a test is
included to ensure that unstaged changes are still ignored correctly.
[1] http://thread.gmane.org/gmane.comp.version-control.git/188713
Signed-off-by: John Keeping <john@keeping.me.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* tr/cache-tree:
t0090: be prepared that 'wc -l' writes leading blanks
reset: update cache-tree data when appropriate
commit: write cache-tree data when writing index anyway
Refactor cache_tree_update idiom from commit
Test the current state of the cache-tree optimization
Add test-scrap-cache-tree
* cb/maint-t5541-make-server-port-portable:
t5541: check error message against the real port number used
remote-curl: Fix push status report when all branches fail
* tr/maint-bundle-boundary:
bundle: keep around names passed to add_pending_object()
t5510: ensure we stay in the toplevel test dir
t5510: refactor bundle->pack conversion
* tr/maint-bundle-long-subject:
t5704: match tests to modern style
strbuf: improve strbuf_get*line documentation
bundle: use a strbuf to scan the log for boundary commits
bundle: put strbuf_readline_fd in strbuf.c with adjustments
Otherwise:
/usr/bin/perl Makefile.PL PREFIX='/opt/git' INSTALL_BASE=''
Can't locate ExtUtils/MakeMaker.pm in @INC (@INC contains: ...) at Makefile.PL line 1.
BEGIN failed--compilation aborted at Makefile.PL line 1.
make[1]: *** [perl.mak] Error 2
make: *** [perl/perl.mak] Error 2
Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When execvp reports EACCES, it can be one of two things:
1. We found a file to execute, but did not have
permissions to do so.
2. We did not have permissions to look in some directory
in the $PATH.
In the former case, we want to consider this a
permissions problem and report it to the user as such (since
getting this for something like "git foo" is likely a
configuration error).
In the latter case, there is a good chance that the
inaccessible directory does not contain anything of
interest. Reporting "permission denied" is confusing to the
user (and prevents our usual "did you mean...?" lookup). It
also prevents git from trying alias lookup, since we do so
only when an external command does not exist (not when it
exists but has an error).
This patch detects EACCES from execvp, checks whether we are
in case (2), and if so converts errno to ENOENT. This
behavior matches that of "bash" (but not of simpler shells
that use execvp more directly, like "dash").
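A self-contained sketch of that detection (exists_in_PATH is our name
for the helper; the real run-command code may differ in detail):

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    static int exists_in_PATH(const char *file)
    {
        const char *path = getenv("PATH");
        char *copy, *dir, *state;
        int found = 0;

        if (!path)
            return 0;
        copy = strdup(path);
        for (dir = strtok_r(copy, ":", &state); dir && !found;
             dir = strtok_r(NULL, ":", &state)) {
            char buf[4096];
            snprintf(buf, sizeof(buf), "%s/%s", dir, file);
            if (!access(buf, F_OK))     /* visible despite the EACCES */
                found = 1;
        }
        free(copy);
        return found;
    }

    static int sane_execvp(const char *file, char *const argv[])
    {
        execvp(file, argv);             /* returns only on error */
        if (errno == EACCES && !strchr(file, '/'))
            errno = exists_in_PATH(file) ? EACCES : ENOENT;
        return -1;
    }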
Test stolen from Junio.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The POSIX standard specifies a return type of int for all six exec
functions. In addition, all exec functions return -1 on error, and
simply do not return on success. However, the current emulations of
the exec functions on mingw are declared with a void return type.
This would cause a problem should any code attempt to call the
exec function in a non-void context. In particular, if an exec
function were used in a conditional it would fail to compile.
In order to improve the fidelity of the emulation, we change the
return type of the mingw_execv[p] functions to int and return -1
on error.
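A sketch of the new shape (the error paths of the real mingw code are
elided):

    /* before: void mingw_execvp(const char *cmd, char *const *argv); */

    /* after: POSIX-shaped; returns -1 and sets errno on error, and
     * does not return at all on success */
    int mingw_execvp(const char *cmd, char *const *argv);
    #define execvp mingw_execvp

With this, code such as "if (execvp(cmd, argv) < 0)" compiles and
behaves the same way it does on POSIX systems.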
Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The user can say "git push" without specifying any refspec. When using
the "upstream" semantics via the push.default configuration, the user
wants this command to update the "upstream" branch of the current
branch, i.e. the branch at a remote repository that the current branch
is set to integrate with.
However, there are cases where such a "git push" that uses the "upstream"
semantics does not make sense:
- The current branch does not have branch.$name.remote configured. By
definition, "git push" that does not name where to push to will not
know where to push to. The user may explicitly say "git push $there",
but again, by definition, no branch at repository $there is set to
integrate with the current branch in this case and we wouldn't know
which remote branch to update.
- The current branch does have branch.$name.remote configured, but it
does not specify branch.$name.merge that names what branch at the
remote this branch integrates with. "git push" knows where to push in
this case (or the user may explicitly say "git push $remote" to tell us
where to push), but we do not know which remote branch to update.
- The current branch does have its remote and upstream branch configured,
but the user said "git push $there", where $there is not the remote
named by "branch.$name.remote". By definition, no branch at repository
$there is set to integrate with the current branch in this case, and
this push is not meant to update any branch at the remote repository
$there.
The first two cases were already checked correctly, but the third case was
not checked and we ended up updating the branch named branch.$name.merge
at repository $there, which was totally bogus.
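To illustrate the third case with a made-up configuration:

    $ git config branch.topic.remote origin
    $ git config branch.topic.merge refs/heads/topic
    $ git checkout topic
    $ git push              # updates 'topic' at origin, as configured
    $ git push elsewhere    # no branch at 'elsewhere' integrates with
                            # 'topic'; error out instead of updating
                            # refs/heads/topic there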
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When "add -p" sees an unmerged entry, it shows the combined
diff and then immediately skips the hunk. This can be
confusing in a variety of ways, depending on whether there
are other changes to stage (in which case you get the
superfluous combined diff output in between other hunks) or
not (in which case you get the combined diff and the program
exits immediately, rather than seeing "No changes").
The current behavior was not planned, and is just what the
implementation happens to do. Instead, let's explicitly
remove unmerged entries from our list of modified files, and
print a warning that we are ignoring them.
We can cheaply find which entries are unmerged by adding
"--raw" output to the "diff-files --numstat" we already run.
There is one non-obvious thing we must change when parsing
this combined output. Before this patch, when we saw a
numstat line for a file that did not have index changes, we
would create a new record with 'unchanged' in the 'INDEX'
field. Because "--raw" comes before "--numstat", we must
move this special-case down to the raw-line case (and it is
sufficient to move it rather than handle it in both places,
since any file which has a --numstat will also have a --raw
entry).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The description of "commit -t <file>" said the file is used "as the
initial version" of the commit message, but in the context of an SCM,
"version" is a loaded word that can needlesslyl confuse readers.
Explain the purpose of the mechanism without using "version".
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If a remote repo has too many tags (or branches), cloning it over the
smart HTTP transport can fail because remote-curl.c puts all the refs
from the remote repo on the fetch-pack command line. This can make the
command line longer than the global OS command line limit, causing
fetch-pack to fail.
This is especially a problem on Windows, where the command line limit
is orders of magnitude shorter than on Linux. There are already real
repos out there that msysGit cannot clone over smart HTTP due to this
problem.
Here is an easy way to trigger this problem:
git init too-many-refs
cd too-many-refs
echo bla > bla.txt
git add .
git commit -m test
sha=$(git rev-parse HEAD)
tag=$(perl -e 'print "bla" x 30')
for i in `seq 50000`; do
echo $sha refs/tags/$tag-$i >> .git/packed-refs
done
Then share this repo over the smart HTTP protocol and try cloning it:
$ git clone http://localhost/.../too-many-refs/.git
Cloning into 'too-many-refs'...
fatal: cannot exec 'fetch-pack': Argument list too long
50k tags is obviously an absurd number, but it is required to
demonstrate the problem on Linux because it has a much more generous
command line limit. On Windows the clone fails with as few as 500
tags in the above loop, which is getting uncomfortably close to the
number of tags you might see in real long-lived repos.
This is not just theoretical, msysGit is already failing to clone our
company repo due to this. It's a large repo converted from CVS, with
nearly 10 years of history.
Four possible solutions were discussed on the Git mailing list (in no
particular order):
1) Call fetch-pack multiple times with smaller batches of refs.
This was dismissed as inefficient and inelegant.
2) Add option --refs-fd=$n to pass an fd from which to read the refs.
This was rejected because inheriting descriptors other than
stdin/stdout/stderr through exec() is apparently problematic on Windows,
plus it would require changes to the run-command API to open extra
pipes.
3) Add option --refs-from=$tmpfile to pass the refs using a temp file.
This was not favored because of the temp file requirement.
4) Add option --stdin to pass the refs on stdin, one per line.
In the end this option was chosen as the most efficient and most
desirable from a scripting perspective.
There was however a small complication when using stdin to pass refs to
fetch-pack. The --stateless-rpc option to fetch-pack also uses stdin for
communication with the remote server.
If we are going to sneak refs on stdin line by line, it would have to be
done very carefully in the presence of --stateless-rpc, because when
reading refs line by line we might read ahead too much data into our
buffer and eat some of the remote protocol data which is also coming on
stdin.
One way to solve this would be to refactor get_remote_heads() in
fetch-pack.c to accept a residual buffer from our stdin line parsing
above, but this function is used in several places so other callers
would be burdened by this residual buffer interface even when most of
them don't need it.
In the end we settled on the following solution:
If --stdin is specified without --stateless-rpc, fetch-pack would read
the refs from stdin one per line, in a script-friendly format.
However if --stdin is specified together with --stateless-rpc,
fetch-pack would read the refs from stdin in packetized format
(pkt-line) with a flush packet terminating the list of refs. This way we
can read the exact number of bytes that we need from stdin, and then
get_remote_heads() can continue reading from the same fd without losing
a single byte of remote protocol data.
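For illustration, the packetized form might be written like this (a
sketch using the pkt-line helpers as they existed then; each packet
starts with a 4-hex-digit length that counts the four length bytes
themselves):

    /* one pkt-line per ref, then a flush packet */
    for (i = 0; i < nr_heads; i++)
        packet_write(fd, "%s\n", heads[i]); /* e.g. "0013refs/tags/v1.0\n" */
    packet_flush(fd);                       /* writes "0000" */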
This way the --stdin option only loses generality and scriptability when
used together with --stateless-rpc, which is not easily scriptable
anyway because it also uses pkt-line when talking to the remote server.
Signed-off-by: Ivan Todoroski <grnch@gmx.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>