Change URL handling to allow external protocol handlers to implement
new protocols without the '::' syntax if the helper name does not conflict
with any built-in protocol.
foo:// now invokes git-remote-foo with foo:// as the URL.
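To illustrate the naming rule (a standalone sketch, not the actual
transport.c code; the helper and URLs are made up):

    #include <stdio.h>
    #include <string.h>

    /* Illustrative only: derive the remote-helper name from a URL following
     * the rule above.  "foo::<url>" uses "foo" and passes <url> to the
     * helper; "foo://..." uses "foo" and passes the whole URL unchanged. */
    static void show_helper(const char *url)
    {
        char name[64];
        const char *sep = strstr(url, "::");
        const char *scheme = strstr(url, "://");

        if (sep && (!scheme || sep < scheme)) {
            snprintf(name, sizeof(name), "%.*s", (int)(sep - url), url);
            printf("git-remote-%s gets URL '%s'\n", name, sep + 2);
        } else if (scheme) {
            snprintf(name, sizeof(name), "%.*s", (int)(scheme - url), url);
            printf("git-remote-%s gets URL '%s'\n", name, url);
        } else {
            printf("'%s' is not handled by an external helper here\n", url);
        }
    }

    int main(void)
    {
        show_helper("foo://example.com/repo");   /* git-remote-foo, URL kept as-is */
        show_helper("foo::bar");                 /* git-remote-foo, URL is "bar"   */
        return 0;
    }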
Signed-off-by: Ilari Liusvaara <ilari.liusvaara@elisanet.fi>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* master: (334 commits)
bash: update 'git commit' completion
Git 1.6.5.5
Fix diff -B/--dirstat miscounting of newly added contents
reset: improve worktree safety valves
Documentation: Avoid use of xmlto --stringparam
archive: clarify description of path parameter
rerere: don't segfault on failure to open rr-cache
Prepare for 1.6.5.5
gitweb: Describe (possible) gitweb.js minification in gitweb/README
Documentation: xmlto 0.0.18 does not know --stringparam
Fix crasher on encountering SHA1-like non-note in notes tree
t9001: use older Getopt::Long boolean prefix '--no' rather than '--no-'
t4201: use ISO8859-1 rather than ISO-8859-1
Git 1.6.5.4
Unconditionally set man.base.url.for.relative.links
Documentation/Makefile: allow man.base.url.for.relative.link to be set from Make
Git 1.6.6-rc1
git-pull.sh: Fix call to git-merge for new command format
Prepare for 1.6.5.4
merge: do not add standard message when message is given with -m option
...
Conflicts:
Documentation/git-remote-helpers.txt
Makefile
builtin-ls-remote.c
builtin-push.c
transport-helper.c
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* sp/smart-http: (37 commits)
http-backend: Let gcc check the format of more printf-type functions.
http-backend: Fix access beyond end of string.
http-backend: Fix bad treatment of uintmax_t in Content-Length
t5551-http-fetch: Work around broken Accept header in libcurl
t5551-http-fetch: Work around some libcurl versions
http-backend: Protect GIT_PROJECT_ROOT from /../ requests
Git-aware CGI to provide dumb HTTP transport
http-backend: Test configuration options
http-backend: Use http.getanyfile to disable dumb HTTP serving
test smart http fetch and push
http tests: use /dumb/ URL prefix
set httpd port before sourcing lib-httpd
t5540-http-push: remove redundant fetches
Smart HTTP fetch: gzip requests
Smart fetch over HTTP: client side
Smart push over HTTP: client side
Discover refs via smart HTTP server when available
http-backend: more explicit LocationMatch
http-backend: add example for gitweb on same URL
http-backend: use mod_alias instead of mod_rewrite
...
Conflicts:
.gitignore
remote-curl.c
The common case for remote helpers will be to import some repository
which can be specified by a single URL. Support this use case by
allowing users to say:
git clone hg::https://soc.googlecode.com/hg/ soc
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Sverre Rabbelier <srabbelier@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If this is set, the url is not required, and the transport always uses
a helper named "git-remote-<value>".
It is a separate configuration option in order to allow a sensible
configuration for foreign systems which either have no meaningful urls
for repositories or which require urls that do not specify the system
used by the repository at that location. However, this only affects
how the name of the helper is determined, not anything about the
interaction with the helper, and the construction is such that, if the
foreign scm does happen to use an identically named url method, a url with that
method may be used directly.
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Sverre Rabbelier <srabbelier@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This allows the transport to use the null sha1 for a ref reported to
be present in the remote repository to indicate that a ref exists but
its actual value is presently unknown and will be set if the objects
are fetched.
Also adds documentation to the API to specify exactly what the methods
should do and how they should interpret arguments.
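For illustration, a standalone sketch of the convention (git itself uses
its null_sha1 constant and is_null_sha1() helper rather than this
stand-in):

    #include <stdio.h>
    #include <string.h>

    #define SHA1_RAWSZ 20

    struct fake_ref {
        char name[64];
        unsigned char old_sha1[SHA1_RAWSZ];
    };

    /* A ref whose value is not yet known carries the all-zero SHA-1. */
    static int value_known(const struct fake_ref *ref)
    {
        static const unsigned char null_sha1[SHA1_RAWSZ]; /* all zeros */
        return memcmp(ref->old_sha1, null_sha1, SHA1_RAWSZ) != 0;
    }

    int main(void)
    {
        /* The helper reported that the ref exists, but not its value. */
        struct fake_ref ref = { "refs/heads/master", { 0 } };
        printf("%s: value %s\n", ref.name,
               value_known(&ref) ? "known" : "unknown until objects are fetched");
        return 0;
    }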
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Sverre Rabbelier <srabbelier@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
For fetch and ls-remote, which use the first url of a remote, have
transport_get() determine this by passing a remote and passing NULL
for the url. For push, which uses every url of a remote, use each url
in turn if there are any, and use NULL if there are none.
This will allow the transport code to do something different if the
location is not specified with a url.
Also, have the message for a fetch say "foreign" if there is no url.
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Sverre Rabbelier <srabbelier@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The git-remote-curl backend detects if the remote server supports
the git-receive-pack service, and if so, runs git-send-pack in a
pipe to dump the command and pack data as a single POST request.
The advertisements from the server that were obtained during the
discovery are passed into git-send-pack before the POST request
starts. This permits git-send-pack to operate largely unmodified.
For smaller packs (those under 1 MiB) an HTTP/1.0 POST with a
Content-Length is used, permitting interaction with any server.
The 1 MiB limit is arbitrary, but is sufficient to fit most deltas
created by human authors against text sources with the occasional
small binary file (e.g. a few KiB icon image). The configuration
option http.postBuffer can be used to increase (or shrink) this
buffer if the default is not sufficient.
For larger packs which cannot be spooled entirely into the helper's
memory space (due to http.postBuffer being too small), the POST
request requires HTTP/1.1 and sets "Transfer-Encoding: chunked".
This permits the client to upload an unknown amount of data in one
HTTP transaction without needing to pregenerate the entire pack
file locally.
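Roughly, the choice looks like this (a simplified sketch using plain
libcurl, not the actual remote-curl.c code; the function name and the
omitted read callback are illustrative):

    #include <curl/curl.h>

    #define DEFAULT_POST_BUFFER (1024 * 1024)  /* mirrors http.postBuffer's 1 MiB default */

    static void setup_post(CURL *curl, struct curl_slist **headers,
                           const char *buf, size_t len)
    {
        if (len <= DEFAULT_POST_BUFFER) {
            /* Small pack: a single buffered POST with Content-Length. */
            curl_easy_setopt(curl, CURLOPT_POSTFIELDS, buf);
            curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)len);
        } else {
            /* Large pack: stream it; "Transfer-Encoding: chunked" needs
             * HTTP/1.1 but avoids spooling the whole pack in memory.
             * A CURLOPT_READFUNCTION callback would supply the data. */
            *headers = curl_slist_append(*headers, "Transfer-Encoding: chunked");
            curl_easy_setopt(curl, CURLOPT_POST, 1L);
        }
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, *headers);
    }

    int main(void)
    {
        CURL *curl = curl_easy_init();
        struct curl_slist *headers = NULL;
        static const char small[] = "...";

        if (!curl)
            return 1;
        setup_post(curl, &headers, small, sizeof(small));
        curl_slist_free_all(headers);
        curl_easy_cleanup(curl);
        return 0;
    }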
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
cmd_ls_remote() was calling transport_get() with a NULL remote and a
non-NULL url in the case where it was run outside a git
repository. This involved a bunch of ill-tested special
cases. Instead, simply get the struct remote for the URL with
remote_get(), which works fine outside a git repository, and can also
take global options into account.
This fixes a tiny and obscure bug where "git ls-remote" without a repo
didn't support global url.*.insteadOf, even though "git clone" and
"git ls-remote" in any repo did.
Also, enforce that all callers provide a struct remote to transport_get().
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The remote helper interface now supports the push capability,
which can be used to ask the implementation to push one or more
specs to the remote repository. For remote-curl we implement this
by calling the existing WebDAV based git-http-push executable.
Internally the helper interface uses the push_refs transport hook
so that the complexity of the refspec parsing and matching can be
reused between remote implementations. When possible, however, the
helper protocol uses the source ref name rather than the source SHA-1,
thereby allowing the helper to access this name if it is useful.
From Clemens Buchacher <drizzd@aon.at>:
update http tests according to remote-curl capabilities
o Pushing packed refs is now fixed.
o The transport helper fails if refs are already up-to-date. Add
a test for that.
o The transport helper will notice if refs are already
up-to-date. We therefore need to update server info in the
unpacked-refs test.
o The transport helper will purge deleted branches automatically.
o Use a variable ($ORIG_HEAD) instead of full SHA-1 name.
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
CC: Mike Hommey <mh@glandium.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The variable is assigned unconditionally in print_push_status, but
print_push_status is not reached by all codepaths. In particular, this
fixes a bug where "git push ... nonexisting-branch" was complaining about
non-fast forward.
Signed-off-by: Matthieu Moy <Matthieu.Moy@imag.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* db/vcs-helper:
Makefile: remove remnant of separate http/https/ftp helpers
Use a clearer style to issue commands to remote helpers
Make the "traditionally-supported" URLs a special case
Makefile: install hardlinks for git-remote-<scheme> supported by libcurl if possible
Makefile: do not link three copies of git-remote-* programs
Makefile: git-http-fetch does not need expat
http-fetch: Fix Makefile dependencies
Add transport native helper executables to .gitignore
git-http-fetch: not a builtin
Use an external program to implement fetching with curl
Add support for external programs for handling native fetches
Instead of trying to make http://, https://, and ftp:// URLs
indicative of some sort of pattern of transport helper usage, make
them a special case which runs the "curl" helper, and leave the
mechanism by which arbitrary helpers will be chosen entirely to future
work.
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This should have been part of 481c7a6, whose goal was to
make "git push -q" silent unless there is an error.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If all refs sent by the remote repo during a fetch are reachable
locally, then no further conversation is performed with the remote. This
check is skipped when the --depth argument is provided to allow the
deepening of a shallow clone whose corresponding remote repo has not
changed.
However, some additional filtering was added in commit c29727d5 to
remove those refs which are equal on both sides. If the remote repo has
not changed, then the list of refs to give the remote process becomes
empty and simply attempting to deepen a shallow repo always fails.
Let's stop being smart in that case and simply send the whole list over
when that condition is met. The remote will do the right thing anyways.
Test cases for this issue are also provided.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* maint:
push: point to 'git pull' and 'git push --force' in case of non-fast forward
Documentation: add: <filepattern>... is optional
Change mentions of "git programs" to "git commands"
Documentation: merge: one <remote> is required
help.c: give correct structure's size to memset()
'git push' failing because of non-fast forward is a very common situation,
and a beginner does not necessarily understand "fast forward" immediately.
Add a new section to the git-push documentation and refer them to it.
Signed-off-by: Matthieu Moy <Matthieu.Moy@imag.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Nanako Shiraishi <nanako3@lavabit.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* js/run-command-updates:
api-run-command.txt: describe error behavior of run_command functions
run-command.c: squelch a "use before assignment" warning
receive-pack: remove unnecessary run_status report
run_command: report failure to execute the program, but optionally don't
run_command: encode deadly signal number in the return value
run_command: report system call errors instead of returning error codes
run_command: return exit code as positive value
MinGW: simplify waitpid() emulation macros
When --quiet is given, the user generally only wants to see
errors. So let's suppress printing the ref status table
unless there is an error, in which case we print out the
whole table.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When pushing over the git protocol, pack-objects gives
progress reports about the pack being sent. If "push" is
given the --quiet flag, it now passes "-q" to pack-objects,
suppressing this output.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Use the transport native helper mechanism to fetch by http (and ftp, etc).
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* tr/die_errno:
Use die_errno() instead of die() when checking syscalls
Convert existing die(..., strerror(errno)) to die_errno()
die_errno(): double % in strerror() output just in case
Introduce die_errno() that appends strerror(errno) to die()
In the case where a program was not found, it was still the task of the
caller to report an error to the user. Usually, this is an interesting case,
but only a few callers actually reported a specific error (though many call
sites report a generic error message regardless of the cause).
With this change the error is reported by run_command, but since there is
one call site in git.c that does not want that, an option is added to
struct child_process, which is used to turn the error off.
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The motivation for this change is that system call failures are serious
errors that should be reported to the user, but only a few callers took the
trouble to decode the error codes that the functions returned into error
messages.
If they did at all, only an unspecific error message was given. A prominent
example is this:
$ git upload-pack . | :
fatal: unable to run 'git-upload-pack'
In this example, git-upload-pack, the external command invoked through the
git wrapper, dies due to SIGPIPE, but the git wrapper does not bother to
report the real cause. In fact, this very error message is copied to the
syslog if git-daemon's client aborts the connection early.
With this change, system call failures are reported immediately after the
failure and only a generic failure code is returned to the caller. In the
above example the error is now to the point:
$ git upload-pack . | :
error: git-upload-pack died of signal
Note that there is no error report if the invoked program terminated with
a non-zero exit code, because it is reasonable to expect that the invoked
program has already reported an error. (But many run_command call sites
nevertheless write a generic error message.)
There was one special return code that was used to identify the case where
run_command failed because the requested program could not be exec'd. This
special case is now treated like a system call failure with errno set to
ENOENT. No error is reported in this case, because the call site in git.c
expects this as a normal result. Therefore, the callers that carefully
decoded the return value still check for this condition.
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If --porcelain is used git-push will produce machine-readable output. The
output status line for each ref will be tab-separated and sent to stdout instead
of stderr. The full symbolic names of the refs will be given. For example:

    $ git push --dry-run --porcelain master :foobar 2>/dev/null \
        | perl -pe 's/\t/ TAB /g'
    = TAB refs/heads/master:refs/heads/master TAB [up to date]
    - TAB :refs/heads/foobar TAB [deleted]
Signed-off-by: Larry D'Anna <larry@elder-gods.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Lots of die() calls did not actually report the kind of error, which
can leave the user confused as to the real problem. Use die_errno()
where we check a system/library call that sets errno on failure, or
one of the following that wrap such calls:
    Function              Passes on error from
    --------              --------------------
    odb_pack_keep         open
    read_ancestry         fopen
    read_in_full          xread
    strbuf_read           xread
    strbuf_read_file      open or strbuf_read
    strbuf_readlink       readlink
    write_in_full         xwrite
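The conversion itself follows a simple pattern. A standalone sketch with
a stand-in for die_errno() (illustrative message and path):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Stand-in for git's die_errno(): report the message plus
     * strerror(errno), then exit.  (The real helper also doubles any '%'
     * coming from strerror(), as noted in the series above.) */
    static void die_errno_demo(const char *fmt, ...)
    {
        va_list ap;
        int saved = errno;

        fputs("fatal: ", stderr);
        va_start(ap, fmt);
        vfprintf(stderr, fmt, ap);
        va_end(ap);
        fprintf(stderr, ": %s\n", strerror(saved));
        exit(128);
    }

    int main(void)
    {
        const char *path = "does-not-exist";
        int fd = open(path, O_RDONLY);

        if (fd < 0)
            /* before: die("cannot open '%s': %s", path, strerror(errno)); */
            die_errno_demo("cannot open '%s'", path);
        return 0;
    }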
Signed-off-by: Thomas Rast <trast@student.ethz.ch>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* rc/http-push: (22 commits)
http*: add helper methods for fetching objects (loose)
http*: add helper methods for fetching packs
http: use new http API in fetch_index()
http*: add http_get_info_packs
http-push.c::fetch_symref(): use the new http API
http-push.c::remote_exists(): use the new http API
http.c::http_fetch_ref(): use the new http API
transport.c::get_refs_via_curl(): use the new http API
http.c: new functions for the http API
http: create function end_url_with_slash
http*: move common variables and macros to http.[ch]
transport.c::get_refs_via_curl(): do not leak refs_url
Don't expect verify_pack() callers to set pack_size
http-push: do not SEGV after fetching a bad pack idx file
http*: copy string returned by sha1_to_hex
http-walker: verify remote packs
http-push, http-walker: style fixes
t5550-http-fetch: test fetching of packed objects
http-push: fix missing "#ifdef USE_CURL_MULTI" around "is_running_queue"
http-push: send out fetch requests on queue
...
Avoid code duplication by moving list tail search to match_refs().
This does not change the semantics, except for http-push, which now inserts
to the front of the ref list in order to get rid of the global remote_tail.
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* ar/unlink-err:
print unlink(2) errno in copy_or_link_directory
replace direct calls to unlink(2) with unlink_or_warn
Introduce an unlink(2) wrapper which gives warning if unlink failed
In preparation for being used when the ref object is not available
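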
Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This helps to notice when something's going wrong, especially on
systems which lock open files.
I used the following criteria when selecting the code for replacement:
- it was already printing a warning for the unlink failures
- it is in a function which is already printing something or is
  called from such a function
- it is in a static function returning void, and the function is only
  called from a builtin main function (cmd_)
- it is in a function which handles emergency exit (signal handlers)
- it is in a function which is obviously cleaning up the lockfiles
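For illustration, a standalone stand-in that mimics the wrapper's
behaviour (the real unlink_or_warn() lives in git's wrapper code):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Returns unlink()'s result, but prints a warning when removal fails
     * for a reason other than the file already being absent. */
    static int unlink_or_warn_demo(const char *path)
    {
        int rc = unlink(path);
        if (rc && errno != ENOENT)
            fprintf(stderr, "warning: unable to unlink %s: %s\n",
                    path, strerror(errno));
        return rc;
    }

    int main(void)
    {
        unlink_or_warn_demo("some-stale-lockfile");  /* illustrative path */
        return 0;
    }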
Signed-off-by: Alex Riesen <raa.lkml@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When pulling from a remote, the full URL including username
is by default added to the commit message. Since it adds
very little value but could be used by malicious people to
glean valid usernames (with matching hostnames), we're far
better off just stripping the username before storing the
remote URL locally.
Note that this patch has no lasting visible effect when
"git pull" does not create a merge commit. It simply
alters what gets written to .git/FETCH_HEAD, which is used
by "git merge" to automagically create its messages.
Signed-off-by: Andreas Ericsson <ae@op5.se>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Earlier, the rsync tests were disabled by default, as they needed a
running rsyncd daemon. This was only due to the limitation that our
rsync transport only allowed full URLs of the form
    rsync://<host>/<path>

Relaxing the URLs to allow

    rsync:<path>

permitted changing the tests so that they run whenever rsync is available,
without requiring a fully configured and running rsyncd.
While at it, the tests were fixed so that they run in directories with a
space in their name.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
For native-protocol pushes (and other protocols as they are converted
to the new method), this moves the refspec match, tracking update, and
report message out of send-pack() and into transport_push(), where it
can be shared completely with other protocols. This also makes fetch
and push more similar in terms of what code is in what file.
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A new inline function is_dot_or_dotdot is used to check if the
directory name is either "." or "..". It returns a non-zero value if
the given string is "." or "..". It's applicable to a lot of Git
source code.
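A standalone sketch of such a helper (the real one is a static inline in
git's headers; this copy is only for illustration):

    #include <stdio.h>

    /* Non-zero when name is exactly "." or ".." -- the two directory
     * entries readdir() loops usually want to skip. */
    static inline int is_dot_or_dotdot_demo(const char *name)
    {
        return name[0] == '.' &&
               (name[1] == '\0' || (name[1] == '.' && name[2] == '\0'));
    }

    int main(void)
    {
        const char *samples[] = { ".", "..", ".git", "file" };
        for (int i = 0; i < 4; i++)
            printf("%-5s -> %d\n", samples[i], is_dot_or_dotdot_demo(samples[i]));
        return 0;
    }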
Signed-off-by: Alexander Potashev <aspotashev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
With all calls to alloc_ref() gone, we can remove it and then we're free
to give alloc_ref_from_str() the shorter name. It's a much nicer
interface, as the callers always need to have a name string when they
allocate a ref anyway and don't need to calculate and pass its length+1
any more.
Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Replace pairs of alloc_ref() and strcpy() with alloc_ref_from_str(),
simplifying the code.
In connect.c, also a pair of alloc_ref() and memcpy() is replaced --
the additional cost of a strlen() call should not have too much of an
impact. Consistency and simplicity are more important.
In remote.c, the code was allocating 11 bytes more than needed for
the name part, but I couldn't see them being used for anything.
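A sketch of the two call patterns side by side (cut-down types; git's
real struct ref and allocators differ):

    #include <stdlib.h>
    #include <string.h>

    /* Sketch only: a cut-down "struct ref" with the name stored in a
     * flexible array, as in git. */
    struct demo_ref {
        struct demo_ref *next;
        char name[];            /* allocated together with the struct */
    };

    /* Old style: caller passes strlen(name) + 1 and copies the name itself. */
    static struct demo_ref *alloc_ref_old(unsigned namelen)
    {
        return calloc(1, sizeof(struct demo_ref) + namelen);
    }

    /* New style: allocate from the string directly, no length arithmetic
     * (this mirrors what the message above calls alloc_ref_from_str()). */
    static struct demo_ref *alloc_ref_from_str_demo(const char *name)
    {
        struct demo_ref *ref = alloc_ref_old(strlen(name) + 1);
        if (ref)
            strcpy(ref->name, name);
        return ref;
    }

    int main(void)
    {
        const char *name = "refs/heads/master";

        /* before: two steps, and it is easy to get the "+ 1" wrong */
        struct demo_ref *a = alloc_ref_old(strlen(name) + 1);
        if (!a)
            return 1;
        strcpy(a->name, name);

        /* after: one call, no length arithmetic at the call site */
        struct demo_ref *b = alloc_ref_from_str_demo(name);

        free(a);
        free(b);
        return 0;
    }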
Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The new -v option forces the progressbar, even in case the output
is not a terminal. This can be useful if the caller is an IDE or
wrapper which wants to scrape the progressbar from stderr and show
its information in a different format.
Signed-off-by: Miklos Vajna <vmiklos@frugalware.org>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
* maint:
Update release notes for 1.6.0.3
Teach rebase -i to honor pre-rebase hook
docs: describe pre-rebase hook
do not segfault if make_cache_entry failed
make prefix_path() never return NULL
fix bogus "diff --git" header from "diff --no-index"
Fix fetch/clone --quiet when stdout is connected
builtin-blame: Fix blame -C -C with submodules.
bash: remove fetch, push, pull dashed form leftovers
Conflicts:
diff.c
Fixes the `git clone --quiet` issue raised by Dave Jones in
http://marc.info/?l=git&m=121529226023180&w=2
With this simple patch applied we no longer see the following remote
messages as no-progress is correctly sent to the remote site:
remote: Counting objects: 84102, done.
remote: Compressing objects: 100% (24720/24720), done.
remote: Total 84102 (delta 60949), reused 80810 (delta 57900)
Signed-off-by: Tuncer Ayaz <tuncer.ayaz@gmail.com>
Acked-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
"git push" enhancement allows the receiving end to report not only its own
refs but refs in repositories it borrows from via the alternate object
store mechanism. By telling the sender that objects reachable from these
extra refs are already complete in the receiving end, the number of
objects that need to be transfered can be cut down.
These entries are sent over the wire with string ".have", instead of the
actual names of the refs. This string was chosen so that they are ignored
by older programs at the sending end. If we sent some random but valid
looking refnames for these entries, "matching refs" rule (triggered when
running "git push" without explicit refspecs, where the sender learns what
refs the receiver has, and updates only the ones with the names of the
refs the sender also has) and "delete missing" rule (triggered when "git
push --mirror" is used, where the sender tells the receiver to delete the
refs it itself does not have) would try to update/delete them, which is
not what we want.
This prepares the send-pack (and "push" that runs native protocol) to
accept extended existing ref information and make use of it. The ".have"
entries are excluded from ref matching rules, and are exempt from deletion
rule while pushing with --mirror option, but are still used for pack
generation purposes by providing more "bottom" range commits.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Currently, when cloning from an invalid HTTP URL, git clone will possibly
report a curl error, then a confusing message about remote HEAD, and then
return success and leave an empty repository behind, confusing either
the end-user or the automated service calling it (think repo.or.cz).
This patch changes the error() calls in get_refs_via_curl() to die()s,
akin to the other get_refs_*() functions.
Cc: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Petr Baudis <pasky@suse.cz>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* qq/maint:
clone -q: honor "quiet" option over native transports.
attribute documentation: keep EXAMPLE at end
builtin-commit.c: Use 'git_config_string' to get 'commit.template'
http.c: Use 'git_config_string' to clean up SSL config.
diff.c: Use 'git_config_string' to get 'diff.external'
convert.c: Use 'git_config_string' to get 'smudge' and 'clean'
builtin-log.c: Use 'git_config_string' to get 'format.subjectprefix' and 'format.suffix'
Documentation cvs: Clarify when a bare repository is needed
Documentation: be precise about which date --pretty uses
Conflicts:
Documentation/gitattributes.txt
The earlier built-in conversion seems to have broken "git-clone"; this
teaches the command to honor the "-q" option again when talking to the
remote end over native transports (file://, git:// and ssh://).
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If on Windows a path is specified as C:/path, then this is also a valid
SSH URL. To disambiguate between the two interpretations, we treat a URL
that looks like a path with a drive letter as a local URL.
Signed-off-by: Johannes Sixt <johannes.sixt@telecom.at>
* db/clone-in-c:
Add test for cloning with "--reference" repo being a subset of source repo
Add a test for another combination of --reference
Test that --reference actually suppresses fetching referenced objects
clone: fall back to copying if hardlinking fails
builtin-clone.c: Need to closedir() in copy_or_link_directory()
builtin-clone: fix initial checkout
Build in clone
Provide API access to init_db()
Add a function to set a non-default work tree
Allow for having for_each_ref() list extra refs
Have a constant extern refspec for "--tags"
Add a library function to add an alternate to the alternates file
Add a lockfile function to append to a file
Mark the list of refs to fetch as const
Conflicts:
cache.h
t/t5700-clone-reference.sh
Also fix an underallocation in walker.c::interpret_target().
Signed-off-by: Krzysztof Kowalczyk <kkowalczyk@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fetching the objects doesn't actually modify the list in any of the
code paths, so this will allow code that fetches the entire (const)
list of available refs to just pass the list in directly.
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This makes a struct ref able to represent a symref, and makes http.c
able to recognize one, and makes transport.c look for "HEAD" as a ref
in the list, and makes it dereference symrefs for the resulting ref,
if any.
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If the remote peer upload-pack process supports the include-tag
protocol extension then we can avoid running a second fetch cycle
on the client side by letting the server send us the annotated tags
along with the objects it is packing for us. In the following graph
we can now fetch both "tag1" and "tag2" on the same connection that
we fetched "master" from the remote when we only have L available
on the local side:
             T - tag1          S - tag2
            /                 /
       L - o ------ o ------ B
        \                     \
         \                     \
          origin/master         master
The objects for "tag1" are implicitly downloaded without our direct
knowledge. The existing "quickfetch" optimization within git-fetch
discovers that tag1 is complete after the first connection and does
not open a second connection.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We always report to the user the list of refs we got from the first
connection, even if we do multiple connections. But we should always
use each connection's own list of refs in the communication with the
server, in case we got a different server out of DNS rotation or the
timing was surprising or something.
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In transport.c, the proxy setting (the one from the remote conf) was set through
a curl_easy_setopt() call, while http.c already does the same with the
http.proxy setting. We now just use this infrastructure instead, and make
http_init() take the struct remote as an argument so that it can take the
http_proxy setting from there, and any other property that would be added
later.
At the same time, we make get_http_walker() take a struct remote argument
too, and pass it to http_init(), which makes the remote-defined proxy be used
for more than get_refs_via_curl().
We purposefully leave out http-fetch and http-push, which don't use remotes
at the moment.
Signed-off-by: Mike Hommey <mh@glandium.org>
Acked-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
curl versions 7.16.3 to 7.18.0 included had a regression in which https
requests following curl_global_cleanup/init sequence would fail with ASN1
parser errors with curl-gnutls. Such sequences happen in some cases such
as git fetch.
We work around this by removing the http_init and http_cleanup calls from
get_refs_via_curl, replacing them with a transport->data initialization
with the http_walker (which does http_init).
While the http_walker is not currently used in get_refs_via_curl, http
and walker code refactor will make it use it.
Signed-off-by: Mike Hommey <mh@glandium.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This shares the connection between getting the remote ref list and
getting objects in the first batch. (A second connection is still used
to follow tags).
When we do not fetch objects (i.e. either ls-remote disconnects after
getting the list of refs, or we decide we are already up-to-date), we
clean up the connection properly; otherwise the connection is left
open in need of cleaning up to avoid getting an error message from
the remote end when ssh is used.
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A NUL byte at the beginning of a file, or just after a newline,
would provoke an invalid buf[-1] access in a few places.
* builtin-grep.c (cmd_grep): Don't access buf[-1].
* builtin-pack-objects.c (get_object_list): Likewise.
* builtin-rev-list.c (read_revisions_from_stdin): Likewise.
* bundle.c (read_bundle_header): Likewise.
* server-info.c (read_pack_info_file): Likewise.
* transport.c (insert_packed_refs): Likewise.
Signed-off-by: Jim Meyering <meyering@redhat.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The code calls fetch_pack() to get the list of refs it fetched,
discards the refs, and always returns 0 to signal success.
But builtin-fetch-pack.c::fetch_pack() has error cases. The function
returns NULL if an error is detected (the shallow-support side seems to choose
to die, but I suspect that is easily fixable to error out as well).
Make fetch_refs_via_pack() propagate that error to the caller.
Acked-By: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
As well as allowing a default http.proxy option, allow it to be set
per-remote.
Signed-off-by: Sam Vilain <sam.vilain@catalyst.net.nz>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* jk/send-pack: (24 commits)
send-pack: cluster ref status reporting
send-pack: fix "everything up-to-date" message
send-pack: tighten remote error reporting
make "find_ref_by_name" a public function
Fix warning about bitfield in struct ref
send-pack: assign remote errors to each ref
send-pack: check ref->status before updating tracking refs
send-pack: track errors for each ref
git-push: add documentation for the newly added --mirror mode
Add tests for git push'es mirror mode
Update the tracking references only if they were successfully updated on remote
Add a test checking if send-pack updated local tracking branches correctly
git-push: plumb in --mirror mode
Teach send-pack a mirror mode
send-pack: segfault fix on forced push
Reteach builtin-ls-remote to understand remotes
send-pack: require --verbose to show update of tracking refs
receive-pack: don't mention successful updates
more terse push output
Build in ls-remote
...
Because this function is static and used only by the
http-walker, when NO_CURL is defined, gcc emits a "defined
but not used" warning.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* db/remote-builtin:
Reteach builtin-ls-remote to understand remotes
Build in ls-remote
Use built-in send-pack.
Build-in send-pack, with an API for other programs to call.
Build-in peek-remote, using transport infrastructure.
Miscellaneous const changes and utilities
Conflicts:
transport.c
A --verbose option to push should also be passed to the
transport layer, i.e. git-send-pack, git-http-push.
git push is modified to do so.
Signed-off-by: Steffen Prohaska <prohaska@zib.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The list of remote refs in struct transport should be const, because
builtin-fetch will get confused if it changes.
The url in git_connect should be const (and work on a copy) instead of
requiring the caller to copy it.
match_refs doesn't modify the refspecs it gets.
get_fetch_map and get_remote_ref don't change the list they get.
Allow transport get_refs_list methods to modify the struct transport.
Add a function to copy a list of refs, when a function needs a mutable
copy of a const list.
Add a function to check the type of a ref, as per the code in connect.c
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The variable is always set if it is going to be used; gcc just does
not notice it.
Signed-off-by: Blake Ramsdell <blaker@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This prepares the API of git_connect() and finish_connect() to operate on
a struct child_process. Currently, we just use that object as a placeholder
for the pid that we used to return. A follow-up patch will change the
implementation of git_connect() and finish_connect() to make full use
of the object.
Old code had early-return-on-error checks at the calling sites of
git_connect(), but since git_connect() dies on errors anyway, these checks
were removed.
[sp: Corrected style nit of "conn == NULL" to "!conn"]
Signed-off-by: Johannes Sixt <johannes.sixt@telecom.at>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
If the end-user requested a dry-run push we need to pass that flag
over to http-push and additionally make sure it does not actually
upload any changes to the remote server.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
If the end-user requested a dry-run push we should pass that flag
through to rsync so that the rsync command can show what it would do
(or not do) if push was to be executed without the --dry-run flag.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
There's a number of tricky conflicts between master and
this topic right now due to the rewrite of builtin-push.
Junio must have handled these via rerere; I'd rather not
deal with them again so I'm pre-merging master into the
topic. Besides this topic somehow started to depend on
the strbuf series that was in next, but is now in master.
It no longer compiles on its own without the strbuf API.
* master: (184 commits)
Whip post 1.5.3.4 maintenance series into shape.
Minor usage update in setgitperms.perl
manual: use 'URL' instead of 'url'.
manual: add some markup.
manual: Fix example finding commits referencing given content.
Fix wording in push definition.
Fix some typos, punctuation, missing words, minor markup.
manual: Fix or remove em dashes.
Add a --dry-run option to git-push.
Add a --dry-run option to git-send-pack.
Fix in-place editing functions in convert.c
instaweb: support for Ruby's WEBrick server
instaweb: allow for use of auto-generated scripts
Add 'git-p4 commit' as an alias for 'git-p4 submit'
hg-to-git speedup through selectable repack intervals
git-svn: respect Subversion's [auth] section configuration values
gtksourceview2 support for gitview
fix contrib/hooks/post-receive-email hooks.recipients error message
Support cvs via git-shell
rebase -i: use diff plumbing instead of porcelain
...
Conflicts:
Makefile
builtin-push.c
rsh.c
There were a few places which did not cope well without curl. This
fixes all of them. We still need to link against the walker.o part
of the library as some parts of transport.o still call into there
even though we don't have HTTP support enabled.
If compiled with NO_CURL=1 we now get the following useful error
message:
$ git-fetch http://www.example.com/git
error: git was compiled without libcurl support.
fatal: Don't know how to fetch from http://www.example.com/git
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This adds a verbosity level below 0 for suppressing default messages
with --quiet, and makes the default for http be verbose instead of
quiet. This matches the behavior of the shell script version of git-fetch.
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We lost rsync support when transitioning from shell to C. Support it
again (even if the transport is technically deprecated, some people just
do not have any chance to use anything else).
Also, add a test to t5510. Since rsync transport is not configured by
default on most machines, and especially not such that you can write to
rsync://127.0.0.1$(pwd)/, it is disabled by default; you can enable it by
setting the environment variable TEST_RSYNC.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Currently alloc_ref() expects the length of the refname plus 1
as its parameter, prepares that much space and returns a "ref"
structure for the caller to fill the refname. One caller in
transport.c::get_refs_from_bundle() however allocated one byte
less.
It may be a good idea to change the calling convention to give
alloc_ref() the length of the refname, but that clean-up can be
done in a separate patch. This patch only fixes the bug and
makes all callers consistent.
There was also one overallocation in connect.c, which would not
hurt but was wasteful. This patch fixes it as well.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Most transport implementations tend to allocate a data buffer
in the struct transport instance during transport_get() so we
need to free that data buffer when we disconnect it.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The only way to configure the unpacking limit is currently through
the .git/config (or ~/.gitconfig) mechanism as we have no existing
command line option interface to control this threshold on a per
invocation basis. This was intentional by design as the storage
policy of the repository should be a repository-wide decision and
should not be subject to variations made on individual command
executions.
Earlier builtin-fetch was bypassing the unpacking limit chosen by
the user through the configuration file as it did not reread the
configuration options through fetch_pack_config if we called the
internal fetch_pack() API directly. We now ensure we always run the
config file through fetch_pack_config at least once in this process,
thereby setting our unpackLimit properly.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Copying the arguments from a fetch_pack_args into static globals
within the builtin-fetch-pack module is error-prone and may give
rise to cases where arguments supplied via the struct from the
new fetch_pack() API may not be honored by the implementation.
Here we reorganize all of the static globals into a single static
struct fetch_pack_args instance and use memcpy() to move the data
from the caller supplied structure into the globals before we
execute our pack fetching implementation. This strategy is more
robust to additions and deletions of properties.
As keep_pack is a single bit we have also introduced lock_pack to
mean not only download and store the packfile via index-pack but
also to lock it against repacking by creating a .keep file when
the packfile itself is stored. The caller must remove the .keep
file when it is safe to do so.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Aside from reducing the code by 20 lines this refactoring removes
a level of indirection when trying to access the operations of a
given transport "instance", making the code clearer and easier to
follow.
It also has the nice effect of giving us the benefits of C99 style
struct initialization (namely ".fetch = X") without requiring that
level of language support from our compiler. We don't need to worry
about new operation methods being added as they will now be NULL'd
out automatically by the xcalloc() we use to create the new struct
transport we supply to the caller.
This pattern already exists in struct walker, so we already have
a precedent for it in Git. We also don't really need to worry
about any sort of performance decreases that may occur as a result
of filling out 4-8 op pointers when we make a "struct transport".
The extra few CPU cycles this requires over filling in the "struct
transport_ops" are dwarfed by the time it will take Git to actually
*use* one of those functions, as most transport operations are
going over the wire or will be copying object data locally between
two directories.
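A standalone sketch of the resulting shape (names abbreviated, calloc()
standing in for xcalloc()):

    #include <stdlib.h>

    /* Operations now live directly in the transport object instead of
     * behind a separate ops struct. */
    struct demo_transport {
        void *data;
        int (*fetch)(struct demo_transport *t);
        int (*push)(struct demo_transport *t);
        int (*disconnect)(struct demo_transport *t);
    };

    static int fetch_refs_demo(struct demo_transport *t)
    {
        (void)t;
        return 0;
    }

    static struct demo_transport *transport_get_demo(void)
    {
        /* Zero-initialization leaves every unset operation NULL, so adding
         * a new method later needs no changes to existing transports. */
        struct demo_transport *ret = calloc(1, sizeof(*ret));
        if (!ret)
            return NULL;
        ret->fetch = fetch_refs_demo;   /* only the supported ops are filled in */
        return ret;
    }

    int main(void)
    {
        struct demo_transport *t = transport_get_demo();
        if (t && t->fetch)
            t->fetch(t);
        free(t);
        return 0;
    }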
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If a transport doesn't support an option we already are telling
the higher level application (fetch or push) that the option is not
valid by sending back a >0 return value from transport_set_option
so there's not a strong motivation to have the function perform the
output itself. Instead we should let the higher level application
do the output if it is necessary. This avoids always telling the
user that depth isn't supported on HTTP urls even when they did
not pass a --depth option to git-fetch.
If the user passes an option and the option value is invalid we now
properly die in git-fetch instead of just spitting out a message
and running anyway. This mimics prior behavior better where
incorrect/malformed options are not accepted by the process.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
We don't actually need to know at the time of transport_get if the
caller wants to fetch, push, or do both on the returned object.
It is easier to just delay the initialization of the HTTP walker
until we know we will need it by providing a CURL specific fetch
function in the curl_transport that makes sure the walker instance
is initialized before use.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We always allocate and return a struct transport* right now as every
URL is considered to be a native Git transport if it is not rsync,
http/https/ftp or a bundle. So we can simplify the initialization
of a new transport object by performing one xcalloc call and filling
in only the attributes required.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When using the walker API within builtin-fetch we don't allow
it to update refs locally; instead that action is reserved for
builtin-fetch's own main loop once the objects have actually
been downloaded.
Passing NULL here will bypass the unnecessary malloc/free of a
string buffer within the walker API. That buffer is never used
because the prior argument (the refs to update) is also NULL.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
fetch_pack() can call remove_duplicates() on its input array and
this will possibly overwrite an earlier entry with a later one if
there are any duplicates in the input array. In such a case the
caller here might then attempt to free an item multiple times as
it goes through its cleanup.
I also forgot to free the heads array we pass down into fetch_pack()
when I introduced the allocation of it in this function during my
builtin-fetch cleanup series. Better free it while we are here
working on related memory management fixes.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If we are using a native packfile to perform a git-fetch invocation
and the received packfile contained more than the configured limits
of fetch.unpackLimit/transfer.unpackLimit then index-pack will output
a single line saying "keep\t$sha1\n" to stdout. This line needs to
be captured and retained so we can delete the corresponding .keep
file ("$GIT_DIR/objects/pack/pack-$sha1.keep") once all refs have
been safely updated.
This trick has long been in use with git-fetch.sh and its lower level
helper git-fetch--tool as a way to allow index-pack to save the new
packfile before the refs have been updated and yet avoid a race with
any concurrently running git-repack process. It was unfortunately
lost when git-fetch.sh was converted to pure C and fetch--tool was
no longer being invoked.
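For illustration, a standalone sketch that parses such a line (".git"
stands in for $GIT_DIR):

    #include <stdio.h>
    #include <string.h>

    /* Recognize index-pack's "keep\t<sha1>\n" status line and build the
     * corresponding .keep path described above. */
    int main(void)
    {
        char line[256], keep_name[512];

        if (!fgets(line, sizeof(line), stdin))
            return 1;
        if (strncmp(line, "keep\t", 5))
            return 0;                   /* pack was exploded, nothing to lock */
        line[strcspn(line, "\n")] = '\0';

        snprintf(keep_name, sizeof(keep_name),
                 ".git/objects/pack/pack-%s.keep", line + 5);
        printf("remove after refs are updated: %s\n", keep_name);
        return 0;
    }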
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit walkers need to know the SHA-1 name of any objects they
have been asked to fetch while the native pack transport only
wants to know the names of the remote refs as the remote side
must do the name->SHA-1 translation.
Since we only have three fetch implementations and one of them
(bundle) doesn't even need the name information we can reduce
the code required to perform a fetch by having just one function
and passing of the filtered list of refs to be fetched. Each
transport can then obtain the information it needs from that ref
array to construct its own internal operation state.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Conflicts:
transport.c
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The ALLOC_GROW macro is a shorter way to implement an array that
grows upon demand as additional items are added to it. We have
mostly standardized upon its use within git and transport.c is
not an exception.
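The usage pattern looks roughly like this (standalone sketch with a local
stand-in for the macro; git's ALLOC_GROW uses its own allocation helpers
but grows the same way):

    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in for ALLOC_GROW(x, nr, alloc): make sure x has room for at
     * least nr elements, growing geometrically and remembering the
     * allocated size in alloc.  (git's xrealloc dies on failure; error
     * handling is omitted here.) */
    #define DEMO_ALLOC_GROW(x, nr, alloc) \
        do { \
            if ((nr) > (alloc)) { \
                (alloc) = ((alloc) + 16) * 3 / 2; \
                if ((alloc) < (nr)) \
                    (alloc) = (nr); \
                (x) = realloc((x), (alloc) * sizeof(*(x))); \
            } \
        } while (0)

    int main(void)
    {
        const char **refspec = NULL;
        int refspec_nr = 0, refspec_alloc = 0;
        const char *args[] = { "master", "next", "refs/tags/*" };

        /* Append items one at a time; the array grows on demand. */
        for (int i = 0; i < 3; i++) {
            DEMO_ALLOC_GROW(refspec, refspec_nr + 1, refspec_alloc);
            refspec[refspec_nr++] = args[i];
        }

        for (int i = 0; i < refspec_nr; i++)
            printf("refspec[%d] = %s\n", i, refspec[i]);
        free(refspec);
        return 0;
    }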
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This moves the code to call push backends into a library that can be
extended to make matching fetch and push decisions based on the URL it
gets, and which could be changed to have built-in implementations
instead of calling external programs.
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>