To clarify what part of the HTTP transport is being tested, we change
the URLs used by existing tests to include /dumb/ at the start,
indicating they use the non-Git-aware code paths.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If LIB_HTTPD_PORT is not set already, lib-httpd will set it to the
default 8111.
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The upload-pack requests are mostly plain text and they compress
rather well. Deflating them with Content-Encoding: gzip can easily
drop the size of the request by 50%, reducing the amount of data
to transfer as we negotiate the common commits.
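As an illustration of the idea (not the code in this series), a request
that has been buffered in memory can be given a gzip wrapper with zlib
before it is POSTed; gzip_request() and its buffer handling are
assumptions of the sketch:

    #include <zlib.h>
    #include <string.h>

    /* Hypothetical helper: gzip-compress "in" into "out"; *outlen holds
     * the capacity on entry and the compressed size on success. */
    static int gzip_request(const unsigned char *in, size_t inlen,
                            unsigned char *out, size_t *outlen)
    {
        z_stream s;

        memset(&s, 0, sizeof(s));
        /* windowBits of 15 + 16 asks zlib for a gzip wrapper */
        if (deflateInit2(&s, Z_BEST_COMPRESSION, Z_DEFLATED,
                         15 + 16, 8, Z_DEFAULT_STRATEGY) != Z_OK)
            return -1;
        s.next_in = (unsigned char *)in;
        s.avail_in = inlen;
        s.next_out = out;
        s.avail_out = *outlen;
        if (deflate(&s, Z_FINISH) != Z_STREAM_END) {
            deflateEnd(&s);
            return -1;
        }
        *outlen = s.total_out;
        deflateEnd(&s);
        return 0;
    }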
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The git-remote-curl backend detects if the remote server supports
the git-upload-pack service, and if so, runs git-fetch-pack locally
in a pipe to generate the want/have commands.
The advertisements from the server that were obtained during the
discovery are passed into git-fetch-pack before the POST request
starts, permitting server capability discovery and enablement.
Common objects that are discovered are appended onto the request as
have lines and are sent again on the next request. This allows the
remote side to reinitialize its in-memory list of common objects
during the next request.
Because all requests are relatively short (below git-remote-curl's
1 MiB buffer limit), they use the standard Content-Length header
and are valid HTTP/1.0 POST requests. This makes the fetch client
more tolerant of proxy servers which don't support HTTP/1.1 or the
chunked transfer encoding.
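For illustration only, a request that fits entirely in memory maps onto
libcurl's plain POST interface, which emits a Content-Length header when
the size is known; post_buffered() below is a hedged sketch, not
remote-curl's actual code:

    #include <curl/curl.h>

    /* Hypothetical: POST a fully buffered request of known size. */
    static int post_buffered(const char *url, const char *buf, size_t len)
    {
        CURL *c = curl_easy_init();
        CURLcode rc;

        if (!c)
            return -1;
        curl_easy_setopt(c, CURLOPT_URL, url);
        curl_easy_setopt(c, CURLOPT_POST, 1L);
        curl_easy_setopt(c, CURLOPT_POSTFIELDS, buf);
        /* a known size makes libcurl send a Content-Length header */
        curl_easy_setopt(c, CURLOPT_POSTFIELDSIZE, (long)len);
        rc = curl_easy_perform(c);
        curl_easy_cleanup(c);
        return rc == CURLE_OK ? 0 : -1;
    }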
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The git-remote-curl backend detects if the remote server supports
the git-receive-pack service, and if so, runs git-send-pack in a
pipe to dump the command and pack data as a single POST request.
The advertisements from the server that were obtained during the
discovery are passed into git-send-pack before the POST request
starts. This permits git-send-pack to operate largely unmodified.
For smaller packs (those under 1 MiB) an HTTP/1.0 POST with a
Content-Length is used, permitting interaction with any server.
The 1 MiB limit is arbitrary, but is sufficient to fit most deltas
created by human authors against text sources with the occasional
small binary file (e.g. a few-KiB icon image). The configuration
option http.postBuffer can be used to increase (or shrink) this
buffer if the default is not sufficient.
For larger packs which cannot be spooled entirely into the helper's
memory space (due to http.postBuffer being too small), the POST
request requires HTTP/1.1 and sets "Transfer-Encoding: chunked".
This permits the client to upload an unknown amount of data in one
HTTP transaction without needing to pregenerate the entire pack
file locally.
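The large-pack path can be pictured with libcurl's read-callback
interface; next_chunk() below is a hypothetical stand-in for reading
from the git-send-pack pipe, and the whole function is a sketch rather
than the helper's real code:

    #include <curl/curl.h>

    /* Hypothetical source of body data, e.g. the pipe from
     * git-send-pack; returning 0 signals the end of the request body. */
    extern size_t next_chunk(char *buf, size_t max);

    static size_t read_cb(char *buf, size_t size, size_t nmemb, void *data)
    {
        (void)data;
        return next_chunk(buf, size * nmemb);
    }

    static int post_chunked(const char *url)
    {
        CURL *c = curl_easy_init();
        struct curl_slist *hdrs = NULL;
        CURLcode rc;

        if (!c)
            return -1;
        hdrs = curl_slist_append(hdrs, "Transfer-Encoding: chunked");
        curl_easy_setopt(c, CURLOPT_URL, url);
        curl_easy_setopt(c, CURLOPT_POST, 1L);
        curl_easy_setopt(c, CURLOPT_HTTPHEADER, hdrs);
        curl_easy_setopt(c, CURLOPT_READFUNCTION, read_cb);
        /* no POSTFIELDSIZE: the body length is unknown, so the request
         * goes out with HTTP/1.1 chunked transfer encoding */
        rc = curl_easy_perform(c);
        curl_slist_free_all(hdrs);
        curl_easy_cleanup(c);
        return rc == CURLE_OK ? 0 : -1;
    }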
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Instead of loading the cached info/refs, try to use the smart HTTP
version when the server supports it. Since the smart variant is
actually the pkt-line stream from the start of either upload-pack
or receive-pack, we need to parse it through get_remote_heads,
which requires a background thread to feed its pipe.
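A minimal sketch of that arrangement, assuming the advertisement is
already buffered in memory; the struct, thread routine and function
names are illustrative rather than http.c's internals:

    #include <pthread.h>
    #include <unistd.h>

    struct feed {
        const char *buf;    /* buffered advertisement from the GET */
        size_t len;
        int fd;             /* write end of the pipe */
    };

    static void *feed_thread(void *data)
    {
        struct feed *f = data;
        size_t off = 0;

        while (off < f->len) {
            ssize_t n = write(f->fd, f->buf + off, f->len - off);
            if (n <= 0)
                break;
            off += n;
        }
        close(f->fd);   /* EOF tells the parser it has everything */
        return NULL;
    }

    static int parse_advertisement(const char *advert, size_t len)
    {
        int fds[2];
        pthread_t th;
        struct feed f = { advert, len, -1 };

        if (pipe(fds) < 0)
            return -1;
        f.fd = fds[1];
        if (pthread_create(&th, NULL, feed_thread, &f))
            return -1;
        /* ... hand fds[0] to the usual parser, e.g. get_remote_heads() ... */
        pthread_join(th, NULL);
        close(fds[0]);
        return 0;
    }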
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the git-http-backend examples, only match git-receive-pack within
/git/.
Signed-off-by: Mark Lodato <lodatom@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the git-http-backend documentation, add an example of how to set up
gitweb and git-http-backend on the same URL by using a series of
mod_alias commands.
Signed-off-by: Mark Lodato <lodatom@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the git-http-backend documentation, use mod_alias exclusively, instead
of using a combination of mod_alias and mod_rewrite. This makes the
example slightly shorter and a bit clearer.
Signed-off-by: Mark Lodato <lodatom@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Clarify some of the git-http-backend documentation, particularly:
* In the Description, state that smart/dumb HTTP fetch and smart HTTP
push are supported, state that authenticated clients allow push, and
remove the note that this is only suited for read-only updates.
* At the start of Examples, state explicitly what URL is mapping to what
location on disk.
Signed-off-by: Mark Lodato <lodatom@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a new environment variable, GIT_PROJECT_ROOT, to override the
method of using PATH_TRANSLATED to find the git repository on disk.
This makes it much easier to configure the web server, especially when
the web server's DocumentRoot does not contain the git repositories,
which is the usual case.
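The lookup order described above can be sketched as follows;
get_repo_path() and its error handling are illustrative, not the
backend's actual implementation:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical: resolve the repository path for this CGI request. */
    static void get_repo_path(char *out, size_t outlen)
    {
        const char *root = getenv("GIT_PROJECT_ROOT");
        const char *info = getenv("PATH_INFO");
        const char *translated = getenv("PATH_TRANSLATED");

        if (root && info)
            /* independent of DocumentRoot: <root> + <path part of URL> */
            snprintf(out, outlen, "%s%s", root, info);
        else if (translated)
            /* fall back to the web server's own translation */
            snprintf(out, outlen, "%s", translated);
        else
            exit(1);    /* not a usable CGI environment */
    }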
Signed-off-by: Mark Lodato <lodatom@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Requests for $GIT_URL/git-receive-pack and $GIT_URL/git-upload-pack
are forwarded to the corresponding backend process by directly
executing it and leaving stdin and stdout connected to the invoking
web server. Prior to starting the backend process the HTTP response
headers are sent, thereby freeing the backend from needing to know
about the HTTP protocol.
Requests that are encoded with Content-Encoding: gzip are
automatically inflated before being streamed into the backend.
This is primarily useful for the git-upload-pack backend, which
receives highly repetitive text data from clients that easily
compresses to 50% of its original size.
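The hand-off can be pictured as below; the content type follows the
smart HTTP convention, while the function shape, argument list and
error handling are assumptions of this sketch:

    #include <stdio.h>
    #include <unistd.h>

    /* name is "upload-pack" or "receive-pack"; stdin/stdout stay
     * connected to the web server, so the service never sees HTTP. */
    static void run_service(const char *name, const char *repo_path)
    {
        printf("Content-Type: application/x-git-%s-result\r\n\r\n", name);
        fflush(stdout);

        execlp("git", "git", name, "--stateless-rpc", repo_path, (char *)NULL);
        perror("execlp");
        _exit(127);
    }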
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When --stateless-rpc is passed as a command line parameter to
upload-pack or receive-pack the programs now assume they may
perform only a single read-write cycle with stdin and stdout.
This fits with the HTTP POST request processing model where a
program may read the request, write a response, and must exit.
When --advertise-refs is passed as a command line parameter only
the initial ref advertisement is output, and the program exits
immediately. This fits with the HTTP GET request model, where
no request content is received but a response must be produced.
HTTP headers and/or environment are not processed here, but
instead are assumed to be handled by the program invoking
either service backend.
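A minimal sketch of how the two options might gate the service loop;
the option parsing and the helper names are placeholders, not
upload-pack's or receive-pack's real code:

    #include <string.h>

    extern void write_ref_advertisement(void);   /* placeholders */
    extern void serve_one_request(void);
    extern void serve_until_disconnect(void);

    static int service_main(int argc, const char **argv)
    {
        int stateless_rpc = 0, advertise_refs = 0, i;

        for (i = 1; i < argc; i++) {
            if (!strcmp(argv[i], "--stateless-rpc"))
                stateless_rpc = 1;
            else if (!strcmp(argv[i], "--advertise-refs"))
                advertise_refs = 1;
        }

        if (advertise_refs) {
            write_ref_advertisement();  /* GET: advertisement only, then exit */
            return 0;
        }
        if (stateless_rpc)
            serve_one_request();        /* POST: one read-write cycle */
        else
            serve_until_disconnect();   /* classic bidirectional transport */
        return 0;
    }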
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The git-http-backend CGI can be configured into any Apache server
using ScriptAlias, such as with the following configuration:
    LoadModule cgi_module /usr/libexec/apache2/mod_cgi.so
    LoadModule alias_module /usr/libexec/apache2/mod_alias.so
    ScriptAlias /git/ /usr/libexec/git-core/git-http-backend/
Repositories are accessed via the translated PATH_INFO.
The CGI is backwards compatible with the dumb client, allowing all
older HTTP clients to continue to download repositories which are
managed by the CGI.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The remote helper interface now supports the push capability,
which can be used to ask the implementation to push one or more
specs to the remote repository. For remote-curl we implement this
by calling the existing WebDAV based git-http-push executable.
Internally the helper interface uses the push_refs transport hook
so that the complexity of the refspec parsing and matching can be
reused between remote implementations. When possible, however, the
helper protocol uses the source ref name rather than the source SHA-1,
thereby allowing the helper to access this name if it is useful.
From Clemens Buchacher <drizzd@aon.at>:
update http tests according to remote-curl capabilities
o Pushing packed refs is now fixed.
o The transport helper fails if refs are already up-to-date. Add
a test for that.
o The transport helper will notice if refs are already
up-to-date. We therefore need to update server info in the
unpacked-refs test.
o The transport helper will purge deleted branches automatically.
o Use a variable ($ORIG_HEAD) instead of the full SHA-1 name.
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
CC: Mike Hommey <mh@glandium.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Some transports, like the native pack transport implemented by
fetch-pack, support useful features like depth or include tags.
These should be exposed if the underlying helper knows how to
use them.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Some network protocols (e.g. native git://) are able to fetch more
than one ref at a time and reduce the overall transfer cost by
combining the requests into a single exchange. Instead of feeding
each fetch request one at a time to the helper, feed all of them
at once so the helper can decide whether or not it should batch them.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Helpers might want a higher level of verbosity than just +1 (the
porcelain default setting) and +2 (-v -v). Expand the field to
allow verbosity in the range -1..3.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We will need the walker, url and remote in other functions as the
code grows larger to support smart HTTP. Extract this out into a
set of globals we can easily reference once configured.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When multi_ack_detailed is enabled the ACK continue messages returned
by the remote upload-pack are broken out to describe the different
states within the peer. This permits the client to better understand
the server's in-memory state.
The fetch-pack/upload-pack protocol now looks like:
  NAK
  ---------------------------------
  Always sent in response to "done" if there was no common base
  selected from the "have" lines (or no have lines were sent).

  * no multi_ack or multi_ack_detailed:

    Sent when the client has sent a pkt-line flush ("0000") and
    the server has not yet found a common base object.

  * either multi_ack or multi_ack_detailed:

    Always sent in response to a pkt-line flush.

  ACK %s
  ---------------------------------
  * no multi_ack or multi_ack_detailed:

    Sent in response to "have" when the object exists on the remote
    side and is therefore an object in common between the peers.
    The argument is the SHA-1 of the common object.

  * either multi_ack or multi_ack_detailed:

    Sent in response to "done" if there are common objects.
    The argument is the last SHA-1 determined to be common.

  ACK %s continue
  ---------------------------------
  * multi_ack only:

    Sent in response to "have".

    The remote side wants the client to consider this object as
    common, and immediately stop transmitting additional "have"
    lines for objects that are reachable from it. The reason
    the client should stop is not given, but is one of the two
    cases below available under multi_ack_detailed.

  ACK %s common
  ---------------------------------
  * multi_ack_detailed only:

    Sent in response to "have". Both sides have this object.
    As with "ACK %s continue" above, the client should stop
    sending "have" lines for objects reachable from the argument.

  ACK %s ready
  ---------------------------------
  * multi_ack_detailed only:

    Sent in response to "have".

    The client should stop transmitting "have" lines for objects
    which are reachable from the argument, and send "done" soon
    to get the objects.

    If the remote side has the specified object, it should
    first send an "ACK %s common" message prior to sending
    "ACK %s ready".

Clients may still submit additional "have" lines if there are
more side branches for the client to explore that might be added
to the common set and reduce the number of objects to transfer.
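For illustration, a client could classify these responses along the
following lines; the enum and parser below are a sketch, not
fetch-pack's actual code:

    #include <string.h>

    enum ack_type { NAK, ACK, ACK_CONTINUE, ACK_COMMON, ACK_READY };

    /* line is one pkt-line payload; sha1_hex must hold >= 41 bytes */
    static enum ack_type parse_ack(const char *line, char *sha1_hex)
    {
        if (!strcmp(line, "NAK"))
            return NAK;
        if (!strncmp(line, "ACK ", 4) && strlen(line) >= 44) {
            memcpy(sha1_hex, line + 4, 40);
            sha1_hex[40] = '\0';
            if (strstr(line + 44, "continue"))
                return ACK_CONTINUE;
            if (strstr(line + 44, "common"))
                return ACK_COMMON;
            if (strstr(line + 44, "ready"))
                return ACK_READY;
            return ACK;
        }
        return NAK;     /* unexpected; real code would die() here */
    }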
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In 41cb7488 Linus moved this function to connect.c for reuse inside
the git-clone-pack command. That was 2005, but in 2006 Junio
retired git-clone-pack in commit efc7fa53. Since then the only
caller has been fetch-pack. Because this ACK/NAK exchange is only
used by the fetch-pack/upload-pack protocol, we should move it back
to be a private detail of fetch-pack.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This change is being offered as a refactoring to make later
commits in the smart HTTP series easier.
By formatting the enabled capabilities in a strbuf, it becomes
easier to add a new capability to the supported set.
By formatting the want portion of the request into a strbuf and
writing it as a whole block, we can later decide to hold onto
the req_buf (instead of releasing it) and recycle it for stateless
communication.
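A sketch of the shape this takes, using git's strbuf API and the
packet_buf_write() helper introduced elsewhere in this series; the
capability names and variables shown are illustrative:

    #include "cache.h"
    #include "pkt-line.h"

    /* Illustrative only: build the first "want" line, with its
     * capability list, into a reusable request buffer. */
    static void format_want(struct strbuf *req_buf, const char *remote_hex,
                            int multi_ack, int use_thin_pack)
    {
        struct strbuf caps = STRBUF_INIT;

        if (multi_ack)
            strbuf_addstr(&caps, " multi_ack");
        if (use_thin_pack)
            strbuf_addstr(&caps, " thin-pack");

        /* the first want line advertises the enabled capabilities */
        packet_buf_write(req_buf, "want %s%s\n", remote_hex, caps.buf);
        strbuf_release(&caps);
        /* req_buf can now be written as one block, and kept for reuse
         * in stateless communication instead of being released */
    }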
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When there is an error parsing the 4-byte length component, we now
display it as part of the die message; this may hint at what data
was misunderstood by the application.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
These routines help to work with pkt-line values inside a strbuf,
permitting simple formatting of buffered network messages.
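The framing itself is small. A minimal sketch, assuming git's strbuf
and not the actual pkt-line.c code, where the four hex digits of the
length prefix count themselves:

    #include <stdio.h>
    #include "strbuf.h"

    static void buf_pkt_write(struct strbuf *buf,
                              const char *payload, size_t len)
    {
        char prefix[5];

        /* total length = payload + the 4 bytes of the length prefix */
        snprintf(prefix, sizeof(prefix), "%04x", (unsigned int)(len + 4));
        strbuf_add(buf, prefix, 4);
        strbuf_add(buf, payload, len);
    }

    /* a flush packet is the special length "0000" with no payload */
    static void buf_pkt_flush(struct strbuf *buf)
    {
        strbuf_add(buf, "0000", 4);
    }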
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Check that http.c::finish_http_pack_request() returns 0 (for success).
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Some types of corruption to a pack may confuse the deflate stream
which stores an object. In Andy's reported case, a 36-byte region
of the pack was overwritten, leading to what appeared to be a valid
deflate stream that was trying to produce a result larger than our
allocated output buffer could accept.
Z_BUF_ERROR is returned from inflate() if either the input buffer
needs more input bytes, or the output buffer has run out of space.
Previously we only considered the former case, as it meant we needed
to move the stream's input buffer to the next window in the pack.
We now abort the loop if inflate() returns Z_BUF_ERROR without
consuming the entire input buffer it was given, or has filled
the entire output buffer but has not yet returned Z_STREAM_END.
Either state is a clear indicator that this loop is not working
as expected, and should not continue.
This problem cannot occur with loose objects as we open the entire
loose object as a single buffer and treat Z_BUF_ERROR as an error.
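In zlib terms the tightened loop looks roughly like the sketch below;
refill_window() is a hypothetical stand-in for mapping the next pack
window, and the exact code in git differs:

    #include <zlib.h>

    /* Hypothetical: supply the next pack window as stream input. */
    extern void refill_window(z_stream *stream);

    /* returns 0 when the stream ends cleanly, -1 on a corrupt stream */
    static int inflate_entry(z_stream *stream)
    {
        int st;

        do {
            st = inflate(stream, Z_FINISH);
            if (st == Z_BUF_ERROR && stream->avail_in)
                return -1;  /* input not consumed: the stream is bad */
            if (!stream->avail_out && st != Z_STREAM_END)
                return -1;  /* output filled before the stream ended */
            if (st == Z_OK || st == Z_BUF_ERROR)
                refill_window(stream);
        } while (st == Z_OK || st == Z_BUF_ERROR);
        return st == Z_STREAM_END ? 0 : -1;
    }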
Reported-by: Andy Isaacson <adi@hexapodia.org>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* maint:
change throughput display units with fast links
clone: Supply the right commit hash to post-checkout when -b is used
remote-curl: add missing initialization of argv0_path
Switch to MiB/s when the connection is fast enough (i.e. on a LAN).
Signed-off-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we use -b <branch>, we may checkout something else than what the
remote's HEAD references, but we still used remote_head to supply the
new ref value to the post-checkout hook, which is wrong.
So instead of using remote_head to find the value to be passed to the
post-checkout hook, we have to use our_head_points_at, which is always
correctly set up, even if -b is not used.
This also fixes a segfault when "clone -b <branch>" is used with a
remote repo that doesn't have a valid HEAD, as in such a case
remote_head is NULL, but we still tried to access it.
Reported-by: Devin Cofer <ranguvar@archlinux.us>
Signed-off-by: Björn Steinbrink <B.Steinbrink@gmx.de>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
All programs, and in particular the stand-alone programs (non-builtins),
must call git_extract_argv0_path(argv[0]) in order to help builds that
derive the installation prefix at runtime, such as the MinGW build.
Without this call, the program segfaults (or raises an assertion
failure).
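In other words, the top of every stand-alone program's main() should
look something like this minimal sketch:

    #include "exec_cmd.h"

    int main(int argc, const char **argv)
    {
        /* record argv[0] so the runtime prefix can be derived later */
        git_extract_argv0_path(argv[0]);

        /* ... the rest of the program ... */
        return 0;
    }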
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Tested-by: Michael Wookey <michaelwookey@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
'git log --graph --oneline --decorate --all' is a useful way to get a
general overview of the repository state, similar to 'gitk --all'.
Let it indicate the position of HEAD by loading that ref too, so that
the --decorate code can see it.
Signed-off-by: Thomas Rast <trast@student.ethz.ch>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Before the --, always attempt ref completion. This helps with
entering the <treeish> arguments to git-grep. As a bonus, you can
work around git-grep's current lack of --all by hitting M-*, ugly as
the resulting command line may be.
Strictly speaking, completing the regular expression argument (or
option argument) makes no sense. However, we cannot prevent _all_
completion (it will fall back to filenames), so we dispense with any
additional complication to detect whether the user still has to enter
a regular expression.
Signed-off-by: Thomas Rast <trast@student.ethz.ch>
Acked-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Essentially, s/type* /type */, as per the coding guidelines.
Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Make "git add -p" not skip files that are in the index even if they are
excluded (by .gitignore etc.). This fixes the contradictory behavior
that "git status" and "git commit -a" listed such files as modified, but
"git add -p FILENAME" ignored them.
Signed-off-by: Pauli Virtanen <pav@iki.fi>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When saying the initial branch is equal to the currently active
remote branch, it is probably intended that the branch heads
point to the same commit. Maybe it would be more useful to a
new user to emphasize that the tree contents and history are the
same.
More important, probably, is that this new branch is set up so
that "git pull" merges changes from the corresponding remote
branch. The next paragraph addresses that directly. What the
reader needs to know to begin with is that (1) the initial branch
is your own; if you do not pull, it won't get updated, and that
(2) the initial branch starts out at the same commit as the
upstream.
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>