Commit Graph

363 Commits

Junio C Hamano
6343e2f6f2 Sync with 2.3.10 2015-09-28 15:28:31 -07:00
Blake Burkhart
b258116462 http: limit redirection depth
By default, libcurl will follow circular http redirects
forever. Let's put a cap on this so that somebody who can
trigger an automated fetch of an arbitrary repository (e.g.,
for CI) cannot convince git to loop infinitely.

The value chosen is 20, which is the same default that
Firefox uses.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-09-25 15:32:28 -07:00
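
A minimal sketch of the libcurl knobs this cap relies on (the helper name
and the `CURL *c` handle are illustrative; the real code applies this while
setting up the handle):

    #include <curl/curl.h>

    /* Cap redirect chains so a circular redirect cannot loop forever. */
    static void limit_redirect_depth(CURL *c)
    {
            curl_easy_setopt(c, CURLOPT_FOLLOWLOCATION, 1L);
            curl_easy_setopt(c, CURLOPT_MAXREDIRS, 20L);  /* same default as Firefox */
    }
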
Blake Burkhart
f4113cac0c http: limit redirection to protocol-whitelist
Previously, libcurl would follow redirection to any protocol
it was compiled to support. This is desirable to allow
redirection from HTTP to HTTPS. However, it would even
successfully allow redirection from HTTP to SFTP, a protocol
that git does not otherwise support at all. Furthermore
git's new protocol-whitelisting could be bypassed by
following a redirect within the remote helper, as it was
only enforced at transport selection time.

This patch limits redirects within libcurl to HTTP, HTTPS,
FTP and FTPS. If there is a protocol-whitelist present, this
list is limited to those also allowed by the whitelist. As
redirection happens from within libcurl, an HTTP redirect can
never escape into a protocol implemented by another remote
helper.

When the curl version git was compiled with is too old to
support restrictions on protocol redirection, we warn the
user if GIT_ALLOW_PROTOCOL restrictions were requested. This
is a little inaccurate, as even without that variable in the
environment, we would still restrict SFTP, etc, and we do
not warn in that case. But anything else means we would
literally warn every time git accesses an http remote.

This commit includes a test, but it is not as robust as we
would hope. It redirects an http request to ftp, and checks
that curl complained about the protocol, which means that we
are relying on curl's specific error message to know what
happened. Ideally we would redirect to a working ftp server
and confirm that we can clone without protocol restrictions,
and not with them. But we do not have a portable way of
providing an ftp server, nor any other protocol that curl
supports (https is the closest, but we would have to deal
with certificates).

[jk: added test and version warning]

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-09-25 15:30:39 -07:00
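
A sketch of the libcurl side of this, assuming curl >= 7.19.4 (where
CURLOPT_REDIR_PROTOCOLS exists); the helper name is illustrative, and a
GIT_ALLOW_PROTOCOL whitelist would further mask these bits:

    #include <curl/curl.h>

    /* Only allow redirects to land on protocols git can handle here. */
    static void limit_redirect_protocols(CURL *c)
    {
            curl_easy_setopt(c, CURLOPT_REDIR_PROTOCOLS,
                             (long)(CURLPROTO_HTTP | CURLPROTO_HTTPS |
                                    CURLPROTO_FTP | CURLPROTO_FTPS));
    }
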
Jeff King
9ae97018fb use strip_suffix and xstrfmt to replace suffix
When we want to convert "foo.pack" to "foo.idx", we do it by
duplicating the original string and then munging the bytes
in place. Let's use strip_suffix and xstrfmt instead, which
have several advantages:

  1. It's more clear what the intent is.

  2. It does not implicitly rely on the fact that
     strlen(".idx") <= strlen(".pack") to avoid an overflow.

  3. We communicate the assumption that the input file ends
     with ".pack" (and get a run-time check that this is so).

  4. We drop calls to strcpy, which makes auditing the code
     base easier.

Likewise, we can do this to convert ".pack" to ".bitmap",
avoiding some manual memory computation.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-09-25 10:18:18 -07:00
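
A sketch of the resulting pattern inside git's tree (the function name is
illustrative):

    #include "git-compat-util.h"
    #include "strbuf.h"

    /* "foo.pack" -> newly allocated "foo.idx"; dies if the suffix is missing. */
    static char *pack_to_idx(const char *pack_name)
    {
            size_t len;

            if (!strip_suffix(pack_name, ".pack", &len))
                    die("BUG: pack name does not end in .pack: %s", pack_name);
            return xstrfmt("%.*s.idx", (int)len, pack_name);
    }
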
Jeff King
5096d4909f convert trivial sprintf / strcpy calls to xsnprintf
We sometimes sprintf into fixed-size buffers when we know
that the buffer is large enough to fit the input (either
because it's a constant, or because it's numeric input that
is bounded in size). Likewise with strcpy of constant
strings.

However, these sites make it hard to audit sprintf and
strcpy calls for buffer overflows, as a reader has to
cross-reference the size of the array with the input. Let's
use xsnprintf instead, which communicates to a reader that
we don't expect this to overflow (and catches the mistake in
case we do).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-09-25 10:18:18 -07:00
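
A sketch of the idiom using git's xsnprintf() helper (the surrounding
function is illustrative):

    #include "git-compat-util.h"

    /* Bounded formatting where truncation would indicate a bug. */
    static void fill_port(char *buf, size_t len, int port)
    {
            xsnprintf(buf, len, "%d", port);  /* dies instead of silently truncating */
    }
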
Junio C Hamano
ed070a4007 Merge branch 'ep/http-configure-ssl-version'
A new configuration variable http.sslVersion can be used to specify
what specific version of SSL/TLS to use to make a connection.

* ep/http-configure-ssl-version:
  http: add support for specifying the SSL version
2015-08-26 15:45:31 -07:00
Junio C Hamano
51a22ce147 Merge branch 'jc/finalize-temp-file'
Long overdue micro clean-up.

* jc/finalize-temp-file:
  sha1_file.c: rename move_temp_to_file() to finalize_object_file()
2015-08-19 14:48:55 -07:00
Elia Pinto
01861cb7a2 http: add support for specifying the SSL version
Teach git about a new option, "http.sslVersion", which permits one
to specify the SSL version to use when negotiating SSL connections.
The setting can be overridden by the GIT_SSL_VERSION environment
variable.

Signed-off-by: Elia Pinto <gitter.spiros@gmail.com>
Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-08-17 10:16:34 -07:00
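
A sketch of the underlying libcurl call, assuming curl >= 7.34.0 for the
TLSv1.2 constant; the real code maps the http.sslVersion / GIT_SSL_VERSION
string to one of the CURL_SSLVERSION_* values:

    #include <curl/curl.h>

    static void pin_ssl_version(CURL *c)
    {
            /* e.g. http.sslVersion = tlsv1.2 */
            curl_easy_setopt(c, CURLOPT_SSLVERSION, CURL_SSLVERSION_TLSv1_2);
    }
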
Junio C Hamano
cb5add5868 sha1_file.c: rename move_temp_to_file() to finalize_object_file()
Since 5a688fe4 ("core.sharedrepository = 0mode" should set, not
loosen, 2009-03-25), we kept reminding ourselves:

    NEEDSWORK: this should be renamed to finalize_temp_file() as
    "moving" is only a part of what it does, when no patch between
    master to pu changes the call sites of this function.

without doing anything about it.  Let's do so.

The purpose of this function was not to move but to finalize.  The
detail of the primary implementation of finalizing was to link the
temporary file to its final name and then unlink it, which wasn't
even "moving".  The alternative implementation did "move" by calling
rename(2), which is a fun tangent.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-08-10 11:10:37 -07:00
Junio C Hamano
0e521a41b5 Merge branch 'et/http-proxyauth'
We used to ask libCURL to use the most secure authentication method
available when talking to an HTTP proxy only when we were told to
talk to one via configuration variables.  We now ask libCURL to
always use the most secure authentication method, because the user
can tell libCURL to use an HTTP proxy via an environment variable
without using configuration variables.

* et/http-proxyauth:
  http: always use any proxy auth method available
2015-07-13 14:00:24 -07:00
Enrique Tobis
5841520b03 http: always use any proxy auth method available
We set CURLOPT_PROXYAUTH to use the most secure authentication
method available only when the user has set configuration variables
to specify a proxy.  However, libcurl also supports specifying a
proxy through environment variables.  In that case libcurl defaults
to only using the Basic proxy authentication method, because we do
not use CURLOPT_PROXYAUTH.

Set CURLOPT_PROXYAUTH to always use the most secure authentication
method available, even when there is no git configuration telling us
to use a proxy. This allows the user to use environment variables to
configure a proxy that requires an authentication method different
from Basic.

Signed-off-by: Enrique A. Tobis <etobis@twosigma.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-06-29 09:57:43 -07:00
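
The libcurl call at the heart of this change, sketched with an illustrative
helper name:

    #include <curl/curl.h>

    /* Let curl negotiate the strongest proxy auth method the proxy offers. */
    static void setup_proxy_auth(CURL *c)
    {
            curl_easy_setopt(c, CURLOPT_PROXYAUTH, CURLAUTH_ANY);
    }
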
Junio C Hamano
39fa79178f Merge branch 'ls/http-ssl-cipher-list'
Introduce http.<url>.SSLCipherList configuration variable to tweak
the list of cipher suite to be used with libcURL when talking with
https:// sites.

* ls/http-ssl-cipher-list:
  http: add support for specifying an SSL cipher list
2015-05-22 12:41:45 -07:00
Lars Kellogg-Stedman
f6f2a9e42d http: add support for specifying an SSL cipher list
Teach git about a new option, "http.sslCipherList", which permits one to
specify a list of ciphers to use when negotiating SSL connections.  The
setting can be overridden by the GIT_SSL_CIPHER_LIST environment
variable.

Signed-off-by: Lars Kellogg-Stedman <lars@redhat.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-05-08 10:56:26 -07:00
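
A sketch of the libcurl call; the cipher string shown is only an example,
and its syntax depends on the SSL backend curl was built with (OpenSSL
format shown):

    #include <curl/curl.h>

    static void set_cipher_list(CURL *c, const char *ciphers)
    {
            /* e.g. ciphers = "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384" */
            if (ciphers)
                    curl_easy_setopt(c, CURLOPT_SSL_CIPHER_LIST, ciphers);
    }
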
Stefan Beller
826aed50cb http: release the memory of a http pack request as well
The cleanup function is used in 4 places now and it's always safe to
free up the memory as well.

Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-03-24 12:36:10 -07:00
Junio C Hamano
74c91d1f7a Merge branch 'ye/http-accept-language'
Compilation fix for a recent topic in 'master'.

* ye/http-accept-language:
  gettext.c: move get_preferred_languages() from http.c
2015-03-06 15:02:25 -08:00
Jeff King
93f7d9108a gettext.c: move get_preferred_languages() from http.c
Calling setlocale(LC_MESSAGES, ...) directly from http.c, without
including <locale.h>, was causing compilation warnings.  Move the
helper function to gettext.c, which already includes the header and
where locale-related issues are handled.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-02-26 14:09:20 -08:00
Junio C Hamano
90eea883fd Merge branch 'tc/missing-http-proxyauth'
We did not check the curl library version before using
CURLOPT_PROXYAUTH feature that may not exist.

* tc/missing-http-proxyauth:
  http: support curl < 7.10.7
2015-02-25 15:40:12 -08:00
Junio C Hamano
117c1b333d Merge branch 'jk/dumb-http-idx-fetch-fix' into maint
A broken pack .idx file in the receiving repository prevented the
dumb http transport from fetching a good copy of it from the other
side.

* jk/dumb-http-idx-fetch-fix:
  dumb-http: do not pass NULL path to parse_pack_index
2015-02-24 22:10:37 -08:00
Junio C Hamano
f18e3896f7 Merge branch 'ye/http-accept-language'
Using environment variable LANGUAGE and friends on the client side,
HTTP-based transports now send Accept-Language when making requests.

* ye/http-accept-language:
  http: add Accept-Language header if possible
2015-02-18 11:44:57 -08:00
Junio C Hamano
b93b5b21b5 Merge branch 'jk/dumb-http-idx-fetch-fix'
A broken pack .idx file in the receiving repository prevented the
dumb http transport from fetching a good copy of it from the other
side.

* jk/dumb-http-idx-fetch-fix:
  dumb-http: do not pass NULL path to parse_pack_index
2015-02-17 10:15:25 -08:00
Junio C Hamano
c985aaf879 Merge branch 'jc/unused-symbols'
Mark file-local symbols as "static", and drop functions that nobody
uses.

* jc/unused-symbols:
  shallow.c: make check_shallow_file_for_update() static
  remote.c: make clear_cas_option() static
  urlmatch.c: make match_urls() static
  revision.c: make save_parents() and free_saved_parents() static
  line-log.c: make line_log_data_init() static
  pack-bitmap.c: make pack_bitmap_filename() static
  prompt.c: remove git_getpass() nobody uses
  http.c: make finish_active_slot() and handle_curl_result() static
2015-02-11 13:44:07 -08:00
Tom G. Christensen
1c2dbf2095 http: support curl < 7.10.7
Commit dd61399 introduced support for http proxies that require
authentication, but it relies on the CURLOPT_PROXYAUTH option, which
was added in curl 7.10.7.

This makes sure proxy authentication is only enabled if libcurl can
support it.

Signed-off-by: Tom G. Christensen <tgc@statsbiblioteket.dk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-02-03 13:53:17 -08:00
Yi EungJun
f18604bbf2 http: add Accept-Language header if possible
Add an Accept-Language header which indicates the user's preferred
languages defined by $LANGUAGE, $LC_ALL, $LC_MESSAGES and $LANG.

Examples:
  LANGUAGE= -> ""
  LANGUAGE=ko:en -> "Accept-Language: ko, en;q=0.9, *;q=0.1"
  LANGUAGE=ko LANG=en_US.UTF-8 -> "Accept-Language: ko, *;q=0.1"
  LANGUAGE= LANG=en_US.UTF-8 -> "Accept-Language: en-US, *;q=0.1"

This gives git servers a chance to display remote error messages in
the user's preferred language.

Limit the number of languages to 1,000 because q-value must not be
smaller than 0.001, and limit the length of Accept-Language header to
4,000 bytes for some HTTP servers which cannot accept such long header.

Signed-off-by: Yi EungJun <eungjun.yi@navercorp.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-01-28 11:17:08 -08:00
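
A sketch of how such a header is attached with libcurl (the header value is
one of the examples above; the real code derives it from $LANGUAGE and
friends):

    #include <curl/curl.h>

    static struct curl_slist *add_accept_language(CURL *c, struct curl_slist *headers)
    {
            headers = curl_slist_append(headers, "Accept-Language: ko, en;q=0.9, *;q=0.1");
            curl_easy_setopt(c, CURLOPT_HTTPHEADER, headers);
            return headers;  /* caller frees with curl_slist_free_all() when done */
    }
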
Jeff King
8b9c2dd4de dumb-http: do not pass NULL path to parse_pack_index
Once upon a time, dumb http always fetched .idx files
directly into their final location, and then checked their
validity with parse_pack_index. This was refactored in
commit 750ef42 (http-fetch: Use temporary files for
pack-*.idx until verified, 2010-04-19), which uses the
following logic:

  1. If we have the idx already in place, see if it's
     valid (using parse_pack_index). If so, use it.

  2. Otherwise, fetch the .idx to a tempfile, check
     that, and if so move it into place.

  3. Either way, fetch the pack itself if necessary.

However, it got step 1 wrong. We pass a NULL path parameter
to parse_pack_index, so an existing .idx file always looks
broken. Worse, we do not treat this broken .idx as an
opportunity to re-fetch, but instead return an error,
ignoring the pack entirely. This can lead to a dumb-http
fetch failing to retrieve the necessary objects.

This doesn't come up much in practice, because it must be a
packfile that we found out about (and whose .idx we stored)
during an earlier dumb-http fetch, but whose packfile we
_didn't_ fetch. I.e., we did a partial clone of a
repository, didn't need some packfiles, and now a followup
fetch needs them.

Discovery and tests by Charles Bailey <charles@hashpling.org>.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-01-27 12:41:45 -08:00
Junio C Hamano
b90a3d7b32 http.c: make finish_active_slot() and handle_curl_result() static
They used to be used directly by remote-curl.c for the smart-http
protocol. But they got wrapped by run_one_slot() in beed336 (http:
never use curl_easy_perform, 2014-02-18).  Any future users are
expected to follow that route.

Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-01-15 11:00:52 -08:00
brian m. carlson
4dbe66464b remote-curl: fall back to Basic auth if Negotiate fails
Apache servers using mod_auth_kerb can be configured to allow the user
to authenticate either using Negotiate (using the Kerberos ticket) or
Basic authentication (using the Kerberos password).  Often, one will
want to use Negotiate authentication if it is available, but fall back
to Basic authentication if the ticket is missing or expired.

However, libcurl will try very hard to use something other than Basic
auth, even over HTTPS.  If Basic and something else are offered, libcurl
will never attempt to use Basic, even if the other option fails.
Teach the HTTP client code to stop trying authentication mechanisms that
don't use a password (currently Negotiate) after the first failure,
since if they failed the first time, they will never succeed.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-01-07 19:48:19 -08:00
Junio C Hamano
211836f77b Merge branch 'da/include-compat-util-first-in-c'
Code clean-up.

* da/include-compat-util-first-in-c:
  cleanups: ensure that git-compat-util.h is included first
2014-10-14 10:49:01 -07:00
David Aguilar
1c4b660412 cleanups: ensure that git-compat-util.h is included first
CodingGuidelines states that the first #include in C files should be
git-compat-util.h or another header file that includes it, such as
cache.h or builtin.h.

Signed-off-by: David Aguilar <davvid@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-09-15 12:05:14 -07:00
Junio C Hamano
6c1d42acae Merge branch 'br/http-init-fix'
Code clean-up.

* br/http-init-fix:
  http: style fixes for curl_multi_init error check
  http.c: die if curl_*_init fails
2014-09-11 10:33:28 -07:00
Jeff King
8837eb47f2 http: style fixes for curl_multi_init error check
Unless there is a good reason, we should use die() rather than
fprintf/exit. We can also shorten the message to match other curl init
failures (and match our usual lowercase no-full-stop style).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-08-21 10:53:55 -07:00
René Scharfe
d318027932 run-command: introduce CHILD_PROCESS_INIT
Most struct child_process variables are cleared using memset first after
declaration.  Provide a macro, CHILD_PROCESS_INIT, that can be used to
initialize them statically instead.  That's shorter, doesn't require a
function call and is slightly more readable (especially given that we
already have STRBUF_INIT, ARGV_ARRAY_INIT etc.).

Helped-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-08-20 09:53:37 -07:00
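
A sketch of the before/after inside git's tree (the spawned command is
illustrative):

    #include "run-command.h"
    #include "argv-array.h"

    static int spawn_example(void)
    {
            struct child_process cp = CHILD_PROCESS_INIT;  /* was: memset(&cp, 0, sizeof(cp)); */

            cp.git_cmd = 1;
            argv_array_push(&cp.args, "version");
            return run_command(&cp);
    }
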
Bernhard Reiter
faa3807cfe http.c: die if curl_*_init fails
Signed-off-by: Bernhard Reiter <ockham@raz.or.at>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-08-18 10:11:42 -07:00
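
A sketch of the pattern this commit enforces (the helper name is
illustrative):

    #include <curl/curl.h>
    #include "git-compat-util.h"

    static CURL *must_init_curl(void)
    {
            CURL *c = curl_easy_init();

            if (!c)
                    die("curl_easy_init failed");
            return c;
    }
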
Junio C Hamano
e91ae32a01 Merge branch 'jk/skip-prefix'
* jk/skip-prefix:
  http-push: refactor parsing of remote object names
  imap-send: use skip_prefix instead of using magic numbers
  use skip_prefix to avoid repeated calculations
  git: avoid magic number with skip_prefix
  fetch-pack: refactor parsing in get_ack
  fast-import: refactor parsing of spaces
  stat_opt: check extra strlen call
  daemon: use skip_prefix to avoid magic numbers
  fast-import: use skip_prefix for parsing input
  use skip_prefix to avoid repeating strings
  use skip_prefix to avoid magic numbers
  transport-helper: avoid reading past end-of-string
  fast-import: fix read of uninitialized argv memory
  apply: use skip_prefix instead of raw addition
  refactor skip_prefix to return a boolean
  avoid using skip_prefix as a boolean
  daemon: mark some strings as const
  parse_diff_color_slot: drop ofs parameter
2014-07-09 11:33:28 -07:00
Jeff King
de8118e153 use skip_prefix to avoid repeated calculations
In some cases, we use starts_with to check for a prefix, and
then use an already-calculated prefix length to advance a
pointer past the prefix. There are no magic numbers or
duplicated strings here, but we can still make the code
simpler and more obvious by using skip_prefix.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-06-20 10:45:19 -07:00
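
A sketch of the pattern, with an illustrative ref-parsing caller:

    #include "git-compat-util.h"

    static const char *branch_name(const char *ref)
    {
            const char *name;

            if (skip_prefix(ref, "refs/heads/", &name))
                    return name;  /* points just past the prefix, no manual offset */
            return NULL;
    }
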
Yi EungJun
f34a655d4d http: fix charset detection of extract_content_type()
extract_content_type() could not extract a charset parameter if the
parameter is not the first one and there is whitespace and a
semicolon just before the parameter. For example:

    text/plain; format=fixed ;charset=utf-8

And it also could not handle correctly some other cases, such as:

    text/plain; charset=utf-8; format=fixed
    text/plain; some-param="a long value with ;semicolons;"; charset=utf-8

Thanks-to: Jeff King <peff@peff.net>
Signed-off-by: Yi EungJun <eungjun.yi@navercorp.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-06-17 15:25:00 -07:00
Jeff King
c553fd1c1e http: default text charset to iso-8859-1
This is specified by RFC 2616 as the default if no "charset"
parameter is given.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-05-27 09:59:22 -07:00
Jeff King
e31316263a http: optionally extract charset parameter from content-type
Since the previous commit, we now give a sanitized,
shortened version of the content-type header to any callers
who ask for it.

This patch adds back a way for them to cleanly access
specific parameters to the type. We could easily extract all
parameters and make them available via a string_list, but:

  1. That complicates the interface and memory management.

  2. In practice, no planned callers care about anything
     except the charset.

This patch therefore goes with the simplest thing, and we
can expand or change the interface later if it becomes
necessary.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-05-27 09:59:19 -07:00
Jeff King
bf197fd7ee http: extract type/subtype portion of content-type
When we get a content-type from curl, we get the whole
header line, including any parameters, and without any
normalization (like downcasing or whitespace) applied.
If we later try to match it with strcmp() or even
strcasecmp(), we may get false negatives.

This could cause two visible behaviors:

  1. We might fail to recognize a smart-http server by its
     content-type.

  2. We might fail to relay text/plain error messages to
     users (especially if they contain a charset parameter).

This patch teaches the http code to extract and normalize
just the type/subtype portion of the string. This is
technically passing out less information to the callers, who
can no longer see the parameters. But none of the current
callers cares, and a future patch will add back an
easier-to-use method for accessing those parameters.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-05-27 09:57:00 -07:00
Junio C Hamano
060be00621 Merge branch 'mh/object-code-cleanup'
* mh/object-code-cleanup:
  sha1_file.c: document a bunch of functions defined in the file
  sha1_file_name(): declare to return a const string
  find_pack_entry(): document last_found_pack
  replace_object: use struct members instead of an array
2014-03-14 14:26:29 -07:00
Michael Haggerty
30d6c6eabf sha1_file_name(): declare to return a const string
Change the return value of sha1_file_name() to (const char *).
(Callers have no business mucking about here.)  Change callers
accordingly, deleting a few superfluous temporary variables along the
way.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-02-24 09:10:22 -08:00
Jeff King
beed336c3e http: never use curl_easy_perform
We currently don't reuse http connections when fetching via
the smart-http protocol. This is bad because the TCP
handshake introduces latency, and especially because SSL
connection setup may be non-trivial.

We can fix it by consistently using curl's "multi"
interface.  The reason is rather complicated:

Our http code has two ways of being used: queuing many
"slots" to be fetched in parallel, or fetching a single
request in a blocking manner. The parallel code is built on
curl's "multi" interface. Most of the single-request code
uses http_request, which is built on top of the parallel
code (we just feed it one slot, and wait until it finishes).

However, one could also accomplish the single-request scheme
by avoiding curl's multi interface entirely and just using
curl_easy_perform. This is simpler, and is used by post_rpc
in the smart-http protocol.

It does work to use the same curl handle in both contexts,
as long as it is not at the same time.  However, internally
curl may not share all of the cached resources between both
contexts. In particular, a connection formed using the
"multi" code will go into a reuse pool connected to the
"multi" object. Further requests using the "easy" interface
will not be able to reuse that connection.

The smart http protocol does ref discovery via http_request,
which uses the "multi" interface, and then follows up with
the "easy" interface for its rpc calls. As a result, we make
two HTTP connections rather than reusing a single one.

We could teach the ref discovery to use the "easy"
interface. But it is only once we have done this discovery
that we know whether the protocol will be smart or dumb. If
it is dumb, then our further requests, which want to fetch
objects in parallel, will not be able to reuse the same
connection.

Instead, this patch switches post_rpc to build on the
parallel interface, which means that we use it consistently
everywhere. It's a little more complicated to use, but since
we have the infrastructure already, it doesn't add any code;
we can just factor out the relevant bits from http_request.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-02-18 15:50:57 -08:00
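
A rough sketch of the "multi interface even for a single blocking request"
idea (greatly simplified; the easy handle is assumed to be fully configured,
and curl_multi_wait needs curl >= 7.28.0, otherwise one selects on
curl_multi_fdset instead):

    #include <curl/curl.h>

    static void run_one_request(CURLM *multi, CURL *easy)
    {
            int still_running = 0;

            curl_multi_add_handle(multi, easy);
            do {
                    int numfds;

                    curl_multi_perform(multi, &still_running);
                    if (still_running)
                            curl_multi_wait(multi, NULL, 0, 1000, &numfds);
            } while (still_running);
            curl_multi_remove_handle(multi, easy);
    }
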
Junio C Hamano
ad70448576 Merge branch 'cc/starts-n-ends-with'
Remove a few duplicate implementations of prefix/suffix comparison
functions, and rename them to starts_with and ends_with.

* cc/starts-n-ends-with:
  replace {pre,suf}fixcmp() with {starts,ends}_with()
  strbuf: introduce starts_with() and ends_with()
  builtin/remote: remove postfixcmp() and use suffixcmp() instead
  environment: normalize use of prefixcmp() by removing " != 0"
2013-12-17 12:02:44 -08:00
Christian Couder
5955654823 replace {pre,suf}fixcmp() with {starts,ends}_with()
Leaving only the function definitions and declarations so that any
new topic in flight can still make use of the old functions, replace
existing uses of the prefixcmp() and suffixcmp() with new API
functions.

The change can be recreated by mechanically applying this:

    $ git grep -l -e prefixcmp -e suffixcmp -- \*.c |
      grep -v strbuf\\.c |
      xargs perl -pi -e '
        s|!prefixcmp\(|starts_with\(|g;
        s|prefixcmp\(|!starts_with\(|g;
        s|!suffixcmp\(|ends_with\(|g;
        s|suffixcmp\(|!ends_with\(|g;
      '

on the result of preparatory changes in this series.

Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-12-05 14:13:21 -08:00
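
A sketch of the renamed helpers in use (the function is illustrative):

    #include "git-compat-util.h"

    static int looks_like_pack_path(const char *arg)
    {
            /* was: !prefixcmp(arg, ...) and !suffixcmp(arg, ...) */
            return starts_with(arg, "pack-") || ends_with(arg, ".pack");
    }
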
Junio C Hamano
c5a77e8f92 Merge branch 'bc/http-100-continue'
Issue "100 Continue" responses to help use of GSS-Negotiate
authentication scheme over HTTP transport when needed.

* bc/http-100-continue:
  remote-curl: fix large pushes with GSSAPI
  remote-curl: pass curl slot_results back through run_slot
  http: return curl's AUTHAVAIL via slot_results
2013-12-05 12:58:59 -08:00
Jeff King
0972ccd97c http: return curl's AUTHAVAIL via slot_results
Callers of the http code may want to know which auth types
were available for the previous request. But after finishing
with the curl slot, they are not supposed to look at the
curl handle again. We already handle returning other
information via the slot_results struct; let's add a flag to
check the available auth.

Note that older versions of curl did not support this, so we
simply return 0 (something like "-1" would be worse, as the
value is a bitflag and we might accidentally set a flag).
This is sufficient for the callers planned in this series,
who only trigger some optional behavior if particular bits
are set, and can live with a fake "no bits" answer.

Signed-off-by: Jeff King <peff@peff.net>
2013-10-31 10:05:55 -07:00
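
A sketch of the curl query involved; older curl lacks
CURLINFO_HTTPAUTH_AVAIL, in which case the real code just reports 0:

    #include <curl/curl.h>

    static long available_auth(CURL *c)
    {
            long avail = 0;

            curl_easy_getinfo(c, CURLINFO_HTTPAUTH_AVAIL, &avail);
            return avail;  /* bitmask of CURLAUTH_* values the server offered */
    }
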
Junio C Hamano
177f0a4009 Merge branch 'jk/http-auth-redirects'
Handle the case where http transport gets redirected during the
authorization request better.

* jk/http-auth-redirects:
  http.c: Spell the null pointer as NULL
  remote-curl: rewrite base url from info/refs redirects
  remote-curl: store url as a strbuf
  remote-curl: make refs_url a strbuf
  http: update base URLs when we see redirects
  http: provide effective url to callers
  http: hoist credential request out of handle_curl_result
  http: refactor options to http_get_*
  http_request: factor out curlinfo_strbuf
  http_get_file: style fixes
2013-10-30 12:09:53 -07:00
Junio C Hamano
bb2fd90c7b Merge branch 'ew/keepalive'
* ew/keepalive:
  http: use curl's tcp keepalive if available
  http: enable keepalive on TCP sockets
2013-10-28 10:43:24 -07:00
Ramsay Jones
70900eda4a http.c: Spell the null pointer as NULL
Commit 1bbcc224 ("http: refactor options to http_get_*", 28-09-2013)
changed the type of the final 'options' argument of the http_get_file()
function from an int to a 'struct http_get_options' pointer.
However, it neglected to update the (single) call site. Since this
call was passing '0' to that argument, it was (correctly) being
interpreted as a null pointer. Change the argument to NULL.

Noticed by sparse. ("Using plain integer as NULL pointer")

Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
Acked-by: Jeff King <peff@peff.net>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-10-24 14:42:26 -07:00
Jeff King
47ce115370 http: use curl's tcp keepalive if available
Commit a15d069 taught git to use curl's SOCKOPTFUNCTION hook
to turn on TCP keepalives. However, modern versions of curl
have a TCP_KEEPALIVE option, which can do this for us. As an
added bonus, the curl code knows how to turn on keepalive
for a much wider variety of platforms. The only downside to
using this option is that not everybody has a new enough curl.
Let's split our keepalive options into three conditionals:

  1. With curl 7.25.0 and newer, we rely on curl to do it
     right.

  2. With older curl that still knows SOCKOPTFUNCTION, we
     use the code from a15d069.

  3. Otherwise, we are out of luck, and the call is a no-op.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-10-16 11:26:09 -07:00
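
A sketch of conditional 1 above, guarded on the curl version (7.25.0 is
0x071900 in LIBCURL_VERSION_NUM); the helper name is illustrative:

    #include <curl/curl.h>

    static void set_keepalive(CURL *c)
    {
    #if LIBCURL_VERSION_NUM >= 0x071900
            curl_easy_setopt(c, CURLOPT_TCP_KEEPALIVE, 1L);  /* curl handles the platform details */
    #endif
    }
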
Jeff King
c93c92f309 http: update base URLs when we see redirects
If a caller asks the http_get_* functions to go to a
particular URL and we end up elsewhere due to a redirect,
the effective_url field can tell us where we went.

It would be nice to remember this redirect and short-cut
further requests for two reasons:

  1. It's more efficient. Otherwise we spend an extra http
     round-trip to the server for each subsequent request,
     just to get redirected.

  2. If we end up with an http 401 and are going to ask for
     credentials, it is to feed them to the redirect target.
     If the redirect is an http->https upgrade, this means
     our credentials may be provided on the http leg, just
     to end up redirected to https. And if the redirect
     crosses server boundaries, then curl will drop the
     credentials entirely as it follows the redirect.

However, it is not enough to simply record the effective
URL we saw and use that for subsequent requests. We were
originally fed a "base" url like:

   http://example.com/foo.git

and we want to figure out what the new base is, even though
the URLs we see may be:

     original: http://example.com/foo.git/info/refs
    effective: http://example.com/bar.git/info/refs

Subsequent requests will not be for "info/refs", but for
other paths relative to the base. We must ask the caller to
pass in the original base, and we must pass the redirected
base back to the caller (so that it can generate more URLs
from it). Furthermore, we need to feed the new base to the
credential code, so that requests to credential helpers (or
to the user) match the URL we will be requesting.

This patch teaches http_request_reauth to do this munging.
Since it is the caller who cares about making more URLs, it
seems at first glance that callers could simply check
effective_url themselves and handle it. However, since we
need to update the credential struct before the second
re-auth request, we have to do it inside http_request_reauth.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2013-10-14 16:56:47 -07:00
Jeff King
78868962c0 http: provide effective url to callers
When we ask curl to access a URL, it may follow one or more
redirects to reach the final location. We have no idea
this has happened, as curl takes care of the details and
simply returns the final content to us.

The final URL that we ended up with can be accessed via
CURLINFO_EFFECTIVE_URL. Let's make that optionally available
to callers of http_get_*, so that they can make further
decisions based on the redirection.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2013-10-14 16:55:23 -07:00
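
A sketch of the query; the returned string is owned by curl, so a caller
copies it if it is needed beyond the handle's lifetime:

    #include <curl/curl.h>

    static const char *final_url(CURL *c)
    {
            char *url = NULL;

            curl_easy_getinfo(c, CURLINFO_EFFECTIVE_URL, &url);
            return url;  /* where the request ended up after any redirects */
    }
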
Jeff King
2501aff8b7 http: hoist credential request out of handle_curl_result
When we are handling a curl response code in http_request or
in the remote-curl RPC code, we use the handle_curl_result
helper to translate curl's response into an easy-to-use
code. When we see an HTTP 401, we do one of two things:

  1. If we already had a filled-in credential, we mark it as
     rejected, and then return HTTP_NOAUTH to indicate to
     the caller that we failed.

  2. If we didn't, then we ask for a new credential and tell
     the caller HTTP_REAUTH to indicate that they may want
     to try again.

Rejecting in the first case makes sense; it is the natural
result of the request we just made. However, prompting for
more credentials in the second step does not always make
sense. We do not know for sure that the caller is going to
make a second request, and nor are we sure that it will be
to the same URL. Logically, the prompt belongs not to the
request we just finished, but to the request we are (maybe)
about to make.

In practice, it is very hard to trigger any bad behavior.
Currently, if we make a second request, it will always be to
the same URL (even in the face of redirects, because curl
handles the redirects internally). And we almost always
retry on HTTP_REAUTH these days. The one exception is if we
are streaming a large RPC request to the server (e.g., a
pushed packfile), in which case we cannot restart. It's
extremely unlikely to see a 401 response at this stage,
though, as we would typically have seen it when we sent a
probe request, before streaming the data.

This patch drops the automatic prompt out of case 2, and
instead requires the caller to do it. This is a few extra
lines of code, and the bug it fixes is unlikely to come up
in practice. But it is conceptually cleaner, and paves the
way for better handling of credentials across redirects.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2013-10-14 16:55:13 -07:00
Eric Wong
a15d069a19 http: enable keepalive on TCP sockets
This is a follow up to commit e47a8583 (enable SO_KEEPALIVE for
connected TCP sockets, 2011-12-06).

Sockets may never receive notification of some link errors,
causing "git fetch" or similar processes to hang forever.
Enabling keepalive messages allows hung processes to error out
after a few minutes/hours depending on the keepalive settings of
the system.

I noticed this problem with some non-interactive cronjobs getting
hung when talking to HTTP servers.

Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2013-10-14 07:03:59 -07:00
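
A sketch of a CURLOPT_SOCKOPTFUNCTION callback that does this (names are
illustrative; error handling omitted):

    #include <sys/socket.h>
    #include <curl/curl.h>

    static int sockopt_keepalive(void *clientp, curl_socket_t fd, curlsocktype purpose)
    {
            int on = 1;

            setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));
            return 0;  /* tell curl the socket is fine to use */
    }
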
Jeff King
1bbcc224cc http: refactor options to http_get_*
Over time, the http_get_strbuf function has grown several
optional parameters. We now have a bitfield with multiple
boolean options, as well as an optional strbuf for returning
the content-type of the response. And a future patch in this
series is going to add another strbuf option.

Treating these as separate arguments has a few downsides:

  1. Most call sites need to add extra NULLs and 0s for the
     options they aren't interested in.

  2. The http_get_* functions are actually wrappers around
     2 layers of low-level implementation functions. We have
     to pass these options through individually.

  3. The http_get_strbuf wrapper learned these options, but
     nobody bothered to do so for http_get_file, even though
     it is backed by the same function that does understand
     the options.

Let's consolidate the options into a single struct. For the
common case of the default options, we'll allow callers to
simply pass a NULL for the options struct.

The resulting code is often a few lines longer, but it ends
up being easier to read (and to change as we add new
options, since we do not need to update each call site).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2013-09-30 17:21:59 -07:00
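
An illustrative sketch of the consolidated options struct (field names are
examples, not the exact set in git's http.h):

    #include "strbuf.h"

    struct http_get_options {
            unsigned no_cache:1,
                     keep_error:1;
            struct strbuf *content_type;   /* filled in if non-NULL */
            struct strbuf *effective_url;  /* filled in if non-NULL */
    };

    /* Callers wanting the defaults simply pass NULL for the struct. */
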
Jeff King
132b70a2ed http_request: factor out curlinfo_strbuf
When we retrieve the content-type of an http response, curl
gives us a pointer to internal storage, which we then copy
into a strbuf. Let's factor out the get-and-copy routine,
which can be used for getting other curl info.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2013-09-30 13:04:45 -07:00
Jeff King
3d1fb769b2 http_get_file: style fixes
Besides being ugly, the extra parentheses are idiomatic for
suppressing compiler warnings when we are assigning within a
conditional. We aren't doing that here, and they just
confuse the reader.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2013-09-30 13:04:31 -07:00
Junio C Hamano
a0a08d48d0 Merge branch 'jc/url-match'
Allow section.<urlpattern>.var configuration variables to be
treated as a "virtual" section.var given a URL, and use the
mechanism to enhance http.* configuration variables.

This is a reroll of Kyle J. McKay's work.

* jc/url-match:
  builtin/config.c: compilation fix
  config: "git config --get-urlmatch" parses section.<url>.key
  builtin/config: refactor collect_config()
  config: parse http.<url>.<variable> using urlmatch
  config: add generic callback wrapper to parse section.<url>.key
  config: add helper to normalize and match URLs
  http.c: fix parsing of http.sslCertPasswordProtected variable
2013-09-09 14:50:36 -07:00
Kyle J. McKay
6a56993b2e config: parse http.<url>.<variable> using urlmatch
Use the urlmatch_config_entry() to wrap the underlying
http_options() two-level variable parser in order to set
http.<variable> to the value with the most specific URL in the
configuration.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Kyle J. McKay <mackyle@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-08-05 16:02:03 -07:00
Junio C Hamano
3f4ccd2b0b http.c: fix parsing of http.sslCertPasswordProtected variable
The existing code triggers only when the configuration variable is
set to true.  Once the variable is set to true in a more generic
configuration file (e.g. ~/.gitconfig), it cannot be overriden to
false in the repository specific one (e.g. .git/config).

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-07-31 12:09:13 -07:00
Dave Borowitz
912b2acf2f http: add http.savecookies option to write out HTTP cookies
HTTP servers may send Set-Cookie headers in a response and expect them
to be set on subsequent requests. By default, libcurl behavior is to
store such cookies in memory and reuse them across requests within a
single session. However, it may also make sense, depending on the
server and the cookies, to store them across sessions. Provide users
an option to enable this behavior, writing cookies out to the same
file specified in http.cookiefile.

Signed-off-by: Dave Borowitz <dborowitz@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-07-30 09:19:04 -07:00
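
A sketch of the two libcurl options involved (helper name illustrative):
CURLOPT_COOKIEFILE reads cookies in, and CURLOPT_COOKIEJAR writes them back
out when the handle is cleaned up:

    #include <curl/curl.h>

    static void setup_cookies(CURL *c, const char *cookie_file, int save_cookies)
    {
            if (cookie_file)
                    curl_easy_setopt(c, CURLOPT_COOKIEFILE, cookie_file);
            if (cookie_file && save_cookies)
                    curl_easy_setopt(c, CURLOPT_COOKIEJAR, cookie_file);
    }
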
Junio C Hamano
dc2ed04c23 Merge branch 'bc/http-keep-memory-given-to-curl'
Older cURL wanted the piece of memory we call it with to be stable, but
we updated the auth material after handing it to a call.

* bc/http-keep-memory-given-to-curl:
  http.c: don't rewrite the user:passwd string multiple times
2013-06-27 14:29:49 -07:00
Brandon Casey
a94cf2cb7e http.c: don't rewrite the user:passwd string multiple times
Curl older than 7.17 (RHEL 4.X provides 7.12 and RHEL 5.X provides
7.15) requires that we manage any strings that we pass to it as
pointers.  So, we really shouldn't be modifying this strbuf after we
have passed it to curl.

Our interaction with curl is currently safe (before or after this
patch) since the pointer that is passed to curl is never invalidated;
it is repeatedly rewritten with the same sequence of characters but
the strbuf functions never need to allocate a larger string, so the
same memory buffer is reused.

This "guarantee" of safety is somewhat subtle and could be overlooked
by someone who may want to add a more complex handling of the username
and password.  So, let's stop modifying this strbuf after we have
passed it to curl, but also leave a note to describe the assumptions
that have been made about username/password lifetime and to draw
attention to the code.

Signed-off-by: Brandon Casey <drafnel@gmail.com>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-06-19 10:00:26 -07:00
Junio C Hamano
574d51b575 Merge branch 'mv/ssl-ftp-curl'
Does anybody really use commit walkers over (s)ftp?

* mv/ssl-ftp-curl:
  Support FTP-over-SSL/TLS for regular FTP
2013-04-19 13:31:08 -07:00
Jeff King
b793acf14c http: set curl FAILONERROR each time we select a handle
Because we reuse curl handles for multiple requests, the
setup of a handle happens in two stages: stable, global
setup and per-request setup. The lifecycle of a handle is
something like:

  1. get_curl_handle; do basic global setup that will last
     through the whole program (e.g., setting the user
     agent, ssl options, etc)

  2. get_active_slot; set up a per-request baseline (e.g.,
     clearing the read/write functions, making it a GET
     request, etc)

  3. perform the request with curl_*_perform functions

  4. goto step 2 to perform another request

Breaking it down this way means we can avoid doing global
setup from step (1) repeatedly, but we still finish step (2)
with a predictable baseline setup that callers can rely on.

Until commit 6d052d7 (http: add HTTP_KEEP_ERROR option,
2013-04-05), setting curl's FAILONERROR option was a global
setup; we never changed it. However, 6d052d7 introduced an
option where some requests might turn off FAILONERROR. Later
requests using the same handle would have the option
unexpectedly turned off, which meant they would not notice
http failures at all.

This could easily be seen in the test-suite for the
"half-auth" cases of t5541 and t5551. The initial requests
turned off FAILONERROR, which meant it was erroneously off
for the rpc POST. That worked fine for a successful request,
but meant that we failed to react properly to the HTTP 401
(instead, we treated whatever the server handed us as a
successful message body).

The solution is simple: now that FAILONERROR is a
per-request setting, we move it to get_active_slot to make
sure it is reset for each request.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-16 10:13:46 -07:00
Modestas Vainius
4bc444eb64 Support FTP-over-SSL/TLS for regular FTP
Add a boolean http.sslTry option which allows enabling AUTH SSL/TLS and
encrypted data transfers when connecting via regular FTP protocol.

Default is false since it might trigger certificate verification errors on
misconfigured servers.

Signed-off-by: Modestas Vainius <modestas@vainius.eu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-12 08:52:23 -07:00
Jeff King
4df13f69e9 http: drop http_error function
This function is a single-liner and is only called from one
place. Just inline it, which makes the code more obvious.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-06 18:56:46 -07:00
Jeff King
39a570f26c http: re-word http error message
When we report an http error code, we say something like:

  error: The requested URL reported failure: 403 Forbidden while accessing http://example.com/repo.git

Everything between "error:" and "while" is written by curl,
and the resulting sentence is hard to read (especially
because there is no punctuation between curl's sentence and
the remainder of ours). Instead, let's re-order this to give
better flow:

  error: unable to access 'http://example.com/repo.git': The requested URL reported failure: 403 Forbidden

This is still annoyingly long, but at least reads more
clearly left to right.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-06 18:56:45 -07:00
Jeff King
67d2a7b5c5 http: simplify http_error helper function
This helper function should really be a one-liner that
prints an error message, but it has ended up unnecessarily
complicated:

  1. We call error() directly when we fail to start the curl
     request, so we must later avoid printing a duplicate
     error in http_error().

     It would be much simpler in this case to just stuff the
     error message into our usual curl_errorstr buffer
     rather than printing it ourselves. This means that
     http_error does not even have to care about curl's exit
     value (the interesting part is in the errorstr buffer
     already).

  2. We return the "ret" value passed in to us, but none of
     the callers actually cares about our return value. We
     can just drop this entirely.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-06 18:56:44 -07:00
Jeff King
6d052d78d7 http: add HTTP_KEEP_ERROR option
We currently set curl's FAILONERROR option, which means that
any http failures are reported as curl errors, and the
http body content from the server is thrown away.

This patch introduces a new option to http_get_strbuf which
specifies that the body content from a failed http response
should be placed in the destination strbuf, where it can be
accessed by the caller.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-06 18:56:41 -07:00
Jeff King
047ec60205 pkt-line: move LARGE_PACKET_MAX definition from sideband
Having the packet sizes defined near the packet read/write
functions makes more sense.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-20 13:42:22 -08:00
Jeff King
3443db51a0 http_request: reset "type" strbuf before adding
Callers may pass us a strbuf which we use to record the
content-type of the response. However, we simply appended to
it rather than overwriting its contents, meaning that cruft
in the strbuf gave us a bogus type. E.g., the multiple
requests triggered by http_request could yield a type like
"text/plainapplication/x-git-receive-pack-advertisement".

Reported-by: Michael Schubert <mschub@elegosoft.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-06 07:50:56 -08:00
Shawn Pearce
4656bf47fc Verify Content-Type from smart HTTP servers
Before parsing a suspected smart-HTTP response verify the returned
Content-Type matches the standard. This protects a client from
attempting to process a payload that smells like a smart-HTTP
server response.

JGit has been doing this check on all responses since the dawn of
time. I mistakenly failed to include it in git-core when smart HTTP
was introduced. At the time I didn't know how to get the Content-Type
from libcurl. I punted, meant to circle back and fix this, and just
plain forgot about it.

Signed-off-by: Shawn Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-04 10:22:36 -08:00
Junio C Hamano
8bc714b408 Merge branch 'rb/http-cert-cred-no-username-prompt' into maint
* rb/http-cert-cred-no-username-prompt:
  http.c: Avoid username prompt for certificate credentials
2013-01-10 14:03:54 -08:00
Rene Bredlau
75e9a405d4 http.c: Avoid username prompt for certificate credentials
If sslCertPasswordProtected is set to true, do not ask for a username
to decrypt the rsa key. This question is pointless; the key is only
protected by a password. Internally, the username is simply set to "".

Signed-off-by: Rene Bredlau <git@unrelated.de>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-12-21 10:19:40 -08:00
Jeff King
23a50a1fb1 Merge branch 'sz/maint-curl-multi-timeout'
Sometimes curl_multi_timeout() function suggested a wrong timeout
value when there is no file descriptors to wait on and the http
transport ended up sleeping for minutes in select(2) system call.
Detect this and reduce the wait timeout in such a case.

* sz/maint-curl-multi-timeout:
  Fix potential hang in https handshake
2012-11-09 12:50:56 -05:00
Jeff King
58f3f9893d Merge branch 'jk/maint-http-init-not-in-result-handler'
Further clean-up to the http codepath that picks up results after
cURL library is done with one request slot.

* jk/maint-http-init-not-in-result-handler:
  http: do not set up curl auth after a 401
  remote-curl: do not call run_slot repeatedly
2012-10-29 04:13:09 -04:00
Stefan Zager
7202b81ffc Fix potential hang in https handshake
It has been observed that curl_multi_timeout may return a very long
timeout value (e.g., 294 seconds and some usec) just before
curl_multi_fdset returns no file descriptors for reading.  The
upshot is that select() will hang for a long time -- long enough for
an https handshake to be dropped.  The observed behavior is that
the git command will hang at the terminal and never transfer any
data.

This patch is a workaround for a probable bug in libcurl.  The bug
only seems to manifest around a very specific set of circumstances:

- curl version (from curl/curlver.h):

 #define LIBCURL_VERSION_NUM 0x071307

- git-remote-https running on an ubuntu-lucid VM.
- Connecting through squid proxy running on another VM.

Interestingly, the problem doesn't manifest if a host connects
through squid proxy running on localhost; only if the proxy is on
a separate VM (not sure if the squid host needs to be on a separate
physical machine).  That would seem to suggest that this issue
is timing-sensitive.

This patch is more or less in line with a recommendation in the
curl docs about how to behave when curl_multi_fdset doesn't return
any file descriptors:

http://curl.haxx.se/libcurl/c/curl_multi_fdset.html

Signed-off-by: Stefan Zager <szager@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-10-19 14:15:17 -07:00
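
A sketch of the kind of workaround the curl docs suggest: when
curl_multi_fdset() reports no descriptors, do not trust
curl_multi_timeout() and cap the select() wait instead (the 50ms cap and
the helper name are illustrative):

    #include <sys/select.h>
    #include <curl/curl.h>

    static void wait_on_curl(CURLM *multi)
    {
            fd_set r, w, e;
            int maxfd = -1;
            long timeout_ms = -1;
            struct timeval tv;

            FD_ZERO(&r); FD_ZERO(&w); FD_ZERO(&e);
            curl_multi_fdset(multi, &r, &w, &e, &maxfd);
            curl_multi_timeout(multi, &timeout_ms);

            if (maxfd < 0 && (timeout_ms < 0 || timeout_ms > 50))
                    timeout_ms = 50;  /* nothing to watch: poll again soon */
            if (timeout_ms < 0)
                    timeout_ms = 1000;

            tv.tv_sec = timeout_ms / 1000;
            tv.tv_usec = (timeout_ms % 1000) * 1000;
            select(maxfd + 1, &r, &w, &e, &tv);
    }
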
Junio C Hamano
e98fa647aa Merge branch 'jk/maint-http-half-auth-push' into maint
* jk/maint-http-half-auth-push:
  http: fix segfault in handle_curl_result
2012-10-17 10:29:24 -07:00
Junio C Hamano
053a08f5bb Merge branch 'jk/maint-http-half-auth-push'
Fixes a regression in maint-1.7.11 (v1.7.11.7), maint (v1.7.12.1)
and master (v1.8.0-rc0).

* jk/maint-http-half-auth-push:
  http: fix segfault in handle_curl_result
2012-10-16 11:44:37 -07:00
Jeff King
1960897ebc http: do not set up curl auth after a 401
When we get an http 401, we prompt for credentials and put
them in our global credential struct. We also feed them to
the curl handle that produced the 401, with the intent that
they will be used on a retry.

When the code was originally introduced in commit 42653c0,
this was a necessary step. However, since dfa1725, we always
feed our global credential into every curl handle when we
initialize the slot with get_active_slot. So every further
request already feeds the credential to curl.

Moreover, accessing the slot here is somewhat dubious. After
the slot has produced a response, we don't actually control
it any more.  If we are using curl_multi, it may even have
been re-initialized to handle a different request.

It just so happens that we will reuse the curl handle within
the slot in such a case, and that because we only keep one
global credential, it will be the one we want.  So the
current code is not buggy, but it is misleading.

By cleaning it up, we can remove the slot argument entirely
from handle_curl_result, making it much more obvious that
slots should not be accessed after they are marked as
finished.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-10-12 09:45:15 -07:00
Jeff King
188923f0d1 http: fix segfault in handle_curl_result
When we create an http active_request_slot, we can set its
"results" pointer back to local storage. The http code will
fill in the details of how the request went, and we can
access those details even after the slot has been cleaned
up.

Commit 8809703 (http: factor out http error code handling)
switched us from accessing our local results struct directly
to accessing it via the "results" pointer of the slot. That
means we're accessing the slot after it has been marked as
finished, defeating the whole purpose of keeping the results
storage separate.

Most of the time this doesn't matter, as finishing the slot
does not actually clean up the pointer. However, when using
curl's multi interface with the dumb-http revision walker,
we might actually start a new request before handing control
back to the original caller. In that case, we may reuse the
slot, zeroing its results pointer, and leading the original
caller to segfault while looking for its results inside the
slot.

Instead, we need to pass a pointer to our local results
storage to the handle_curl_result function, rather than
relying on the pointer in the slot struct. This matches what
the original code did before the refactoring (which did not
use a separate function, and therefore just accessed the
results struct directly).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-10-12 09:42:31 -07:00
Shawn O. Pearce
aa90b9697f Enable info/refs gzip decompression in HTTP client
Some HTTP servers try to use gzip compression on the /info/refs
request to save transfer bandwidth. Repositories with many tags
may find the /info/refs request can be gzipped to be 50% of the
original size due to the few but often repeated bytes used (hex
SHA-1 and commonly digits in tag names).

For most HTTP requests enable "Accept-Encoding: gzip" ensuring
the /info/refs payload can use this encoding format.

Only request gzip encoding from servers. Although deflate is
supported by libcurl, most servers have standardized on gzip
encoding for compression as that is what most browsers support.
Asking for deflate increases request sizes by a few bytes, but is
unlikely to ever be used by a server.

Disable the Accept-Encoding header on probe RPCs as response bodies
are supposed to be exactly 4 bytes long, "0000". The HTTP headers
requesting and indicating compression use more space than the data
transferred in the body.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-09-20 10:26:50 -07:00
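
A sketch of the libcurl side (CURLOPT_ACCEPT_ENCODING is the modern
spelling; older curl calls it CURLOPT_ENCODING):

    #include <curl/curl.h>

    static void request_gzip(CURL *c)
    {
            /* ask only for gzip; "" would mean "anything curl supports" */
            curl_easy_setopt(c, CURLOPT_ACCEPT_ENCODING, "gzip");
    }
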
Junio C Hamano
3503e9ab32 Merge branch 'maint-1.7.11' into maint 2012-09-12 14:08:05 -07:00
Junio C Hamano
7d9483c299 Merge branch 'jk/maint-http-half-auth-push' into maint-1.7.11
Pushing to smart HTTP server with recent Git fails without having
the username in the URL to force authentication, if the server is
configured to allow GET anonymously, while requiring authentication
for POST.

* jk/maint-http-half-auth-push:
  http: prompt for credentials on failed POST
  http: factor out http error code handling
  t: test http access to "half-auth" repositories
  t: test basic smart-http authentication
  t/lib-httpd: recognize */smart/* repos as smart-http
  t/lib-httpd: only route auth/dumb to dumb repos
  t5550: factor out http auth setup
  t5550: put auth-required repo in auth/dumb
2012-09-12 13:58:23 -07:00
Jeff King
8809703072 http: factor out http error code handling
Most of our http requests go through the http_request()
interface, which does some nice post-processing on the
results. In particular, it handles prompting for missing
credentials as well as approving and rejecting valid or
invalid credentials. Unfortunately, it only handles GET
requests. Making it handle POSTs would be quite complex, so
let's pull result handling code into its own function so
that it can be reused from the POST code paths.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-08-27 10:49:09 -07:00
Joachim Schmitz
4246b0bd90 http.c: don't use curl_easy_strerror prior to curl-7.12.0
Reverts be22d92 (http: avoid empty error messages for some curl
errors, 2011-09-05) on platforms with older versions of libcURL
where the function is not available.

Signed-off-by: Joachim Schmitz <jojo@schmitz-digital.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-08-23 14:23:18 -07:00
Jeff King
745c7c8e62 http: get default user-agent from git_user_agent
This means we will respect the GIT_USER_AGENT build-time
configuration and run-time environment variable.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-06-03 13:11:54 -07:00
Pete Wyckoff
82247e9bd5 remove superfluous newlines in error messages
The error handling routines add a newline.  Remove
the duplicate ones in error messages.

Signed-off-by: Pete Wyckoff <pw@padd.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-04-30 15:45:51 -07:00
Jeff King
6f4c347ca1 http: use newer curl options for setting credentials
We give the username and password to curl by sticking them
in a buffer of the form "user:pass" and handing the result
to CURLOPT_USERPWD. Since curl 7.19.1, there is a split
mechanism, where you can specify each element individually.

This has the advantage that a username can contain a ":"
character. It also is less code for us, since we can hand
our strings over to curl directly. And since curl 7.17.0 and
higher promise to copy the strings for us, we we don't even
have to worry about memory ownership issues.

Unfortunately, we have to keep the ugly code for old curl
around, but as it is now nicely #if'd out, we can easily get
rid of it when we decide that 7.19.1 is "old enough".

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-04-14 16:04:25 -07:00
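
A minimal sketch of the two code paths described above; the cred struct
and function name are illustrative, not git's actual code, and the
version check assumes the 7.19.1 cutoff mentioned in the message.

    #include <stdio.h>
    #include <curl/curl.h>

    struct cred {
        const char *username;
        const char *password;
    };

    static void set_curl_credentials(CURL *curl, const struct cred *c)
    {
    #if LIBCURL_VERSION_NUM >= 0x071301
        /* curl >= 7.19.1: pass each element; curl copies the strings */
        curl_easy_setopt(curl, CURLOPT_USERNAME, c->username);
        curl_easy_setopt(curl, CURLOPT_PASSWORD, c->password);
    #else
        /* older curl: build "user:pass" ourselves and keep it alive */
        static char userpass[1024];
        snprintf(userpass, sizeof(userpass), "%s:%s",
                 c->username, c->password);
        curl_easy_setopt(curl, CURLOPT_USERPWD, userpass);
    #endif
    }
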
Jeff King
aa0834a04e http: clean up leak in init_curl_http_auth
When we have a credential to give to curl, we must copy it
into a "user:pass" buffer and then hand the buffer to curl.
Old versions of curl did not copy the buffer, and we were
expected to keep it valid. Newer versions of curl will copy
the buffer.

Our solution was to use a strbuf and detach it, giving
ownership of the resulting buffer to curl. However, this
meant that we were leaking the buffer on newer versions of
curl, since curl was just copying it and throwing away the
string we passed. Furthermore, when we replaced a
credential (e.g., because our original one was rejected), we
were also leaking on both old and new versions of curl.

This got even worse in the last patch, which started
replacing the credential (and thus leaking) on every http
request.

Instead, let's use a static buffer to make the ownership
more clear and less leaky.  We already keep a static "struct
credential", so we are only handling a single credential at
a time, anyway.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-04-14 16:04:24 -07:00
Jeff King
dfa1725a3e fix http auth with multiple curl handles
HTTP authentication is currently handled by get_refs and fetch_ref, but
not by fetch_object, fetch_pack or fetch_alternates. In the
single-threaded case, this is not an issue, since get_refs is always
called first. It recognizes the 401 and prompts the user for
credentials, which will then be used subsequently.

If the curl multi interface is used, however, only the multi handle used
by get_refs will have credentials configured. Requests made by other
handles fail with an authentication error.

Fix this by setting CURLOPT_USERPWD whenever a slot is requested.

Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-04-10 09:12:13 -07:00
Jim Meyering
a7793a7491 correct spelling: an URL -> a URL
Signed-off-by: Jim Meyering <meyering@redhat.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-03-28 08:47:23 -07:00
Nelson Benitez Leon
dd6139971a http: support proxies that require authentication
When the proxy server specified by the http.proxy configuration or the
http_proxy environment variable requires authentication, git failed to
connect to the proxy, because we did not configure the cURL handle with
CURLOPT_PROXYAUTH.

When a proxy is in use, and you tell git that the proxy requires
authentication by including a username in the http.proxy configuration, an
extra request needs to be made to the proxy to find out what
authentication method it supports, as this patch uses CURLAUTH_ANY to let
the library pick the most secure method supported by the proxy server.

The extra round-trip adds latency, but relieves the user of the
burden of configuring a specific authentication method.  If it becomes a
problem, a later patch could add a configuration option to specify what
method to use, but let's start simple for the time being.

Signed-off-by: Nelson Benitez Leon <nbenitezl@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-03-02 14:40:14 -08:00
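
A hedged sketch of the mechanism (setup_proxy and proxy_url are
illustrative names, not git's):

    #include <curl/curl.h>

    /*
     * Point curl at the configured proxy and let it negotiate the
     * authentication scheme.  CURLAUTH_ANY makes curl probe the proxy
     * (the extra round-trip mentioned above) and then pick the most
     * secure scheme the proxy offers.
     */
    static void setup_proxy(CURL *curl, const char *proxy_url)
    {
        if (proxy_url && *proxy_url) {
            curl_easy_setopt(curl, CURLOPT_PROXY, proxy_url);
            curl_easy_setopt(curl, CURLOPT_PROXYAUTH, CURLAUTH_ANY);
        }
    }
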
Junio C Hamano
1d3a035d6d Merge branch 'jk/maint-push-over-dav'
* jk/maint-push-over-dav:
  http-push: enable "proactive auth"
  t5540: test DAV push with authentication

Conflicts:
	http.c
2011-12-19 16:05:59 -08:00
Junio C Hamano
367d20ec6b Merge branch 'jk/credentials'
* jk/credentials:
  t: add test harness for external credential helpers
  credentials: add "store" helper
  strbuf: add strbuf_add*_urlencode
  Makefile: unix sockets may not available on some platforms
  credentials: add "cache" helper
  docs: end-user documentation for the credential subsystem
  credential: make relevance of http path configurable
  credential: add credential.*.username
  credential: apply helper config
  http: use credential API to get passwords
  credential: add function for parsing url components
  introduce credentials API
  t5550: fix typo
  test-lib: add test_config_global variant

Conflicts:
	strbuf.c
2011-12-19 16:05:16 -08:00
Jeff King
a4ddbc33d7 http-push: enable "proactive auth"
Before commit 986bbc08, git was proactive about asking for
http passwords. It assumed that if you had a username in
your URL, you would also want a password, and asked for it
before making any http requests.

However, it was unnecessary, since the http fetching code had
learned to recognize an HTTP 401 and prompt the user then.
Furthermore, the proactive prompt could interfere with the use
of .netrc (see 986bbc08 for details).

Unfortunately, the http push-over-DAV code never learned to
recognize HTTP 401, and so was broken by this change. This
patch does a quick fix of re-enabling the "proactive auth"
strategy only for http-push, leaving the dumb http fetch and
smart-http as-is.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-12-13 16:34:44 -08:00
Jeff King
148bb6a7b4 http: use credential API to get passwords
This patch converts the http code to use the new credential
API, both for http authentication as well as for getting
certificate passwords.

Most of the code change is simply variable naming (the
passwords are now contained inside the credential struct)
or deletion of obsolete code (the credential code handles
URL parsing and prompting for us).

The behavior should be the same, with one exception: the
credential code will prompt with a description based on the
credential components. Therefore, the old prompt of:

  Username for 'example.com':
  Password for 'example.com':

now looks like:

  Username for 'https://example.com/repo.git':
  Password for 'https://user@example.com/repo.git':

Note that we include more information in each line,
specifically:

  1. We now include the protocol. While more noisy, this is
     an important part of knowing what you are accessing
     (especially if you care about http vs https).

  2. We include the username in the password prompt. This is
     not a big deal when you have just been prompted for it,
     but the username may also come from the remote's URL
     (and after future patches, from configuration or
     credential helpers).  In that case, it's a nice
     reminder of the user for which you're giving the
     password.

  3. We include the path component of the URL. In many
     cases, the user won't care about this and it's simply
     noise (i.e., they'll use the same credential for a
     whole site). However, that is part of a larger
     question, which is whether path components should be
     part of credential context, both for prompting and for
     lookup by storage helpers. That issue will be addressed
     as a whole in a future patch.

Similarly, for unlocking certificates, we used to say:

  Certificate Password for 'example.com':

and we now say:

  Password for 'cert:///path/to/certificate':

Showing the path to the client certificate makes more sense,
as that is what you are unlocking, not "example.com".

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-12-11 23:16:24 -08:00
Junio C Hamano
c4c9a63b54 Merge branch 'mf/curl-select-fdset'
* mf/curl-select-fdset:
  http: drop "local" member from request struct
  http.c: Rely on select instead of tracking whether data was received
  http.c: Use timeout suggested by curl instead of fixed 50ms timeout
  http.c: Use curl_multi_fdset to select on curl fds instead of just sleeping
2011-12-05 15:10:28 -08:00
Ramkumar Ramachandra
620771c83e http: remove unused function hex()
Signed-off-by: Ramkumar Ramachandra <artagnon@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-11-15 16:08:48 -08:00
Jeff King
093c44a360 http: drop "local" member from request struct
This is a FILE pointer in the case that we are sending our
output to a file. We originally used it to run ftell() to
determine whether data had been written to our file during
our last call to curl. However, as of the last patch, we no
longer care about that flag anymore. All uses of this struct
member are now just book-keeping that can go away.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-11-04 12:05:01 -07:00
Mika Fischer
df26c47127 http.c: Rely on select instead of tracking whether data was received
Since now select is used with the file descriptors of the http connections,
tracking whether data was received recently (and trying to read more in
that case) is no longer necessary. Instead, always call select and rely on
it to return as soon as new data can be read.

Signed-off-by: Mika Fischer <mika.fischer@zoopnet.de>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-11-04 10:47:13 -07:00
Mika Fischer
eb56c82163 http.c: Use timeout suggested by curl instead of fixed 50ms timeout
Recent versions of curl can suggest a period of time the library user
should sleep and try again, when curl is blocked on reading or writing
(or connecting). Use this timeout instead of always sleeping for 50ms.

Signed-off-by: Mika Fischer <mika.fischer@zoopnet.de>
Helped-by: Daniel Stenberg <daniel@haxx.se>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-11-04 10:46:56 -07:00
Mika Fischer
6f9dd67ffe http.c: Use curl_multi_fdset to select on curl fds instead of just sleeping
Instead of sleeping unconditionally for a 50ms, when no data can be read
from the http connection(s), use curl_multi_fdset() to obtain the actual
file descriptors of the open connections and use them in the select call.
This way, the 50ms sleep is interrupted when new data arrives.

Signed-off-by: Mika Fischer <mika.fischer@zoopnet.de>
Helped-by: Daniel Stenberg <daniel@haxx.se>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-11-04 10:46:25 -07:00
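
A condensed sketch of the loop body the two commits above describe,
assuming a CURLM handle; error handling is omitted.

    #include <sys/select.h>
    #include <curl/curl.h>

    /* Wait on the actual curl file descriptors, for at most the time
     * curl suggests, falling back to the old 50ms when it has no hint. */
    static void wait_for_curl(CURLM *multi)
    {
        fd_set readfds, writefds, excfds;
        int maxfd = -1;
        long timeout_ms = -1;
        struct timeval tv;

        FD_ZERO(&readfds);
        FD_ZERO(&writefds);
        FD_ZERO(&excfds);
        curl_multi_fdset(multi, &readfds, &writefds, &excfds, &maxfd);
        curl_multi_timeout(multi, &timeout_ms);

        if (timeout_ms < 0)
            timeout_ms = 50;
        tv.tv_sec = timeout_ms / 1000;
        tv.tv_usec = (timeout_ms % 1000) * 1000;

        /* maxfd may be -1 when curl has nothing to wait on yet;
         * select() then simply sleeps for the timeout. */
        select(maxfd + 1, &readfds, &writefds, &excfds, &tv);
    }
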
Stefan Naewe
986bbc0842 http: don't always prompt for password
When a username is already specified at the beginning of any HTTP
transaction (e.g. "git push https://user@hosting.example.com/project.git"
or "git ls-remote https://user@hosting.example.com/project.git"), the code
interactively asks for a password before calling into the libcurl library.
It is very likely that the reason the user included the username in the
URL is that the user knows it would require authentication to
access the resource. Asking for the password upfront saves one
roundtrip: getting a 401 response, asking for the password and then
retrying the request. This is a reasonable optimization.

HOWEVER.

This is done even when $HOME/.netrc might have a corresponding entry to
access the site, or the site does not require authentication to access the
resource after all. But neither condition can be determined until we call
into the libcurl library (we do not read and parse $HOME/.netrc ourselves). In
these cases, the user is forced to respond to the password prompt, only to
give a password that is not used in the HTTP transaction. If the password
is in $HOME/.netrc, an empty input would later let the libcurl layer
pick up the password from there, and if the resource does not require
authentication, any input would be taken and then discarded without
getting used. It is wasteful to ask the end user for this unused
information.

Reduce the confusion by not trying to optimize for this case and always
incurring the round-trip penalty. An alternative might be to document this
and keep the round-trip optimization as-is.

Signed-off-by: Stefan Naewe <stefan.naewe@gmail.com>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-11-04 09:47:18 -07:00
Junio C Hamano
963838402a Merge branch 'jk/http-auth'
* jk/http-auth:
  http_init: accept separate URL parameter
  http: use hostname in credential description
  http: retry authentication failures for all http requests
  remote-curl: don't retry auth failures with dumb protocol
  improve httpd auth tests
  url: decode buffers that are not NUL-terminated
2011-10-17 21:37:15 -07:00
Jeff King
deba49377b http_init: accept separate URL parameter
The http_init function takes a "struct remote". Part of its
initialization procedure is to look at the remote's url and
grab some auth-related parameters. However, using the url
included in the remote is:

  - wrong; the remote-curl helper may have a separate,
    unrelated URL (e.g., from remote.*.pushurl). Looking at
    the remote's configured url is incorrect.

  - incomplete; http-fetch doesn't have a remote, so passes
    NULL. So http_init never gets to see the URL we are
    actually going to use.

  - cumbersome; http-push has a similar problem to
    http-fetch, but actually builds a fake remote just to
    pass in the URL.

Instead, let's just add a separate URL parameter to
http_init, and all three callsites can pass in the
appropriate information.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-10-15 21:18:36 -07:00
Michael J Gruber
070b4dd589 http: use hostname in credential description
Until now, a request for an http password looked like:

  Username:
  Password:

Now it will look like:

  Username for 'example.com':
  Password for 'example.com':

Picked-from: Jeff King <peff@peff.net>
Signed-off-by: Michael J Gruber <git@drmicha.warpmail.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-10-15 21:18:20 -07:00
Junio C Hamano
9488c18923 Merge branch 'jn/maint-http-error-message'
* jn/maint-http-error-message:
  http: avoid empty error messages for some curl errors
  http: remove extra newline in error message
2011-10-10 15:56:17 -07:00
Jonathan Nieder
be22d92eac http: avoid empty error messages for some curl errors
When asked to fetch over SSL without a valid
/etc/ssl/certs/ca-certificates.crt file, "git fetch" writes

	error:  while accessing https://github.com/torvalds/linux.git/info/refs

which is a little disconcerting.  Better to fall back to
curl_easy_strerror(result) when the error string is empty, like the
curl utility does:

	error: Problem with the SSL CA cert (path? access rights?) while
	accessing https://github.com/torvalds/linux.git/info/refs

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-09-06 15:49:23 -07:00
Jonathan Nieder
8abc508222 http: remove extra newline in error message
There is no need for a blank line between the detailed error message
and the later "fatal: HTTP request failed" notice.  Keep the newline
written by error() itself and eliminate the extra one.

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-09-06 15:48:49 -07:00
Junio C Hamano
0e9b12f874 Merge branch 'rc/maint-http-wrong-free'
* rc/maint-http-wrong-free:
  Makefile: some changes for http-related flag documentation
  http.c: fix an invalid free()

Conflicts:
	Makefile
2011-08-11 11:03:13 -07:00
Tay Ray Chuan
ec99c9a89a http.c: fix an invalid free()
Remove a free() on the static buffer returned by sha1_file_name().

While we're at it, replace xmalloc() calls on the structs
http_(object|pack)_request with xcalloc() so that pointers in the
structs get initialized to NULL. That way, free()'s are safe - for
example, a free() on the url string member when aborting.

This fixes an invalid free().

Reported-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-08-03 11:23:09 -07:00
Jeff King
8d677edc4f http: retry authentication failures for all http requests
Commit 42653c0 (Prompt for a username when an HTTP request
401s, 2010-04-01) changed http_get_strbuf to prompt for
credentials when we receive a 401, but didn't touch
http_get_file. The latter is called only for dumb http;
while it's usually the case that people don't use
authentication on top of dumb http, there is no reason not
to allow both types of requests to use this feature.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-07-20 11:38:35 -07:00
Jeff King
66c8448543 url: decode buffers that are not NUL-terminated
The url_decode function needs only minor tweaks to handle
arbitrary buffers. Let's do those tweaks, which cleans up an
unreadable mess of temporary strings in http.c.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-07-20 11:38:34 -07:00
Duncan Brown
bcfb95dde4 http: pass http.cookiefile using CURLOPT_COOKIEFILE
If the config option http.cookiefile is set, pass this file to libCURL using
the CURLOPT_COOKIEFILE option. This is similar to calling curl with the -b
option.  This allows git http authorization with authentication mechanisms
that use cookies, such as SAML Enhanced Client or Proxy (ECP) used by
Shibboleth.

To use SAML/ECP, the user needs to request a session cookie with their own ECP
code. See for example:

<https://wiki.shibboleth.net/confluence/display/SHIB2/ECP>

Once the cookie file has been created, it can be passed to git with, e.g.

git config --global http.cookiefile "/home/dbrown/.curlcookies"

libCURL will then pass the appropriate session cookies to the git http server.

Signed-off-by: Duncan Brown <duncan.brown@ligo.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-06-03 09:29:19 -07:00
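
A minimal sketch of what the option amounts to on the libcurl side;
http_cookie_file stands in for the parsed http.cookiefile value.

    #include <curl/curl.h>

    /* Hand the configured cookie file to libcurl, roughly what
     * "curl -b <file>" does on the command line. */
    static void set_cookie_file(CURL *curl, const char *http_cookie_file)
    {
        if (http_cookie_file && *http_cookie_file)
            curl_easy_setopt(curl, CURLOPT_COOKIEFILE,
                             http_cookie_file);
    }
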
Junio C Hamano
419272d87b Merge branch 'sp/maint-clear-postfields' into maint
* sp/maint-clear-postfields:
  http: clear POSTFIELDS when initializing a slot
2011-05-04 14:58:56 -07:00
Dan McGee
a04ff3ec32 http: make curl callbacks match contracts from curl header
Yes, these don't match perfectly with the void* first parameter of the
fread/fwrite in the standard library, but they do match the curl
expected method signature. This is needed when a refactor passes a
curl_write_callback around, which would otherwise give incorrect
parameter warnings.

Signed-off-by: Dan McGee <dpmcgee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-05-04 13:30:28 -07:00
Junio C Hamano
11c3e2b7bd Merge branch 'sp/maint-clear-postfields'
* sp/maint-clear-postfields:
  http: clear POSTFIELDS when initializing a slot
2011-04-28 14:10:51 -07:00
Junio C Hamano
1e41827d2d http: clear POSTFIELDS when initializing a slot
After posting a short request using CURLOPT_POSTFIELDS, if the slot
is reused for posting a large payload, the slot ends up having both
POSTFIELDS (which now points at random garbage) and READFUNCTION,
in which case the curl library tries to use the stale POSTFIELDS.

Clear it as part of the general slot initialization in get_active_slot().

Heavylifting-by: Shawn Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Acked-by: Shawn Pearce <spearce@spearce.org>
2011-04-26 10:44:33 -07:00
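
A sketch of the kind of reset involved, applied to a bare CURL handle
rather than git's slot structure:

    #include <curl/curl.h>

    /* Clear per-request state when a handle is reused, so a stale
     * CURLOPT_POSTFIELDS pointer from an earlier short POST cannot be
     * picked up by a later request that streams its body through
     * CURLOPT_READFUNCTION instead. */
    static void reset_request_options(CURL *curl)
    {
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, NULL);
        curl_easy_setopt(curl, CURLOPT_READFUNCTION, NULL);
        curl_easy_setopt(curl, CURLOPT_READDATA, NULL);
    }
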
Junio C Hamano
a0078dee90 Merge branch 'tc/http-urls-ends-with-slash'
* tc/http-urls-ends-with-slash:
  http-fetch: rework url handling
  http-push: add trailing slash at arg-parse time, instead of later on
  http-push: check path length before using it
  http-push: Normalise directory names when pushing to some WebDAV servers
  http-backend: use end_url_with_slash()
  url: add str wrapper for end_url_with_slash()
  shift end_url_with_slash() from http.[ch] to url.[ch]
  t5550-http-fetch: add test for http-fetch
  t5550-http-fetch: add missing '&&'
2010-12-12 21:49:52 -08:00
Junio C Hamano
16c06fcb39 Merge branch 'gc/http-with-non-ascii-username-url'
* gc/http-with-non-ascii-username-url:
  Fix username and password extraction from HTTP URLs
  t5550: test HTTP authentication and userinfo decoding

Conflicts:
	t/lib-httpd/apache.conf
2010-12-08 11:24:14 -08:00
Tay Ray Chuan
1966d9f37b shift end_url_with_slash() from http.[ch] to url.[ch]
This allows non-http/curl users to access it too (eg. http-backend.c).

Update include headers in end_url_with_slash() users too.

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-11-26 14:50:45 -08:00
Gabriel Corona
f39f72d8cf Fix username and password extraction from HTTP URLs
Change the authentication initialisation to percent-decode username
and password for HTTP URLs.

Signed-off-by: Gabriel Corona <gabriel.corona@enst-bretagne.fr>
Acked-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-11-17 13:07:43 -08:00
Tay Ray Chuan
311e2ea006 smart-http: Don't change POST to GET when following redirect
For a long time (29508e1 "Isolate shared HTTP request functionality", Fri
Nov 18 11:02:58 2005), we've followed HTTP redirects with
CURLOPT_FOLLOWLOCATION.

However, when the remote HTTP server returns a redirect the default
libcurl action is to change a POST request into a GET request while
following the redirect, but the remote http backend does not expect
that.

Fix this by telling libcurl to always keep the request as type POST with
CURLOPT_POSTREDIR.

For users of libcurl older than 7.19.1, use CURLOPT_POST301 instead,
which only follows 301s instead of both 301s and 302s.

Signed-off-by: Andreas Schwab <schwab@linux-m68k.org>
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-09-27 11:38:55 -07:00
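
The gist, as a hedged sketch with the version cutoffs named in the
message (CURLOPT_POSTREDIR from 7.19.1, CURLOPT_POST301 as the older
fallback):

    #include <curl/curl.h>

    static void keep_post_across_redirects(CURL *curl)
    {
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
    #if LIBCURL_VERSION_NUM >= 0x071301
        /* follow 301 and 302 without degrading the POST to a GET */
        curl_easy_setopt(curl, CURLOPT_POSTREDIR, CURL_REDIR_POST_ALL);
    #elif LIBCURL_VERSION_NUM >= 0x071101
        /* older curl: only 301 redirects can keep the POST */
        curl_easy_setopt(curl, CURLOPT_POST301, 1L);
    #endif
    }
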
Spencer E. Olson
b1d1058cc3 Allow HTTP user agent string to be modified.
Some firewalls restrict HTTP connections based on the clients user agent.  This
commit provides the user the ability to modify the user agent string via either
a new config option (http.useragent) or by an environment variable
(GIT_HTTP_USER_AGENT).

Relevant documentation is added to Documentation/config.txt.

Signed-off-by: Spencer E. Olson <olsonse@umich.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-08-11 14:07:31 -07:00
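
A small sketch of the lookup; the "git/unknown" fallback is purely
illustrative (the real default is git's version string), and the
precedence shown here is an assumption.

    #include <stdlib.h>
    #include <curl/curl.h>

    /* Prefer GIT_HTTP_USER_AGENT from the environment, then the
     * http.useragent configuration, then a built-in default. */
    static void set_user_agent(CURL *curl, const char *configured)
    {
        const char *agent = getenv("GIT_HTTP_USER_AGENT");

        if (!agent || !*agent)
            agent = configured;
        if (!agent || !*agent)
            agent = "git/unknown";
        curl_easy_setopt(curl, CURLOPT_USERAGENT, agent);
    }
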
Junio C Hamano
035bf8d7c4 Merge branch 'sp/maint-dumb-http-pack-reidx'
* sp/maint-dumb-http-pack-reidx:
  http.c::new_http_pack_request: do away with the temp variable filename
  http-fetch: Use temporary files for pack-*.idx until verified
  http-fetch: Use index-pack rather than verify-pack to check packs
  Allow parse_pack_index on temporary files
  Extract verify_pack_index for reuse from verify_pack
  Introduce close_pack_index to permit replacement
  http.c: Remove unnecessary strdup of sha1_to_hex result
  http.c: Don't store destination name in request structures
  http.c: Drop useless != NULL test in finish_http_pack_request
  http.c: Tiny refactoring of finish_http_pack_request
  t5550-http-fetch: Use subshell for repository operations
  http.c: Remove bad free of static block
2010-05-21 04:02:19 -07:00
Junio C Hamano
3cc9caadf7 Merge branch 'rc/maint-curl-helper'
* rc/maint-curl-helper:
  remote-curl: ensure that URLs have a trailing slash
  http: make end_url_with_slash() public
  t5541-http-push: add test for URLs with trailing slash

Conflicts:
	remote-curl.c
2010-05-08 22:37:24 -07:00
Tay Ray Chuan
90d0571357 http.c::new_http_pack_request: do away with the temp variable filename
Now that the temporary variable char *filename is only used in one
place, do away with it and just call sha1_pack_name() directly.

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Acked-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-04-19 17:57:50 -07:00
Shawn O. Pearce
750ef42516 http-fetch: Use temporary files for pack-*.idx until verified
Verify that a downloaded pack-*.idx file is consistent and valid
as an index file before we rename it into its final destination.
This prevents a corrupt index file from later being treated as a
usable file, confusing readers.

Check that we do not have the pack index file before invoking
fetch_pack_index(); that way, we can do without the has_pack_index()
check in fetch_pack_index().

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-04-19 17:56:29 -07:00
Shawn O. Pearce
fe72d420ab http-fetch: Use index-pack rather than verify-pack to check packs
To ensure we don't leave a corrupt pack file positioned as though
it were a valid pack file, run index-pack on the temporary pack
before we rename it to its final name.  If index-pack crashes out
when it discovers file corruption (e.g. GitHub's error HTML at the
end of the file), simply delete the temporary files to cleanup.

By waiting until the pack has been validated before we move it
to its final name, we eliminate a race condition where another
concurrent reader might try to access the pack at the same time
that we are still trying to verify it's not corrupt.

Switching from verify-pack to index-pack is a change in behavior,
but it should turn out better for users.  The index-pack algorithm
tries to minimize disk seeks, as well as the number of times any
given object is inflated, by organizing its work along delta chains.
The verify-pack logic does not attempt to do this, thrashing the
delta base cache and the filesystem cache.

By recreating the index file locally, we also can automatically
upgrade from a v1 pack table of contents to v2.  This makes the
CRC32 data available for use during later repacks, even if the
server didn't have them on hand.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Acked-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-04-19 17:56:20 -07:00
Shawn O. Pearce
7b64469a36 Allow parse_pack_index on temporary files
The easiest way to verify a pack index is to open it through the
standard parse_pack_index function, permitting the header check
to happen when the file is mapped.  However, the dumb HTTP client
needs to verify a pack index before it's moved into its proper file
name within the objects/pack directory, to prevent a corrupt index
from being made available.  So permit the caller to specify the
exact path of the index file.

For now we're still using the final destination name within the
sole call site in http.c, but eventually we will start to parse
the temporary path instead.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-04-19 17:56:17 -07:00
Shawn O. Pearce
162eb5f838 http.c: Remove unnecessary strdup of sha1_to_hex result
Most of the time the dumb HTTP transport is run without the verbose
flag set, so we only need the result of sha1_to_hex(sha1) once, to
construct the pack URL.  Don't bother with an unnecessary malloc,
copy, free chain of this buffer.

If verbose is set, we'll format the SHA-1 twice now.  But this
tiny extra CPU time spent is nothing compared to the slowdown that
is usually imposed by the verbose messages being sent to the tty,
and is entirely trivial compared to the latency involved with the
remote HTTP server sending something as big as a pack file.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Acked-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-04-19 17:55:59 -07:00
Shawn O. Pearce
0da8b2e7c8 http.c: Don't store destination name in request structures
The destination name within the object store is easily computed
on demand, reusing a static buffer held by sha1_file.c.  We don't
need to copy the entire path into the request structure for safe
keeping, when it can be easily reformatted after the download has
been completed.

This reduces the size of the per-request structure, and removes
yet another PATH_MAX based limit.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-04-17 13:55:46 -07:00
Shawn O. Pearce
3065274c58 http.c: Drop useless != NULL test in finish_http_pack_request
The test preq->packfile != NULL is always true.  If packfile was
actually NULL when entering this function the ftell() above would
crash out with a SIGSEGV, resulting in never reaching this point.

Simplify the code by just removing the conditional.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-04-17 13:55:46 -07:00
Shawn O. Pearce
021ab6f00b http.c: Tiny refactoring of finish_http_pack_request
Always remove the struct packed_git from the active list, even
if the rename of the temporary file fails.

While we are here, simplify the code a bit by using a common
local variable name ("p") to hold the relevant packed_git.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-04-17 13:55:45 -07:00
Shawn O. Pearce
03b6aeb274 http.c: Remove bad free of static block
The filename variable here is pointing to a block of memory that
was allocated by sha1_file.c and is also held in a static variable
scoped within the sha1_pack_name() function.  Doing a free() here is
returning that memory to the allocator while we might still try to
reuse it on a subsequent sha1_pack_name() invocation.  That's not
acceptable, so don't free it.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-04-17 13:55:45 -07:00
Tay Ray Chuan
eb9d47cf9b http: make end_url_with_slash() public
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-04-09 21:11:09 -07:00
Scott Chacon
42653c09c8 Prompt for a username when an HTTP request 401s
When an HTTP request returns a 401, Git will currently fail with a
confusing message saying that it got a 401, which is not very
descriptive.

Currently if a user wants to use Git over HTTP, they have to use one
URL with the username in the URL (e.g. "http://user@host.com/repo.git")
for write access and another without the username for unauthenticated
read access (unless they want to be prompted for the password each
time). However, since the HTTP servers will return a 401 if an action
requires authentication, we can prompt for username and password if we
see this, allowing us to use a single URL for both purposes.

This patch changes http_request to prompt for the username and password,
then return HTTP_REAUTH so http_get_strbuf can try again.  If it gets
a 401 even when a user/pass is supplied, http_request will now return
HTTP_NOAUTH which remote_curl can then use to display a more
intelligent error message that is less confusing.

Signed-off-by: Scott Chacon <schacon@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-04-01 23:24:59 -07:00
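
In outline, the flow looks roughly like the sketch below; http_request()
and the HTTP_* codes are simplified stand-ins for git's internals, not
the actual interface.

    #include <stdio.h>

    enum { HTTP_OK, HTTP_MISSING_TARGET, HTTP_ERROR, HTTP_REAUTH, HTTP_NOAUTH };

    /* Stand-in: assume it prompts for a username/password on a 401 and
     * returns HTTP_REAUTH so the caller can retry, or HTTP_NOAUTH when
     * credentials were already supplied and still rejected. */
    static int http_request(const char *url)
    {
        (void)url;
        return HTTP_OK;
    }

    static int http_get(const char *url)
    {
        int ret = http_request(url);      /* may be anonymous */

        if (ret == HTTP_REAUTH)
            ret = http_request(url);      /* retry with credentials */
        if (ret == HTTP_NOAUTH)
            fprintf(stderr, "Authentication failed for '%s'\n", url);
        return ret;
    }
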
Frank Li
f206063b4b git-core: Support retrieving passwords with GIT_ASKPASS
git tries to read a password from the terminal in imap-send and
when talking to a http server that requires authentication.

When a GUI is driving git, however, the end user is not paying
attention to the terminal (there may not even be a terminal).
The GUI would appear to hang forever.

Fix this problem by allowing a password-retrieving command
to be specified in GIT_ASKPASS.

Signed-off-by: Frank Li <lznuaa@gmail.com>
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-03-04 22:05:13 -08:00
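
A rough illustration of the mechanism using popen(); git's real code
goes through its run-command infrastructure, so treat this purely as a
sketch.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* If GIT_ASKPASS is set, run that command with the prompt as its
     * argument and read the password from its standard output; otherwise
     * the caller would fall back to reading from the terminal. */
    static const char *ask_password(const char *prompt)
    {
        static char buf[256];
        char cmd[512];
        FILE *fp;
        const char *askpass = getenv("GIT_ASKPASS");

        if (!askpass || !*askpass)
            return NULL;    /* caller falls back to the terminal */

        snprintf(cmd, sizeof(cmd), "%s '%s'", askpass, prompt);
        fp = popen(cmd, "r");
        if (!fp)
            return NULL;
        if (!fgets(buf, sizeof(buf), fp)) {
            pclose(fp);
            return NULL;
        }
        pclose(fp);
        buf[strcspn(buf, "\n")] = '\0';  /* strip the trailing newline */
        return buf;
    }
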
Junio C Hamano
83e41e2e61 http.c: mark file-local functions static
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-01-12 01:06:08 -08:00
Junio C Hamano
637afcf4e0 Merge branch 'tr/http-updates'
* tr/http-updates:
  Remove http.authAny
  Allow curl to rewind the RPC read buffer
  Add an option for using any HTTP authentication scheme, not only basic
  http: maintain curl sessions
2010-01-10 08:53:04 -08:00
Thiago Farina
bd757c1859 Use warning function instead of fprintf(stderr, "Warning: ...").
Signed-off-by: Thiago Farina <tfransosi@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-01-03 16:17:03 -08:00
Junio C Hamano
525ecd26c6 Remove http.authAny
Back when the feature to use different HTTP authentication methods was
originally written, it needed an extra HTTP request for everything when
the feature was in effect, because we didn't reuse curl sessions.

However, b8ac923 (Add an option for using any HTTP authentication scheme,
not only basic, 2009-11-27) builds on top of an updated codebase that does
reuse curl sessions; there is no need to manually avoid the extra overhead
by making this configurable anymore.

Acked-by: Martin Storsjo <martin@martin.st>
Acked-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-12-29 12:07:58 -08:00
Martin Storsjö
b8ac923010 Add an option for using any HTTP authentication scheme, not only basic
This adds the configuration option http.authAny (overridable with
the environment variable GIT_HTTP_AUTH_ANY), for instructing curl
to allow any HTTP authentication scheme, not only basic (which
sends the password in plaintext).

When this is enabled, curl has to do double requests most of the time,
in order to discover which HTTP authentication method to use, which
lowers the performance slightly. Therefore this isn't enabled by default.

One example of another authentication scheme to use is digest, which
doesn't send the password in plaintext, but uses a challenge-response
mechanism instead. Using digest authentication in practice requires
at least curl 7.18.1, due to bugs in the digest handling in earlier
versions of curl.

Signed-off-by: Martin Storsjö <martin@martin.st>
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-11-27 22:46:33 -08:00
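
What the option boils down to, sketched with an illustrative flag name:

    #include <curl/curl.h>

    /* http_auth_any stands in for the http.authAny / GIT_HTTP_AUTH_ANY
     * setting: let curl negotiate any scheme it supports (digest, etc.)
     * instead of always sending Basic with the password in plaintext. */
    static void set_http_auth(CURL *curl, int http_auth_any)
    {
        curl_easy_setopt(curl, CURLOPT_HTTPAUTH,
                         http_auth_any ? (long)CURLAUTH_ANY
                                       : (long)CURLAUTH_BASIC);
    }
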
Tay Ray Chuan
ad75ebe5b3 http: maintain curl sessions
Allow curl sessions to be kept alive (ie. not ended with
curl_easy_cleanup()) even after the request is completed, the number of
which is determined by the configuration setting http.minSessions.

Add a count for curl sessions, and update it, across slots, when
starting and ending curl sessions.

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-11-27 22:46:05 -08:00
Shawn O. Pearce
de1a2fdd38 Smart push over HTTP: client side
The git-remote-curl backend detects if the remote server supports
the git-receive-pack service, and if so, runs git-send-pack in a
pipe to dump the command and pack data as a single POST request.

The advertisements from the server that were obtained during the
discovery are passed into git-send-pack before the POST request
starts.  This permits git-send-pack to operate largely unmodified.

For smaller packs (those under 1 MiB) a HTTP/1.0 POST with a
Content-Length is used, permitting interaction with any server.
The 1 MiB limit is arbitrary, but is sufficient to fit most deltas
created by human authors against text sources with the occasional
small binary file (e.g. few KiB icon image).  The configuration
option http.postBuffer can be used to increase (or shrink) this
buffer if the default is not sufficient.

For larger packs which cannot be spooled entirely into the helper's
memory space (due to http.postBuffer being too small), the POST
request requires HTTP/1.1 and sets "Transfer-Encoding: chunked".
This permits the client to upload an unknown amount of data in one
HTTP transaction without needing to pregenerate the entire pack
file locally.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-11-04 17:58:15 -08:00
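
A condensed sketch of the two request shapes described above; the
function names, the read callback and the 1 MiB constant are
illustrative, not remote-curl's actual code.

    #include <string.h>
    #include <curl/curl.h>

    #define POST_BUFFER_LIMIT (1024 * 1024)   /* the arbitrary 1 MiB cutoff */

    /* Trivial stand-in read callback; a real one would feed pack data. */
    static size_t read_cb(char *ptr, size_t size, size_t nmemb, void *data)
    {
        (void)ptr; (void)size; (void)nmemb; (void)data;
        return 0;   /* returning 0 ends the chunked body */
    }

    static void setup_post(CURL *curl, const char *body, size_t len,
                           struct curl_slist **headers, void *stream)
    {
        if (len <= POST_BUFFER_LIMIT) {
            /* small pack: plain HTTP/1.0 POST with a Content-Length */
            curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);
            curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)len);
        } else {
            /* large pack: HTTP/1.1 chunked upload via a read callback */
            *headers = curl_slist_append(*headers,
                                         "Transfer-Encoding: chunked");
            curl_easy_setopt(curl, CURLOPT_POST, 1L);
            curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
            curl_easy_setopt(curl, CURLOPT_READDATA, stream);
        }
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, *headers);
    }
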
Junio C Hamano
2b621c1a3a Merge branch 'maint'
* maint:
  http.c: avoid freeing an uninitialized pointer
2009-09-14 14:48:27 -07:00
Junio C Hamano
b2025146d0 http.c: avoid freeing an uninitialized pointer
An earlier 59b8d38 (http.c: remove verification of remote packs) left
the variable "url" uninitialized; "goto cleanup" codepath can free it
which is not very nice.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-09-14 14:48:15 -07:00
Junio C Hamano
5b590d783a Merge branch 'maint'
* maint:
  GIT 1.6.4.3
  svn: properly escape arguments for authors-prog
  http.c: remove verification of remote packs
  grep: accept relative paths outside current working directory
  grep: fix exit status if external_grep() punts

Conflicts:
	GIT-VERSION-GEN
	RelNotes
2009-09-13 01:30:53 -07:00
Tay Ray Chuan
59b8d38f6e http.c: remove verification of remote packs
Make http.c::fetch_pack_index() no longer check for the remote pack
with a HEAD request before fetching the corresponding pack index file.

Not only does sending a HEAD request before we do a GET incur a
performance penalty, it does not offer any significant error-
prevention advantages (pack fetching in the *_http_pack_request()
methods is capable of handling any errors on its own).

This addresses an issue raised elsewhere:

  http://code.google.com/p/msysgit/issues/detail?id=323
  http://support.github.com/discussions/repos/957-cant-clone-over-http-or-git

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-09-11 01:45:36 -07:00
Junio C Hamano
42fa6df99f Merge branch 'maint'
* maint:
  http.c: set slot callback members to NULL when releasing object
2009-08-28 19:37:57 -07:00
Junio C Hamano
48ae73b114 Merge branch 'rc/maint-http-fix' into maint
* rc/maint-http-fix:
  http.c: don't assume that urls don't end with slash
2009-08-28 19:34:16 -07:00
Tay Ray Chuan
4b9fa0e359 http.c: set slot callback members to NULL when releasing object
Set the members callback_func and callback_data of freq->slot to NULL
when releasing a http_object_request. release_active_slot() is also
invoked on the slot to remove the curl handle associated with the slot
from the multi stack (CURLM *curlm in http.c).

These prevent the callback function and data from being used in http
methods (like http.c::finish_active_slot()) after a
http_object_request has been free'd.

Noticed by Ali Polatel, who later tested this patch to verify that it
fixes the problem he saw; Dscho helped to identify the problem spot.

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-08-28 19:24:43 -07:00
Junio C Hamano
9ebfda109e Merge branch 'rc/maint-http-fix'
* rc/maint-http-fix:
  http.c: don't assume that urls don't end with slash
2009-08-18 23:33:16 -07:00
Tay Ray Chuan
800324c3ad http.c: don't assume that urls don't end with slash
Make append_remote_object_url() (and by implication,
get_remote_object_url) use end_url_with_slash() to ensure that the url
ends with a slash.

Previously, they assumed that the url did not end with a slash and
as a result appended a slash, sometimes erroneously.

This fixes an issue introduced in 5424bc5 ("http*: add helper methods
for fetching objects (loose)"), where the append_remote_object_url()
implementation in http-push.c, which assumed that urls end with a
slash, was replaced by another one in http.c, which assumed urls did
not end with a slash.

The above issue was raised by Thomas Schlichter:

  http://marc.info/?l=git&m=125043105231327

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Tested-by: Thomas Schlichter <thomas.schlichter@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-08-18 13:59:44 -07:00
Jeff Lasslett
0c4f21e452 Check return value of ftruncate call in http.c
In new_http_object_request(), check ftruncate() call return value and
handle possible errors.

Signed-off-by: Jeff Lasslett <jeff.lasslett@gmail.com>
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-08-10 13:26:18 -07:00
Tay Ray Chuan
bb99190e27 http.c: replace usage of temporary variable for urls
Use preq->url in new_http_pack_request and freq->url in
new_http_object_request when calling curl_setopt(CURLOPT_URL), instead
of using an intermediate variable, 'url'.

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-08-10 13:26:11 -07:00
Tay Ray Chuan
5ae9ebfd58 http.c: free preq when aborting
Free preq in new_http_pack_request when aborting. preq was allocated
before jumping to the 'abort' label so this is safe.

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-08-10 13:25:25 -07:00
Junio C Hamano
c535d767f7 Merge branch 'ml/http'
* ml/http:
  http.c: add http.sslCertPasswordProtected option
  http.c: prompt for SSL client certificate password

Conflicts:
	http.c
2009-07-09 01:00:36 -07:00
Mark Lodato
754ae192a4 http.c: add http.sslCertPasswordProtected option
Add a configuration option, http.sslCertPasswordProtected, and associated
environment variable, GIT_SSL_CERT_PASSWORD_PROTECTED, to enable SSL client
certificate password prompt from within git.  If this option is false and
if the environment variable does not exist, git falls back to OpenSSL's
prompts (as in earlier versions of git).

The environment variable may only be used to enable, not to disable
git's password prompt.  This behavior mimics GIT_NO_VERIFY; the mere
existence of the variable is all that is checked.

Signed-off-by: Mark Lodato <lodatom@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-06-18 10:51:29 -07:00
Mark Lodato
30dd916348 http.c: prompt for SSL client certificate password
If an SSL client certificate is enabled (via http.sslcert or
GIT_SSL_CERT), prompt for the certificate password rather than
defaulting to OpenSSL's password prompt.  This causes the prompt to only
appear once each run.  Previously, OpenSSL prompted the user *many*
times, causing git to be unusable over HTTPS with client-side
certificates.

Note that the password is stored in memory in the clear while the
program is running.  This may be a security problem if git crashes and
core dumps.

The user is always prompted, even if the certificate is not encrypted.
This should be fine; unencrypted certificates are rare and a security
risk anyway.

Signed-off-by: Mark Lodato <lodatom@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-06-18 10:45:05 -07:00
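
A hedged sketch of handing an already-prompted-for passphrase to curl;
CURLOPT_KEYPASSWD is the modern option name (older curl spelled it
CURLOPT_SSLKEYPASSWD / CURLOPT_SSLCERTPASSWD), and the function and
parameter names are illustrative.

    #include <curl/curl.h>

    /* Configure a client certificate and give curl the passphrase once,
     * so OpenSSL's own repeated prompting never kicks in. */
    static void set_client_cert(CURL *curl, const char *cert_path,
                                const char *passphrase)
    {
        curl_easy_setopt(curl, CURLOPT_SSLCERT, cert_path);
        if (passphrase)
            curl_easy_setopt(curl, CURLOPT_KEYPASSWD, passphrase);
    }
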
Junio C Hamano
da4e4a65a2 Merge branch 'maint'
* maint:
  http.c: fix compiling with libcurl 7.9.2
  import-tars: support symlinks
  pull, rebase: simplify to use die()
2009-06-18 10:39:17 -07:00
Mark Lodato
ef52aafa0f http.c: fix compiling with libcurl 7.9.2
Change the minimum required libcurl version for the http.sslKey option
to 7.9.3.  Previously, preprocessor macros checked for >= 7.9.2, which
is incorrect because CURLOPT_SSLKEY was introduced in 7.9.3.  This now
allows git to compile with libcurl 7.9.2.

Signed-off-by: Mark Lodato <lodatom@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-06-18 10:10:30 -07:00
Tay Ray Chuan
5424bc557f http*: add helper methods for fetching objects (loose)
The code handling the fetching of loose objects in http-push.c and
http-walker.c have been refactored into new methods and a new struct
(http_object_request) in http.c. They are not meant to be invoked
elsewhere.

The new methods in http.c are
 - new_http_object_request
 - process_http_object_request
 - finish_http_object_request
 - abort_http_object_request
 - release_http_object_request

and the new struct is http_object_request.

RANGE_HEADER_SIZE and no_pragma_header are no longer made available
outside of http.c, since after the above changes, there are no other
instances of usage outside of http.c.

Remove members of the transfer_request struct in http-push.c and
http-walker.c, including filename, real_sha1 and zret, as they are
no longer used.

Move the methods append_remote_object_url() and get_remote_object_url()
from http-push.c to http.c. Additionally, get_remote_object_url() is no
longer defined only when USE_CURL_MULTI is defined, since
non-USE_CURL_MULTI code in http.c uses it (namely, in
new_http_object_request()).

Refactor code from http-push.c::start_fetch_loose() and
http-walker.c::start_object_fetch_request() that deals with the details
of coming up with the filename to store the retrieved object, resuming
a previously aborted request, and making a new curl request, into a new
function, new_http_object_request().

Refactor code from http-walker.c::process_object_request() into the
function, process_http_object_request().

Refactor code from http-push.c::finish_request() and
http-walker.c::finish_object_request() into a new function,
finish_http_object_request(). It returns the result of the
move_temp_to_file() invocation.

Add a function, release_http_object_request(), which cleans up object
request data. http-push.c and http-walker.c invoke this function
separately; http-push.c::release_request() and
http-walker.c::release_object_request() do not invoke this function.

Add a function, abort_http_object_request(), which unlink()s the object
file and invokes release_http_object_request(). Update
http-walker.c::abort_object_request() to use this.

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-06-06 11:03:11 -07:00
Tay Ray Chuan
2264dfa5c4 http*: add helper methods for fetching packs
The code handling the fetching of packs in http-push.c and
http-walker.c have been refactored into new methods and a new struct
(http_pack_request) in http.c. They are not meant to be invoked
elsewhere.

The new methods in http.c are
 - new_http_pack_request
 - finish_http_pack_request
 - release_http_pack_request

and the new struct is http_pack_request.

Add a function, new_http_pack_request(), that deals with the details of
coming up with the filename to store the retrieved packfile, resuming a
previously aborted request, and making a new curl request. Update
http-push.c::start_fetch_packed() and http-walker.c::fetch_pack() to
use this.

Add a function, finish_http_pack_request(), that deals with renaming
the pack, advancing the pack list, and installing the pack. Update
http-push.c::finish_request() and http-walker.c::fetch_pack to use
this.

Update release_request() in http-push.c and http-walker.c to invoke
release_http_pack_request() to clean up pack request helper data.

The local_stream member of the transfer_request struct in http-push.c
has been removed, as the packfile pointer will be managed in the struct
http_pack_request.

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-06-06 11:03:11 -07:00
Tay Ray Chuan
39dc52cf4f http: use new http API in fetch_index()
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-06-06 11:03:11 -07:00
Tay Ray Chuan
b8caac2b8a http*: add http_get_info_packs
http-push.c and http-walker.c no longer have to use fetch_index or
setup_index; they simply need to use http_get_info_packs, a new http
method, in their fetch_indices implementations.

Move fetch_index() and rename to fetch_pack_index() in http.c; this
method is not meant to be used outside of http.c. It invokes
end_url_with_slash with base_url; apart from that change, the code is
identical.

Move setup_index() and rename to fetch_and_setup_pack_index() in
http.c; this method is not meant to be used outside of http.c.

Do not immediately set ret to 0 in http-walker.c::fetch_indices();
instead do it in the HTTP_MISSING_TARGET case, to make it clear that
the HTTP_OK and HTTP_MISSING_TARGET cases both return 0.

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-06-06 11:03:11 -07:00
Mike Hommey
0d5896e1cc http.c::http_fetch_ref(): use the new http API
The error message ("Unable to start request") has been removed, since
the http API already prints it.

Signed-off-by: Mike Hommey <mh@glandium.org>
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-06-06 11:03:10 -07:00
Mike Hommey
e929cd20bb http.c: new functions for the http API
The new functions added are:
 - http_request() (internal function)
 - http_get_strbuf()
 - http_get_file()
 - http_error()

http_get_strbuf and http_get_file retrieve the contents of a URL into a
strbuf or an opened file handle, respectively.

http_error prints out an error message containing the URL and the curl error
(in curl_errorstr).

Signed-off-by: Mike Hommey <mh@glandium.org>
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-06-06 10:56:27 -07:00
Tay Ray Chuan
5ace994f33 http: create function end_url_with_slash
The logic to append a slash to the url if necessary in quote_ref_url
(added in 113106e "http.c: use strbuf API in quote_ref_url") has been
moved to a new function, end_url_with_slash.

The method takes a strbuf, the URL, and the path to be appended to the
URL. It first adds the URL to the strbuf. It then appends a slash
if the URL does not end with a slash.

The check on ref in quote_ref_url for a slash at the beginning has been
removed as a result of using end_url_with_slash. This check is not
needed, because slashes will be quoted anyway.

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-06-06 10:56:27 -07:00
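
The helper is small enough to sketch in full; this follows the
description above and assumes git's strbuf API, so it may differ in
detail from the actual code.

    #include "strbuf.h"   /* git's strbuf API */

    static void end_url_with_slash(struct strbuf *buf, const char *url)
    {
        strbuf_addstr(buf, url);
        if (buf->len && buf->buf[buf->len - 1] != '/')
            strbuf_addch(buf, '/');
    }
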
Tay Ray Chuan
e917674597 http*: move common variables and macros to http.[ch]
Move RANGE_HEADER_SIZE to http.h.

Create no_pragma_header, the curl header list containing the header
"Pragma:" in http.[ch]. It is allocated in http_init, and freed in
http_cleanup. This replaces the no_pragma_header in http-push.c, and
the no_pragma_header member in walker_data in http-walker.c.

Create http_is_verbose. It is to be used by methods in http.c, and is
modified at the entry points of http.c's users, namely http-push.c
(when parsing options) and http-walker.c (in get_http_walker).

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-06-06 10:56:27 -07:00
Martin Storsjö
3944ba0cb0 Allow curl to rewind the read buffers
When using multi-pass authentication methods, the curl library may
need to rewind the read buffers (depending on how much already has
been fed to the server) used for providing data to HTTP PUT, POST or
PROPFIND, and in order to allow the library to do so, we need to tell
it how by providing either an ioctl callback or a seek callback.

This patch adds an ioctl callback, which should be usable on older
curl versions (since 7.12.3) than the seek callback (introduced in
curl 7.18.0).

Some HTTP servers (such as Apache) give a 401 error reply immediately
after receiving the headers (so no data has been read from the read
buffers, and thus no rewinding is needed), but other servers (such
as Lighttpd) only reply after the whole request has been sent and
all data has been read from the read buffers, making rewinding necessary.

Signed-off-by: Martin Storsjo <martin@martin.st>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-04-02 13:04:07 -07:00
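
A sketch of the callback shape; the actual rewinding of whatever state
feeds the read callback is left as a comment, since it depends on the
caller.

    #include <curl/curl.h>

    /* Called by curl when it needs the request body replayed, e.g. after
     * a late 401 from a server such as Lighttpd. */
    static curlioerr ioctl_cb(CURL *handle, int cmd, void *clientp)
    {
        (void)handle;
        if (cmd == CURLIOCMD_RESTARTREAD) {
            /* reset the read position of the buffer in clientp */
            (void)clientp;
            return CURLIOE_OK;
        }
        return CURLIOE_UNKNOWNCMD;
    }

    static void enable_rewind(CURL *curl, void *buffer_state)
    {
        curl_easy_setopt(curl, CURLOPT_IOCTLFUNCTION, ioctl_cb);
        curl_easy_setopt(curl, CURLOPT_IOCTLDATA, buffer_state);
    }
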
Junio C Hamano
750d930500 http.c: CURLOPT_NETRC_OPTIONAL is not available in ancient versions of cURL
Besides, we have already called easy_setopt with the option before coming
to this function if it was available, so there is no need to repeat it
here.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-03-12 22:42:19 -07:00
Junio C Hamano
c33976cbc6 http authentication via prompts
Curl is designed not to ask for password when only username is given in
the URL, but has a way for application to feed a (username, password) pair
to it.  With this patch, you do not have to keep your password in
plaintext in your $HOME/.netrc file when talking with a password protected
URL with http://<username>@<host>/path/to/repository.git/ syntax.

The code handles only the http-walker side, not the push side.  At least,
not yet.  But interested parties can add support for it.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-03-10 22:35:31 -07:00
Junio C Hamano
7059cd99fc http_init(): Fix config file parsing
We honor the command line options, environment variables, variables in
repository configuration file, variables in user's global configuration
file, variables in the system configuration file, and then finally use
built-in default.  To implement this semantics, the code should:

 - start from built-in default values;

 - call git_config() with the configuration parser callback, which
   implements "later definition overrides earlier ones" logic
   (git_config() reads the system's, user's and then repository's
   configuration file in this order);

 - override the result from the above with environment variables if set;

 - override the result from the above with command line options.

The initialization code http_init() for http transfer got this wrong, and
implemented a "first one wins, ignoring the later ones" in http_options(),
to compensate this mistake, read environment variables before calling
git_config().  This is all wrong.

As a second class citizen, the http codepath hasn't been audited as
closely as other parts of the system, but we should try to bring sanity to
it, before inviting contributors to improve on it.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-03-10 22:31:29 -07:00
Junio C Hamano
4251ccbd80 http.c: style cleanups
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-03-09 18:47:29 -07:00
Tay Ray Chuan
113106e06c http.c: use strbuf API in quote_ref_url
In addition, quote_ref_url inserts a slash between the base URL and
remote ref path only if needed. Previously, this insertion wasn't
contingent on the lack of a separating slash.

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Acked-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-03-07 20:52:25 -08:00
Junio C Hamano
3a59bb22db Merge branch 'maint'
* maint:
  GIT 1.6.0.5
  "git diff <tree>{3,}": do not reverse order of arguments
  tag: delete TAG_EDITMSG only on successful tag
  gitweb: Make project specific override for 'grep' feature work
  http.c: use 'git_config_string' to get 'curl_http_proxy'
  fetch-pack: Avoid memcpy() with src==dst
2008-12-07 15:13:02 -08:00
Miklos Vajna
e4a80ecf40 http.c: use 'git_config_string' to get 'curl_http_proxy'
Signed-off-by: Miklos Vajna <vmiklos@frugalware.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-12-07 02:41:55 -08:00
Junio C Hamano
fb0863a528 Merge branch 'mh/maint-honor-no-ssl-verify'
* mh/maint-honor-no-ssl-verify:
  Don't verify host name in SSL certs when GIT_SSL_NO_VERIFY is set
2008-09-16 00:46:36 -07:00
Dotan Barak
e8eec71d6e Use xmalloc() and friends to catch allocation failures
Some places use the standard malloc/strdup without checking if the
allocation was successful; they should use xmalloc/xstrdup that
check the memory allocation result.

Signed-off-by: Dotan Barak <dotanba@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-09-09 16:28:05 -07:00
Junio C Hamano
a5ccc5979d Don't verify host name in SSL certs when GIT_SSL_NO_VERIFY is set
Originally from Mike Hommey; earlier we were disabling SSL_VERIFYPEER
but SSL_VERIFYHOST was in effect even when the user asked not to with
the environment variable.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-09-07 09:57:44 -07:00
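
In effect the fix amounts to disabling both checks together when the
variable is set; a minimal sketch:

    #include <stdlib.h>
    #include <curl/curl.h>

    /* GIT_SSL_NO_VERIFY must turn off host-name verification as well as
     * peer verification, which is what the fix above ensures. */
    static void apply_ssl_no_verify(CURL *curl)
    {
        if (getenv("GIT_SSL_NO_VERIFY")) {
            curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);
            curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 0L);
        }
    }
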
Brian Hetro
7ef8ea7035 http.c: Use 'git_config_string' to clean up SSL config.
Signed-off-by: Brian Hetro <whee@smaertness.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-07-05 17:42:46 -07:00
Junio C Hamano
f444e5286e Work around gcc warnings from curl headers
After master.k.org upgrade, I started seeing these warning messages:

    transport.c: In function 'get_refs_via_curl':
    transport.c:458: error: call to '_curl_easy_setopt_err_write_callback' declared with attribute warning: curl_easy_setopt expects a curl_write_callback argument for this option

It appears that the curl header wants to enforce the function signature
for callback function given to curl_easy_setopt() to be compatible with
that of (*curl_write_callback) or fwrite.  This patch seems to work the
issue around.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-07-04 00:37:40 -07:00
Mike Hommey
7c1a9e7901 Don't allocate too much memory in quote_ref_url
In c13b263, http_fetch_ref got "refs/" included in the ref passed to it,
which, incidentally, makes the allocation in quote_ref_url too big, now.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-06-14 12:53:09 -07:00
Johannes Schindelin
ef90d6d420 Provide git_config with a callback-data parameter
git_config() only had a function parameter, but no callback data
parameter.  This assumes that all callback functions only modify
global variables.

With this patch, every callback gets a void * parameter, and it is hoped
that this will help the libification effort.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-05-14 12:34:44 -07:00
Daniel Barkalow
be885d96fe Make ls-remote http://... list HEAD, like for git://...
This makes a struct ref able to represent a symref, and makes http.c
able to recognize one, and makes transport.c look for "HEAD" as a ref
in the list, and makes it dereference symrefs for the resulting ref,
if any.
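
Schematically (field and function names are simplified for illustration;
the real struct ref carries more than this):

    #include <string.h>

    /* A ref that may be a symref pointing at another ref. */
    struct ref {
        struct ref *next;
        const char *name;
        const char *symref;     /* e.g. "HEAD" -> "refs/heads/master" */
    };

    /* Find "HEAD" in the advertised list and follow it if it is a symref. */
    static struct ref *find_head(struct ref *list)
    {
        struct ref *r, *head = NULL;
        for (r = list; r; r = r->next)
            if (!strcmp(r->name, "HEAD"))
                head = r;
        if (head && head->symref)
            for (r = list; r; r = r->next)
                if (!strcmp(r->name, head->symref))
                    return r;
        return head;
    }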

Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-04-26 17:36:18 -07:00
Daniel Barkalow
c13b2633f4 Make walker.fetch_ref() take a struct ref.
This simplifies a few things and makes a few others slightly more
complicated, but, more importantly, it allows http_fetch_ref() to return
a symref once struct ref is able to represent one.

Incidentally makes the string that http_fetch_ref() gets include "refs/"
(if appropriate), because that's how the name field of struct ref works.
As far as I can tell, the usage in walker:interpret_target() wouldn't have
worked previously, if it ever would have been used, which it wouldn't
(since the fetch process uses the hash instead of the name of the ref
there).

Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-04-26 17:36:17 -07:00
Junio C Hamano
27b4070e40 Merge branch 'maint'
* maint:
  Fix 'git remote show' regression on empty repository in 1.5.4
  Fix incorrect wording in git-merge.txt.
  git-merge.sh: better handling of combined --squash,--no-ff,--no-commit options
  Fix random crashes in http_cleanup()
2008-03-04 00:34:39 -08:00
Mike Hommey
f23d1f7627 Fix random crashes in http_cleanup()
For some reason, http_cleanup was running all active slots, which could
lead to situations where a freed slot would be accessed in
fill_active_slots. OTOH, we are cleaning up, which means the caller
doesn't care about pending requests. Just forget about them instead
of running them.

Signed-off-by: Mike Hommey <mh@glandium.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-03-03 13:36:44 -08:00
Mike Hommey
9fc6440d78 Set proxy override with http_init()
In transport.c, the proxy setting (the one from the remote configuration)
was set through a curl_easy_setopt() call, while http.c already does the
same with the http.proxy setting.  We now just reuse that infrastructure
and make http_init() take the struct remote as an argument, so that it
can pick up the http_proxy setting from there, along with any other
property that may be added later.

At the same time, we make get_http_walker() take a struct remote argument
too, and pass it to http_init(), which makes remote defined proxy be used
for more than get_refs_via_curl().

We leave out http-fetch and http-push, which don't use remotes for the
moment, purposefully.
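
A sketch of the precedence this describes: a per-remote proxy, if set,
wins over the global http.proxy value (the function and parameter names
are invented for illustration):

    #include <curl/curl.h>

    static void init_curl_proxy(CURL *curl, const char *remote_proxy,
                                const char *config_proxy)
    {
        const char *proxy = remote_proxy ? remote_proxy : config_proxy;
        if (proxy)
            curl_easy_setopt(curl, CURLOPT_PROXY, proxy);
    }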

Signed-off-by: Mike Hommey <mh@glandium.org>
Acked-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-02-27 15:37:57 -08:00
Junio C Hamano
111dd25f3c http.c: guard config parser from value=NULL
http.sslcert and friends expect a string value
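
In essence (a stand-alone sketch of the guard, not the exact code in
http.c): a string-valued key written without "= value" reaches the
parser with value == NULL, so fail instead of calling strdup(NULL).

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static int parse_string_value(char **dest, const char *var, const char *value)
    {
        if (!value) {
            fprintf(stderr, "config variable '%s' expects a string\n", var);
            return -1;
        }
        free(*dest);
        *dest = strdup(value);
        return 0;
    }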

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-02-11 13:11:37 -08:00
Mike Hommey
d7e92806cd Move fetch_ref from http-push.c and http-walker.c to http.c
Make the necessary changes to reconcile their differences, and rename
the function to http_fetch_ref.

Signed-off-by: Mike Hommey <mh@glandium.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2007-12-14 21:31:59 -08:00
Mike Hommey
028c297638 Use strbuf in http code
Also, replace whitespace with tabs in some places.
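
The flavour of the conversion, sketched with calls from git's strbuf.h
(an illustration of the API, not a hunk from the patch):

    #include "strbuf.h"     /* git's string buffer API */

    /* Build a string with strbuf instead of sprintf() into a fixed char[]. */
    static char *make_url(const char *base, const char *path)
    {
        struct strbuf buf = STRBUF_INIT;
        strbuf_addf(&buf, "%s/%s", base, path);
        return strbuf_detach(&buf, NULL);   /* caller owns the result */
    }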

Signed-off-by: Mike Hommey <mh@glandium.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2007-12-14 21:31:59 -08:00
Mike Hommey
02dc534fc6 Remove a CURLOPT_HTTPHEADER (un)setting
Setting CURLOPT_HTTPHEADER doesn't add HTTP headers, but replaces
whatever set of headers was configured before, so setting it to NULL has
no magic meaning, and is pretty much useless when setting it to another
list right after.
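
For illustration (a sketch, with a made-up header value):

    #include <curl/curl.h>

    /* CURLOPT_HTTPHEADER replaces the whole header list each time it is
     * set, so clearing it with NULL just before setting a new list is a
     * no-op. */
    static void set_extra_headers(CURL *curl, struct curl_slist **headers)
    {
        *headers = curl_slist_append(*headers, "Pragma: no-cache");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, *headers);
    }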

Signed-off-by: Mike Hommey <mh@glandium.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2007-12-14 21:31:59 -08:00
Mike Hommey
cc3530e8f3 Cleanup variables in http.[ch]
Quite a few variables declared as extern in http.h are only used in
http.c, and some others, defined only in http.c, were not static.

Signed-off-by: Mike Hommey <mh@glandium.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2007-12-09 12:18:42 -08:00
Sam Vilain
9c5665aa3c Allow HTTP proxy to be overridden in config
The http_proxy / HTTPS_PROXY variables used by curl to control
proxying may not be suitable for git.  Allow the user to override them
in the configuration file.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2007-12-03 22:11:53 -08:00
Shawn O. Pearce
3278cd0a39 Properly cleanup in http_cleanup so builtin-fetch does not segfault
Junio and I both noticed that the new builtin-fetch was segfaulting
immediately on http/https/ftp style URLs (those that went through
libcurl and the commit walker).  Although the builtin-fetch changes
in this area were really just minor refactorings, there was one major
change: we invoked http_init(), http_cleanup(), and then http_init()
again in the same process.

When we call curl_easy_cleanup() on each active_request_slot we
are telling libcurl we did not want that buffer to be used again.
Unfortunately, we did not also deallocate the active_request_slot
itself, nor did we NULL out active_queue_head.  This led us to
attempt to reuse these cleaned-up libcurl handles when we later tried
to invoke http_init() a second time to reactivate the curl library.
The next file get operation then immediately segfaulted on most
versions of libcurl.

Properly freeing our own buffers and clearing the list causes us to
reinitialize the curl buffers again if/when we need to use libcurl
from within this same process.
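
A sketch of the fix (struct fields trimmed down to the bare minimum):
discard the pending slots and clear the list head so that a later
http_init() starts from scratch.

    #include <stdlib.h>

    struct active_request_slot {
        struct active_request_slot *next;
        /* ... curl handle, buffers, etc. ... */
    };

    static struct active_request_slot *active_queue_head;

    static void free_all_slots(void)
    {
        struct active_request_slot *slot = active_queue_head;
        while (slot) {
            struct active_request_slot *next = slot->next;
            free(slot);
            slot = next;
        }
        active_queue_head = NULL;
    }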

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2007-09-19 03:22:31 -07:00
Daniel Barkalow
fc57b6aaa5 Make function to refill http queue a callback
This eliminates the last function provided by the code using http.h as
a global symbol, so it should be possible to have multiple programs
using http.h in the same executable.  It also adds an argument to that
callback, so that information can be passed into the callback without
being global.
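
The registration shape, sketched with illustrative names and a guessed
signature (not necessarily the ones declared in http.h):

    typedef int (*fill_fn)(void *data);

    static fill_fn fill_function;
    static void *fill_function_data;

    static void set_fill_function(void *data, fill_fn fn)
    {
        fill_function = fn;
        fill_function_data = data;
    }

    static void fill_active_slots(void)
    {
        /* keep asking the callback to queue work until it has none left */
        while (fill_function && fill_function(fill_function_data))
            ; /* nothing */
    }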

Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2007-09-19 03:22:30 -07:00
Daniel Barkalow
45c1741235 Refactor http.h USE_CURL_MULTI fill_active_slots().
This removes all of the boilerplate and http-internal stuff from
fill_active_slots() and makes it easy to turn into a callback.

Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2007-09-19 03:22:29 -07:00