2014-09-14 09:40:45 +02:00
|
|
|
#include "git-compat-util.h"
|
2005-11-18 20:02:58 +01:00
|
|
|
#include "http.h"
|
2009-06-06 10:44:01 +02:00
|
|
|
#include "pack.h"
|
2009-10-31 01:47:41 +01:00
|
|
|
#include "sideband.h"
|
2010-04-19 16:23:09 +02:00
|
|
|
#include "run-command.h"
|
2010-11-14 02:51:15 +01:00
|
|
|
#include "url.h"
|
2013-08-05 22:20:36 +02:00
|
|
|
#include "urlmatch.h"
|
http: use credential API to get passwords
This patch converts the http code to use the new credential
API, both for http authentication as well as for getting
certificate passwords.
Most of the code change is simply variable naming (the
passwords are now contained inside the credential struct)
or deletion of obsolete code (the credential code handles
URL parsing and prompting for us).
The behavior should be the same, with one exception: the
credential code will prompt with a description based on the
credential components. Therefore, the old prompt of:
Username for 'example.com':
Password for 'example.com':
now looks like:
Username for 'https://example.com/repo.git':
Password for 'https://user@example.com/repo.git':
Note that we include more information in each line,
specifically:
1. We now include the protocol. While more noisy, this is
an important part of knowing what you are accessing
(especially if you care about http vs https).
2. We include the username in the password prompt. This is
not a big deal when you have just been prompted for it,
but the username may also come from the remote's URL
(and after future patches, from configuration or
credential helpers). In that case, it's a nice
reminder of the user for which you're giving the
password.
3. We include the path component of the URL. In many
cases, the user won't care about this and it's simply
noise (i.e., they'll use the same credential for a
whole site). However, that is part of a larger
question, which is whether path components should be
part of credential context, both for prompting and for
lookup by storage helpers. That issue will be addressed
as a whole in a future patch.
Similarly, for unlocking certificates, we used to say:
Certificate Password for 'example.com':
and we now say:
Password for 'cert:///path/to/certificate':
Showing the path to the client certificate makes more sense,
as that is what you are unlocking, not "example.com".
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
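The richer prompts described above can be sketched as follows. This is an
illustrative reconstruction, not git's actual credential.c code; the
struct and function names here (cred_desc, cred_describe) are hypothetical.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for the components of a credential. */
struct cred_desc {
	const char *protocol;
	const char *username;  /* NULL until known */
	const char *host;
	const char *path;      /* NULL when there is no path component */
};

/* Build a description like "https://user@example.com/repo.git". */
static void cred_describe(const struct cred_desc *c, char *buf, size_t len)
{
	snprintf(buf, len, "%s://%s%s%s%s%s",
		 c->protocol,
		 c->username ? c->username : "",
		 c->username ? "@" : "",
		 c->host,
		 c->path ? "/" : "",
		 c->path ? c->path : "");
}
```

The username appears in the description only once it is known, which is
why the password prompt (issued after the username is filled in) shows it
while the username prompt does not.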
2011-12-10 11:31:21 +01:00
|
|
|
#include "credential.h"
|
2012-06-02 21:03:08 +02:00
|
|
|
#include "version.h"
|
2013-02-20 21:02:45 +01:00
|
|
|
#include "pkt-line.h"
|
2015-02-26 04:04:16 +01:00
|
|
|
#include "gettext.h"
|
http: limit redirection to protocol-whitelist
Previously, libcurl would follow redirection to any protocol
it was compiled with support for. This is desirable to allow
redirection from HTTP to HTTPS. However, it would even
successfully allow redirection from HTTP to SFTP, a protocol
that git does not otherwise support at all. Furthermore,
git's new protocol-whitelisting could be bypassed by
following a redirect within the remote helper, as it was
only enforced at transport selection time.
This patch limits redirects within libcurl to HTTP, HTTPS,
FTP and FTPS. If there is a protocol-whitelist present, this
list is limited to those also allowed by the whitelist. As
redirection happens from within libcurl, it is impossible
for an HTTP redirect to reach a protocol implemented within
another remote helper.
When the curl version git was compiled with is too old to
support restrictions on protocol redirection, we warn the
user if GIT_ALLOW_PROTOCOL restrictions were requested. This
is a little inaccurate, as even without that variable in the
environment, we would still restrict SFTP, etc., and we do
not warn in that case. But anything else means we would
literally warn every time git accesses an http remote.
This commit includes a test, but it is not as robust as we
would hope. It redirects an http request to ftp, and checks
that curl complained about the protocol, which means that we
are relying on curl's specific error message to know what
happened. Ideally we would redirect to a working ftp server
and confirm that we can clone without protocol restrictions,
and not with them. But we do not have a portable way of
providing an ftp server, nor any other protocol that curl
supports (https is the closest, but we would have to deal
with certificates).
[jk: added test and version warning]
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
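The restriction amounts to intersecting a built-in redirect set with the
user's whitelist. A minimal sketch, assuming stand-in bit values (the
real code passes curl's CURLPROTO_* bits to CURLOPT_REDIR_PROTOCOLS):

```c
#include <string.h>

#define PROTO_HTTP  (1 << 0)
#define PROTO_HTTPS (1 << 1)
#define PROTO_FTP   (1 << 2)
#define PROTO_FTPS  (1 << 3)

/* Map a protocol name to its bit, or 0 for never-followed protocols. */
static long proto_bit(const char *name)
{
	if (!strcmp(name, "http"))  return PROTO_HTTP;
	if (!strcmp(name, "https")) return PROTO_HTTPS;
	if (!strcmp(name, "ftp"))   return PROTO_FTP;
	if (!strcmp(name, "ftps"))  return PROTO_FTPS;
	return 0; /* sftp, smb, ...: never followed on redirect */
}

/*
 * Intersect the built-in redirect set with a colon-separated
 * whitelist such as "http:https" (the GIT_ALLOW_PROTOCOL format).
 * A NULL whitelist means "no restriction beyond the built-in set".
 */
static long allowed_redirects(const char *whitelist)
{
	long base = PROTO_HTTP | PROTO_HTTPS | PROTO_FTP | PROTO_FTPS;
	long mask = 0;
	char buf[256];
	char *tok;

	if (!whitelist)
		return base;
	strncpy(buf, whitelist, sizeof(buf) - 1);
	buf[sizeof(buf) - 1] = '\0';
	for (tok = strtok(buf, ":"); tok; tok = strtok(NULL, ":"))
		mask |= proto_bit(tok);
	return mask & base;
}
```

Note that a whitelist entry outside the built-in set (e.g. "sftp")
contributes nothing to the mask, so SFTP redirects stay blocked even when
explicitly whitelisted for direct use.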
2015-09-23 00:06:04 +02:00
|
|
|
#include "transport.h"
|
2005-11-18 20:02:58 +01:00
|
|
|
|
2016-05-23 15:44:02 +02:00
|
|
|
static struct trace_key trace_curl = TRACE_KEY_INIT(CURL);
|
2016-02-03 05:09:14 +01:00
|
|
|
#if LIBCURL_VERSION_NUM >= 0x070a08
|
|
|
|
long int git_curl_ipresolve = CURL_IPRESOLVE_WHATEVER;
|
|
|
|
#else
|
|
|
|
long int git_curl_ipresolve;
|
|
|
|
#endif
|
2009-03-10 02:47:29 +01:00
|
|
|
int active_requests;
|
2009-06-06 10:43:41 +02:00
|
|
|
int http_is_verbose;
|
2009-10-31 01:47:41 +01:00
|
|
|
size_t http_post_buffer = 16 * LARGE_PACKET_MAX;
|
2005-11-18 20:02:58 +01:00
|
|
|
|
2009-11-27 16:43:08 +01:00
|
|
|
#if LIBCURL_VERSION_NUM >= 0x070a06
|
|
|
|
#define LIBCURL_CAN_HANDLE_AUTH_ANY
|
|
|
|
#endif
|
|
|
|
|
2009-11-27 16:42:26 +01:00
|
|
|
static int min_curl_sessions = 1;
|
|
|
|
static int curl_session_count;
|
2005-11-18 20:02:58 +01:00
|
|
|
#ifdef USE_CURL_MULTI
|
2007-12-09 18:04:57 +01:00
|
|
|
static int max_requests = -1;
|
|
|
|
static CURLM *curlm;
|
2005-11-18 20:02:58 +01:00
|
|
|
#endif
|
|
|
|
#ifndef NO_CURL_EASY_DUPHANDLE
|
2007-12-09 18:04:57 +01:00
|
|
|
static CURL *curl_default;
|
2005-11-18 20:02:58 +01:00
|
|
|
#endif
|
http*: add helper methods for fetching objects (loose)
The code handling the fetching of loose objects in http-push.c and
http-walker.c has been refactored into new methods and a new struct
(http_object_request) in http.c. They are not meant to be invoked
elsewhere.
The new methods in http.c are
- new_http_object_request
- process_http_object_request
- finish_http_object_request
- abort_http_object_request
- release_http_object_request
and the new struct is http_object_request.
RANGE_HEADER_SIZE and no_pragma_header are no longer made available
outside of http.c, since after the above changes, there are no other
instances of usage outside of http.c.
Remove members of the transfer_request struct in http-push.c and
http-walker.c, including filename, real_sha1 and zret, as they are
no longer used.
Move the methods append_remote_object_url() and get_remote_object_url()
from http-push.c to http.c. Additionally, get_remote_object_url() is no
longer defined only when USE_CURL_MULTI is defined, since
non-USE_CURL_MULTI code in http.c uses it (namely, in
new_http_object_request()).
Refactor code from http-push.c::start_fetch_loose() and
http-walker.c::start_object_fetch_request() that deals with the details
of coming up with the filename to store the retrieved object, resuming
a previously aborted request, and making a new curl request, into a new
function, new_http_object_request().
Refactor code from http-walker.c::process_object_request() into the
function, process_http_object_request().
Refactor code from http-push.c::finish_request() and
http-walker.c::finish_object_request() into a new function,
finish_http_object_request(). It returns the result of the
move_temp_to_file() invocation.
Add a function, release_http_object_request(), which cleans up object
request data. http-push.c and http-walker.c invoke this function
separately; http-push.c::release_request() and
http-walker.c::release_object_request() do not invoke this function.
Add a function, abort_http_object_request(), which unlink()s the object
file and invokes release_http_object_request(). Update
http-walker.c::abort_object_request() to use this.
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
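The intended call sequence for the new API can be sketched with
simplified stand-ins; the function names come from the list above, but
the struct layout and signatures here are reduced for illustration.

```c
#include <stdlib.h>

/* Simplified stand-in for http.c's http_object_request. */
struct http_object_request {
	int curl_result;   /* 0 means success in this sketch */
	int finished;
};

static struct http_object_request *new_http_object_request(void)
{
	/* Real code also picks the object filename and sets up curl. */
	return calloc(1, sizeof(struct http_object_request));
}

static void process_http_object_request(struct http_object_request *req)
{
	/* Real code drives the curl transfer until it completes. */
	req->finished = 1;
}

static int finish_http_object_request(struct http_object_request *req)
{
	/* Real code returns the result of move_temp_to_file(). */
	return req->curl_result;
}

static void release_http_object_request(struct http_object_request *req)
{
	free(req);
}
```

On failure, a caller would invoke abort_http_object_request() (which
unlinks the object file and then releases the request) instead of the
finish/release pair.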
2009-06-06 10:44:02 +02:00
|
|
|
|
|
|
|
#define PREV_BUF_SIZE 4096
|
|
|
|
|
2005-11-18 20:02:58 +01:00
|
|
|
char curl_errorstr[CURL_ERROR_SIZE];
|
|
|
|
|
2007-12-09 18:04:57 +01:00
|
|
|
static int curl_ssl_verify = -1;
|
2013-04-07 21:10:39 +02:00
|
|
|
static int curl_ssl_try;
|
2009-03-10 02:47:29 +01:00
|
|
|
static const char *ssl_cert;
|
2015-05-08 15:22:15 +02:00
|
|
|
static const char *ssl_cipherlist;
|
2015-08-14 21:37:43 +02:00
|
|
|
static const char *ssl_version;
|
|
|
|
static struct {
|
|
|
|
const char *name;
|
|
|
|
long ssl_version;
|
|
|
|
} sslversions[] = {
|
|
|
|
{ "sslv2", CURL_SSLVERSION_SSLv2 },
|
|
|
|
{ "sslv3", CURL_SSLVERSION_SSLv3 },
|
|
|
|
{ "tlsv1", CURL_SSLVERSION_TLSv1 },
|
|
|
|
#if LIBCURL_VERSION_NUM >= 0x072200
|
|
|
|
{ "tlsv1.0", CURL_SSLVERSION_TLSv1_0 },
|
|
|
|
{ "tlsv1.1", CURL_SSLVERSION_TLSv1_1 },
|
|
|
|
{ "tlsv1.2", CURL_SSLVERSION_TLSv1_2 },
|
|
|
|
#endif
|
|
|
|
};
|
2009-06-15 04:39:00 +02:00
|
|
|
#if LIBCURL_VERSION_NUM >= 0x070903
|
2009-03-10 02:47:29 +01:00
|
|
|
static const char *ssl_key;
|
2005-11-18 20:02:58 +01:00
|
|
|
#endif
|
|
|
|
#if LIBCURL_VERSION_NUM >= 0x070908
|
2009-03-10 02:47:29 +01:00
|
|
|
static const char *ssl_capath;
|
2005-11-18 20:02:58 +01:00
|
|
|
#endif
|
2016-02-15 15:04:22 +01:00
|
|
|
#if LIBCURL_VERSION_NUM >= 0x072c00
|
|
|
|
static const char *ssl_pinnedkey;
|
|
|
|
#endif
|
2009-03-10 02:47:29 +01:00
|
|
|
static const char *ssl_cainfo;
|
2007-12-09 18:04:57 +01:00
|
|
|
static long curl_low_speed_limit = -1;
|
|
|
|
static long curl_low_speed_time = -1;
|
2009-03-10 02:47:29 +01:00
|
|
|
static int curl_ftp_no_epsv;
|
|
|
|
static const char *curl_http_proxy;
|
2016-02-29 16:16:57 +01:00
|
|
|
static const char *curl_no_proxy;
|
2016-01-26 14:02:47 +01:00
|
|
|
static const char *http_proxy_authmethod;
|
|
|
|
static struct {
|
|
|
|
const char *name;
|
|
|
|
long curlauth_param;
|
|
|
|
} proxy_authmethods[] = {
|
|
|
|
{ "basic", CURLAUTH_BASIC },
|
|
|
|
{ "digest", CURLAUTH_DIGEST },
|
|
|
|
{ "negotiate", CURLAUTH_GSSNEGOTIATE },
|
|
|
|
{ "ntlm", CURLAUTH_NTLM },
|
|
|
|
#ifdef LIBCURL_CAN_HANDLE_AUTH_ANY
|
|
|
|
{ "anyauth", CURLAUTH_ANY },
|
|
|
|
#endif
|
|
|
|
/*
|
|
|
|
* CURLAUTH_DIGEST_IE has no corresponding command-line option in
|
|
|
|
* curl(1) and is not included in CURLAUTH_ANY, so we leave it out
|
|
|
|
* here, too
|
|
|
|
*/
|
|
|
|
};
|
http: use credential API to handle proxy authentication
Currently, the only way to pass proxy credentials to curl is by including them
in the proxy URL. Usually, this means they will end up on disk unencrypted, one
way or another (by inclusion in ~/.gitconfig, shell profile or history). Since
proxy authentication often uses a domain user, credentials can be
security-sensitive; therefore, a safer way of passing credentials is desirable.
If the configured proxy contains a username but not a password, query the
credential API for one. Also, make sure we approve/reject proxy credentials
properly.
For consistency reasons, add parsing of http_proxy/https_proxy/all_proxy
environment variables, which would otherwise be evaluated as a fallback by curl.
Without this, we would have different semantics for git configuration and
environment variables.
Helped-by: Junio C Hamano <gitster@pobox.com>
Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Helped-by: Elia Pinto <gitter.spiros@gmail.com>
Signed-off-by: Knut Franke <k.franke@science-computing.de>
Signed-off-by: Elia Pinto <gitter.spiros@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
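The triggering condition (username present, password absent) can be
sketched with a toy check; this is not git's actual URL parser, only an
illustration of when the credential API would be consulted.

```c
#include <string.h>

/*
 * Return 1 if a proxy URL like "http://user@proxy.example.com:8080"
 * embeds a username but no password, so a credential helper (or a
 * prompt) should supply the password.
 */
static int proxy_needs_password(const char *url)
{
	const char *at = strchr(url, '@');
	const char *scheme = strstr(url, "://");
	const char *userinfo;

	if (!at)
		return 0;               /* no embedded credentials at all */
	userinfo = scheme ? scheme + 3 : url;
	/* no ':' before the '@' means username without password */
	return memchr(userinfo, ':', (size_t)(at - userinfo)) == NULL;
}
```

A URL with both parts ("http://user:secret@proxy") bypasses the
credential API entirely, which is exactly the on-disk-plaintext case the
patch lets users avoid.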
2016-01-26 14:02:48 +01:00
|
|
|
static struct credential proxy_auth = CREDENTIAL_INIT;
|
|
|
|
static const char *curl_proxyuserpwd;
|
2011-06-02 22:31:25 +02:00
|
|
|
static const char *curl_cookie_file;
|
2013-07-24 00:40:17 +02:00
|
|
|
static int curl_save_cookies;
|
http: hoist credential request out of handle_curl_result
When we are handling a curl response code in http_request or
in the remote-curl RPC code, we use the handle_curl_result
helper to translate curl's response into an easy-to-use
code. When we see an HTTP 401, we do one of two things:
1. If we already had a filled-in credential, we mark it as
rejected, and then return HTTP_NOAUTH to indicate to
the caller that we failed.
2. If we didn't, then we ask for a new credential and tell
the caller HTTP_REAUTH to indicate that they may want
to try again.
Rejecting in the first case makes sense; it is the natural
result of the request we just made. However, prompting for
more credentials in the second step does not always make
sense. We do not know for sure that the caller is going to
make a second request, nor are we sure that it will be
to the same URL. Logically, the prompt belongs not to the
request we just finished, but to the request we are (maybe)
about to make.
In practice, it is very hard to trigger any bad behavior.
Currently, if we make a second request, it will always be to
the same URL (even in the face of redirects, because curl
handles the redirects internally). And we almost always
retry on HTTP_REAUTH these days. The one exception is if we
are streaming a large RPC request to the server (e.g., a
pushed packfile), in which case we cannot restart. It's
extremely unlikely to see a 401 response at this stage,
though, as we would typically have seen it when we sent a
probe request, before streaming the data.
This patch drops the automatic prompt out of case 2, and
instead requires the caller to do it. This is a few extra
lines of code, and the bug it fixes is unlikely to come up
in practice. But it is conceptually cleaner, and paves the
way for better handling of credentials across redirects.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
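The two-way split on a 401 can be sketched as below. HTTP_OK, HTTP_NOAUTH
and HTTP_REAUTH are result codes from git's http.h, but the values and
the struct here are stand-ins for illustration.

```c
#include <stddef.h>

#define HTTP_OK      0
#define HTTP_NOAUTH  1
#define HTTP_REAUTH  2

struct cred { const char *username; const char *password; };

/*
 * After this patch, a 401 with no credential yet simply reports
 * HTTP_REAUTH; the caller decides whether to fill the credential
 * and retry. Only an already filled-in credential is rejected.
 */
static int handle_result(long http_code, const struct cred *c)
{
	if (http_code != 401)
		return HTTP_OK;
	if (c->username && c->password)
		return HTTP_NOAUTH;  /* the credential we sent was refused */
	return HTTP_REAUTH;          /* caller may prompt and try again */
}
```

Moving the prompt to the caller ties it to the request about to be made,
not the one that just failed.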
2013-09-28 10:31:45 +02:00
|
|
|
struct credential http_auth = CREDENTIAL_INIT;
|
2011-12-14 01:11:56 +01:00
|
|
|
static int http_proactive_auth;
|
2010-08-11 22:40:38 +02:00
|
|
|
static const char *user_agent;
|
2016-02-15 19:44:46 +01:00
|
|
|
static int curl_empty_auth;
|
2005-11-18 20:02:58 +01:00
|
|
|
|
2009-05-28 05:16:02 +02:00
|
|
|
#if LIBCURL_VERSION_NUM >= 0x071700
|
|
|
|
/* Use CURLOPT_KEYPASSWD as is */
|
|
|
|
#elif LIBCURL_VERSION_NUM >= 0x070903
|
|
|
|
#define CURLOPT_KEYPASSWD CURLOPT_SSLKEYPASSWD
|
|
|
|
#else
|
|
|
|
#define CURLOPT_KEYPASSWD CURLOPT_SSLCERTPASSWD
|
|
|
|
#endif
|
|
|
|
|
2011-12-10 11:31:21 +01:00
|
|
|
static struct credential cert_auth = CREDENTIAL_INIT;
|
2009-05-28 05:16:02 +02:00
|
|
|
static int ssl_cert_password_required;
|
2015-01-08 01:29:20 +01:00
|
|
|
#ifdef LIBCURL_CAN_HANDLE_AUTH_ANY
|
|
|
|
static unsigned long http_auth_methods = CURLAUTH_ANY;
|
|
|
|
#endif
|
2009-05-28 05:16:02 +02:00
|
|
|
|
2007-12-09 18:04:57 +01:00
|
|
|
static struct curl_slist *pragma_header;
|
2009-06-06 10:44:02 +02:00
|
|
|
static struct curl_slist *no_pragma_header;
|
2016-04-27 14:20:37 +02:00
|
|
|
static struct curl_slist *extra_http_headers;
|
2009-06-06 10:43:41 +02:00
|
|
|
|
2009-03-10 02:47:29 +01:00
|
|
|
static struct active_request_slot *active_queue_head;
|
2005-11-18 20:02:58 +01:00
|
|
|
|
2015-01-28 13:04:37 +01:00
|
|
|
static char *cached_accept_language;
|
|
|
|
|
2011-05-03 17:47:27 +02:00
|
|
|
size_t fread_buffer(char *ptr, size_t eltsize, size_t nmemb, void *buffer_)
|
2005-11-18 20:02:58 +01:00
|
|
|
{
|
|
|
|
size_t size = eltsize * nmemb;
|
2008-07-04 09:37:40 +02:00
|
|
|
struct buffer *buffer = buffer_;
|
|
|
|
|
2007-12-09 20:30:59 +01:00
|
|
|
if (size > buffer->buf.len - buffer->posn)
|
|
|
|
size = buffer->buf.len - buffer->posn;
|
|
|
|
memcpy(ptr, buffer->buf.buf + buffer->posn, size);
|
2005-11-18 20:02:58 +01:00
|
|
|
buffer->posn += size;
|
2007-12-09 20:30:59 +01:00
|
|
|
|
2005-11-18 20:02:58 +01:00
|
|
|
return size;
|
|
|
|
}
|
|
|
|
|
2009-04-01 18:48:24 +02:00
|
|
|
#ifndef NO_CURL_IOCTL
|
|
|
|
curlioerr ioctl_buffer(CURL *handle, int cmd, void *clientp)
|
|
|
|
{
|
|
|
|
struct buffer *buffer = clientp;
|
|
|
|
|
|
|
|
switch (cmd) {
|
|
|
|
case CURLIOCMD_NOP:
|
|
|
|
return CURLIOE_OK;
|
|
|
|
|
|
|
|
case CURLIOCMD_RESTARTREAD:
|
|
|
|
buffer->posn = 0;
|
|
|
|
return CURLIOE_OK;
|
|
|
|
|
|
|
|
default:
|
|
|
|
return CURLIOE_UNKNOWNCMD;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2011-05-03 17:47:27 +02:00
|
|
|
size_t fwrite_buffer(char *ptr, size_t eltsize, size_t nmemb, void *buffer_)
|
2005-11-18 20:02:58 +01:00
|
|
|
{
|
|
|
|
size_t size = eltsize * nmemb;
|
2008-07-04 09:37:40 +02:00
|
|
|
struct strbuf *buffer = buffer_;
|
|
|
|
|
2007-12-09 20:30:59 +01:00
|
|
|
strbuf_add(buffer, ptr, size);
|
2005-11-18 20:02:58 +01:00
|
|
|
return size;
|
|
|
|
}
|
|
|
|
|
2011-05-03 17:47:27 +02:00
|
|
|
size_t fwrite_null(char *ptr, size_t eltsize, size_t nmemb, void *strbuf)
|
2005-11-18 20:02:58 +01:00
|
|
|
{
|
|
|
|
return eltsize * nmemb;
|
|
|
|
}
|
|
|
|
|
2015-01-15 00:40:46 +01:00
|
|
|
static void closedown_active_slot(struct active_request_slot *slot)
|
|
|
|
{
|
|
|
|
active_requests--;
|
|
|
|
slot->in_use = 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void finish_active_slot(struct active_request_slot *slot)
|
|
|
|
{
|
|
|
|
closedown_active_slot(slot);
|
|
|
|
curl_easy_getinfo(slot->curl, CURLINFO_HTTP_CODE, &slot->http_code);
|
|
|
|
|
|
|
|
if (slot->finished != NULL)
|
|
|
|
(*slot->finished) = 1;
|
|
|
|
|
|
|
|
/* Store slot results so they can be read after the slot is reused */
|
|
|
|
if (slot->results != NULL) {
|
|
|
|
slot->results->curl_result = slot->curl_result;
|
|
|
|
slot->results->http_code = slot->http_code;
|
|
|
|
#if LIBCURL_VERSION_NUM >= 0x070a08
|
|
|
|
curl_easy_getinfo(slot->curl, CURLINFO_HTTPAUTH_AVAIL,
|
|
|
|
&slot->results->auth_avail);
|
|
|
|
#else
|
|
|
|
slot->results->auth_avail = 0;
|
|
|
|
#endif
|
2016-01-26 14:02:48 +01:00
|
|
|
|
|
|
|
curl_easy_getinfo(slot->curl, CURLINFO_HTTP_CONNECTCODE,
|
|
|
|
&slot->results->http_connectcode);
|
2015-01-15 00:40:46 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
/* Run callback if appropriate */
|
|
|
|
if (slot->callback_func != NULL)
|
|
|
|
slot->callback_func(slot->callback_data);
|
|
|
|
}
|
|
|
|
|
2005-11-18 20:02:58 +01:00
|
|
|
#ifdef USE_CURL_MULTI
|
|
|
|
static void process_curl_messages(void)
|
|
|
|
{
|
|
|
|
int num_messages;
|
|
|
|
struct active_request_slot *slot;
|
|
|
|
CURLMsg *curl_message = curl_multi_info_read(curlm, &num_messages);
|
|
|
|
|
|
|
|
while (curl_message != NULL) {
|
|
|
|
if (curl_message->msg == CURLMSG_DONE) {
|
|
|
|
int curl_result = curl_message->data.result;
|
|
|
|
slot = active_queue_head;
|
|
|
|
while (slot != NULL &&
|
|
|
|
slot->curl != curl_message->easy_handle)
|
|
|
|
slot = slot->next;
|
|
|
|
if (slot != NULL) {
|
|
|
|
curl_multi_remove_handle(curlm, slot->curl);
|
|
|
|
slot->curl_result = curl_result;
|
|
|
|
finish_active_slot(slot);
|
|
|
|
} else {
|
|
|
|
fprintf(stderr, "Received DONE message for unknown request!\n");
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
fprintf(stderr, "Unknown CURL message received: %d\n",
|
|
|
|
(int)curl_message->msg);
|
|
|
|
}
|
|
|
|
curl_message = curl_multi_info_read(curlm, &num_messages);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2008-05-14 19:46:53 +02:00
|
|
|
static int http_options(const char *var, const char *value, void *cb)
|
2005-11-18 20:02:58 +01:00
|
|
|
{
|
|
|
|
if (!strcmp("http.sslverify", var)) {
|
http_init(): Fix config file parsing
We honor the command line options, environment variables, variables in
repository configuration file, variables in user's global configuration
file, variables in the system configuration file, and then finally use
built-in default. To implement this semantics, the code should:
- start from built-in default values;
- call git_config() with the configuration parser callback, which
implements "later definition overrides earlier ones" logic
(git_config() reads the system's, user's and then repository's
configuration file in this order);
- override the result from the above with environment variables if set;
- override the result from the above with command line options.
The initialization code http_init() for http transfer got this wrong:
it implemented "first one wins, ignoring the later ones" in
http_options(), and, to compensate for this mistake, read environment
variables before calling git_config(). This is all wrong.
As a second-class citizen, the http codepath hasn't been audited as
closely as other parts of the system, but we should try to bring sanity to
it, before inviting contributors to improve on it.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
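The prescribed layering is just a fixed order of assignments where each
later source overrides an earlier one when set. A minimal sketch for a
single option (the names here are hypothetical, not http.c's):

```c
#include <stddef.h>

struct http_opts { int ssl_verify; };

/* Apply a source's value only if that source actually set one. */
static void apply(struct http_opts *o, const int *maybe)
{
	if (maybe)
		o->ssl_verify = *maybe;
}

static int effective_ssl_verify(const int *config,
				const int *env,
				const int *cmdline)
{
	struct http_opts o = { 1 };   /* 1. built-in default */
	apply(&o, config);            /* 2. git_config(): system, then
					     global, then repo file */
	apply(&o, env);               /* 3. environment variable */
	apply(&o, cmdline);           /* 4. command-line option */
	return o.ssl_verify;
}
```

The bug described above is equivalent to running steps 3 and 2 in the
wrong order, so a config file could silently override the environment.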
2009-03-10 03:00:30 +01:00
|
|
|
curl_ssl_verify = git_config_bool(var, value);
|
2005-11-18 20:02:58 +01:00
|
|
|
return 0;
|
|
|
|
}
|
2015-05-08 15:22:15 +02:00
|
|
|
if (!strcmp("http.sslcipherlist", var))
|
|
|
|
return git_config_string(&ssl_cipherlist, var, value);
|
2015-08-14 21:37:43 +02:00
|
|
|
if (!strcmp("http.sslversion", var))
|
|
|
|
return git_config_string(&ssl_version, var, value);
|
2009-03-10 03:00:30 +01:00
|
|
|
if (!strcmp("http.sslcert", var))
|
|
|
|
return git_config_string(&ssl_cert, var, value);
|
2009-06-15 04:39:00 +02:00
|
|
|
#if LIBCURL_VERSION_NUM >= 0x070903
|
2009-03-10 03:00:30 +01:00
|
|
|
if (!strcmp("http.sslkey", var))
|
|
|
|
return git_config_string(&ssl_key, var, value);
|
2005-11-18 20:02:58 +01:00
|
|
|
#endif
|
|
|
|
#if LIBCURL_VERSION_NUM >= 0x070908
|
2009-03-10 03:00:30 +01:00
|
|
|
if (!strcmp("http.sslcapath", var))
|
2015-11-23 13:02:40 +01:00
|
|
|
return git_config_pathname(&ssl_capath, var, value);
|
2005-11-18 20:02:58 +01:00
|
|
|
#endif
|
2009-03-10 03:00:30 +01:00
|
|
|
if (!strcmp("http.sslcainfo", var))
|
2015-11-23 13:02:40 +01:00
|
|
|
return git_config_pathname(&ssl_cainfo, var, value);
|
2009-05-28 05:16:03 +02:00
|
|
|
if (!strcmp("http.sslcertpasswordprotected", var)) {
|
2013-07-12 20:52:47 +02:00
|
|
|
ssl_cert_password_required = git_config_bool(var, value);
|
2009-05-28 05:16:03 +02:00
|
|
|
return 0;
|
|
|
|
}
|
2013-04-07 21:10:39 +02:00
|
|
|
if (!strcmp("http.ssltry", var)) {
|
|
|
|
curl_ssl_try = git_config_bool(var, value);
|
|
|
|
return 0;
|
|
|
|
}
|
2009-11-27 16:42:26 +01:00
|
|
|
if (!strcmp("http.minsessions", var)) {
|
|
|
|
min_curl_sessions = git_config_int(var, value);
|
|
|
|
#ifndef USE_CURL_MULTI
|
|
|
|
if (min_curl_sessions > 1)
|
|
|
|
min_curl_sessions = 1;
|
|
|
|
#endif
|
|
|
|
return 0;
|
|
|
|
}
|
2007-06-07 09:04:01 +02:00
|
|
|
#ifdef USE_CURL_MULTI
|
2005-11-18 20:02:58 +01:00
|
|
|
if (!strcmp("http.maxrequests", var)) {
|
http_init(): Fix config file parsing
2009-03-10 03:00:30 +01:00
|
|
|
max_requests = git_config_int(var, value);
|
2005-11-18 20:02:58 +01:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
if (!strcmp("http.lowspeedlimit", var)) {
|
http_init(): Fix config file parsing
2009-03-10 03:00:30 +01:00
|
|
|
curl_low_speed_limit = (long)git_config_int(var, value);
|
2005-11-18 20:02:58 +01:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
if (!strcmp("http.lowspeedtime", var)) {
|
http_init(): Fix config file parsing
2009-03-10 03:00:30 +01:00
|
|
|
curl_low_speed_time = (long)git_config_int(var, value);
|
2005-11-18 20:02:58 +01:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2006-09-29 02:10:44 +02:00
|
|
|
if (!strcmp("http.noepsv", var)) {
|
|
|
|
curl_ftp_no_epsv = git_config_bool(var, value);
|
|
|
|
return 0;
|
|
|
|
}
|
http_init(): Fix config file parsing
2009-03-10 03:00:30 +01:00
|
|
|
if (!strcmp("http.proxy", var))
|
|
|
|
return git_config_string(&curl_http_proxy, var, value);
|
2006-09-29 02:10:44 +02:00
|
|
|
|
2016-01-26 14:02:47 +01:00
|
|
|
if (!strcmp("http.proxyauthmethod", var))
|
|
|
|
return git_config_string(&http_proxy_authmethod, var, value);
|
|
|
|
|
2011-06-02 22:31:25 +02:00
|
|
|
if (!strcmp("http.cookiefile", var))
|
2016-05-04 20:42:15 +02:00
|
|
|
return git_config_pathname(&curl_cookie_file, var, value);
|
2013-07-24 00:40:17 +02:00
|
|
|
if (!strcmp("http.savecookies", var)) {
|
|
|
|
curl_save_cookies = git_config_bool(var, value);
|
|
|
|
return 0;
|
|
|
|
}
|
2011-06-02 22:31:25 +02:00
|
|
|
|
2009-10-31 01:47:41 +01:00
|
|
|
if (!strcmp("http.postbuffer", var)) {
|
|
|
|
http_post_buffer = git_config_int(var, value);
|
|
|
|
if (http_post_buffer < LARGE_PACKET_MAX)
|
|
|
|
http_post_buffer = LARGE_PACKET_MAX;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2010-08-11 22:40:38 +02:00
|
|
|
if (!strcmp("http.useragent", var))
|
|
|
|
return git_config_string(&user_agent, var, value);
|
|
|
|
|
2016-02-15 19:44:46 +01:00
|
|
|
if (!strcmp("http.emptyauth", var)) {
|
|
|
|
curl_empty_auth = git_config_bool(var, value);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2016-02-15 15:04:22 +01:00
|
|
|
if (!strcmp("http.pinnedpubkey", var)) {
|
|
|
|
#if LIBCURL_VERSION_NUM >= 0x072c00
|
|
|
|
return git_config_pathname(&ssl_pinnedkey, var, value);
|
|
|
|
#else
|
|
|
|
warning(_("Public key pinning not supported with cURL < 7.44.0"));
|
|
|
|
return 0;
|
|
|
|
#endif
|
|
|
|
}
|
2016-02-24 22:25:58 +01:00
|
|
|
|
2016-04-27 14:20:37 +02:00
|
|
|
if (!strcmp("http.extraheader", var)) {
|
|
|
|
if (!value) {
|
|
|
|
return config_error_nonbool(var);
|
|
|
|
} else if (!*value) {
|
|
|
|
curl_slist_free_all(extra_http_headers);
|
|
|
|
extra_http_headers = NULL;
|
|
|
|
} else {
|
|
|
|
extra_http_headers =
|
|
|
|
curl_slist_append(extra_http_headers, value);
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2005-11-18 20:02:58 +01:00
|
|
|
/* Fall back on the default ones */
|
2008-05-14 19:46:53 +02:00
|
|
|
return git_default_config(var, value, cb);
|
2005-11-18 20:02:58 +01:00
|
|
|
}
|
|
|
|
|
2009-03-10 07:34:25 +01:00
|
|
|
static void init_curl_http_auth(CURL *result)
|
|
|
|
{
|
2016-02-15 19:44:46 +01:00
|
|
|
if (!http_auth.username) {
|
|
|
|
if (curl_empty_auth)
|
|
|
|
curl_easy_setopt(result, CURLOPT_USERPWD, ":");
|
2012-04-13 08:19:25 +02:00
|
|
|
return;
|
2016-02-15 19:44:46 +01:00
|
|
|
}
|
2012-04-13 08:19:25 +02:00
|
|
|
|
|
|
|
credential_fill(&http_auth);
|
|
|
|
|
|
|
|
#if LIBCURL_VERSION_NUM >= 0x071301
|
|
|
|
curl_easy_setopt(result, CURLOPT_USERNAME, http_auth.username);
|
|
|
|
curl_easy_setopt(result, CURLOPT_PASSWORD, http_auth.password);
|
|
|
|
#else
|
|
|
|
{
|
http: clean up leak in init_curl_http_auth
When we have a credential to give to curl, we must copy it
into a "user:pass" buffer and then hand the buffer to curl.
Old versions of curl did not copy the buffer, and we were
expected to keep it valid. Newer versions of curl will copy
the buffer.
Our solution was to use a strbuf and detach it, giving
ownership of the resulting buffer to curl. However, this
meant that we were leaking the buffer on newer versions of
curl, since curl was just copying it and throwing away the
string we passed. Furthermore, when we replaced a
credential (e.g., because our original one was rejected), we
were also leaking on both old and new versions of curl.
This got even worse in the last patch, which started
replacing the credential (and thus leaking) on every http
request.
Instead, let's use a static buffer to make the ownership
clearer and less leaky. We already keep a static "struct
credential", so we are only handling a single credential at
a time, anyway.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-04-13 08:18:35 +02:00
|
|
|
static struct strbuf up = STRBUF_INIT;
|
2013-06-19 04:43:49 +02:00
|
|
|
/*
|
|
|
|
* Note that we assume we only ever have a single set of
|
|
|
|
* credentials in a given program run, so we do not have
|
|
|
|
* to worry about updating this buffer, only setting its
|
|
|
|
* initial value.
|
|
|
|
*/
|
|
|
|
if (!up.len)
|
|
|
|
strbuf_addf(&up, "%s:%s",
|
|
|
|
http_auth.username, http_auth.password);
|
http: clean up leak in init_curl_http_auth
2012-04-13 08:18:35 +02:00
|
|
|
curl_easy_setopt(result, CURLOPT_USERPWD, up.buf);
|
2009-03-10 07:34:25 +01:00
|
|
|
}
|
2012-04-13 08:19:25 +02:00
|
|
|
#endif
|
2009-03-10 07:34:25 +01:00
|
|
|
}
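The static-strbuf trick in the fallback branch can be reduced to plain C for illustration. The helper below is hypothetical, not git's code, but it shows the same ownership rule: the "user:pass" string is built once into storage that stays valid for the life of the program, so it is safe to hand to both old (non-copying) and new (copying) versions of curl without leaking per request:

```c
#include <stdio.h>

/* Build "user:pass" once into static storage.  Mirrors the
 * single-credential assumption documented above: later calls
 * return the already-built string unchanged. */
static const char *userpwd(const char *user, const char *pass)
{
	static char up[256];

	if (!up[0])
		snprintf(up, sizeof(up), "%s:%s", user, pass);
	return up;
}
```

The design trade-off is the same as in the code above: the buffer is never freed, but since there is only one credential per program run, there is exactly one allocation to account for instead of one per request.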
|
|
|
|
|
2016-01-26 14:02:47 +01:00
|
|
|
/* *var must be free-able */
|
|
|
|
static void var_override(const char **var, char *value)
|
|
|
|
{
|
|
|
|
if (value) {
|
|
|
|
free((void *)*var);
|
|
|
|
*var = xstrdup(value);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
http: use credential API to handle proxy authentication
Currently, the only way to pass proxy credentials to curl is by including them
in the proxy URL. Usually, this means they will end up on disk unencrypted, one
way or another (by inclusion in ~/.gitconfig, shell profile or history). Since
proxy authentication often uses a domain user, credentials can be security
sensitive; therefore, a safer way of passing credentials is desirable.
If the configured proxy contains a username but not a password, query the
credential API for one. Also, make sure we approve/reject proxy credentials
properly.
For consistency reasons, add parsing of http_proxy/https_proxy/all_proxy
environment variables, which would otherwise be evaluated as a fallback by curl.
Without this, we would have different semantics for git configuration and
environment variables.
Helped-by: Junio C Hamano <gitster@pobox.com>
Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Helped-by: Elia Pinto <gitter.spiros@gmail.com>
Signed-off-by: Knut Franke <k.franke@science-computing.de>
Signed-off-by: Elia Pinto <gitter.spiros@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-01-26 14:02:48 +01:00
|
|
|
static void set_proxyauth_name_password(CURL *result)
|
|
|
|
{
|
|
|
|
#if LIBCURL_VERSION_NUM >= 0x071301
|
|
|
|
curl_easy_setopt(result, CURLOPT_PROXYUSERNAME,
|
|
|
|
proxy_auth.username);
|
|
|
|
curl_easy_setopt(result, CURLOPT_PROXYPASSWORD,
|
|
|
|
proxy_auth.password);
|
|
|
|
#else
|
|
|
|
struct strbuf s = STRBUF_INIT;
|
|
|
|
|
|
|
|
strbuf_addstr_urlencode(&s, proxy_auth.username, 1);
|
|
|
|
strbuf_addch(&s, ':');
|
|
|
|
strbuf_addstr_urlencode(&s, proxy_auth.password, 1);
|
|
|
|
curl_proxyuserpwd = strbuf_detach(&s, NULL);
|
|
|
|
curl_easy_setopt(result, CURLOPT_PROXYUSERPWD, curl_proxyuserpwd);
|
|
|
|
#endif
|
|
|
|
}
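The pre-7.19.1 fallback above must percent-encode each piece so that a ':' or '@' inside the username or password cannot be misparsed as part of the user:password@host syntax. A minimal stand-in for what strbuf_addstr_urlencode() does (simplified for illustration: every non-alphanumeric byte is escaped):

```c
#include <ctype.h>
#include <stdio.h>

/* Percent-encode src into dst (capacity n), escaping every byte
 * that is not alphanumeric as %XX. */
static void urlencode(char *dst, size_t n, const char *src)
{
	size_t used = 0;

	for (; *src && used + 4 < n; src++) {
		unsigned char c = (unsigned char)*src;
		if (isalnum(c))
			dst[used++] = c;
		else
			used += snprintf(dst + used, n - used, "%%%02X", c);
	}
	dst[used] = '\0';
}
```

With this, a password like `p@ss:w0rd` becomes `p%40ss%3Aw0rd`, so the joined `user:pass` string remains unambiguous.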
|
|
|
|
|
2016-01-26 14:02:47 +01:00
|
|
|
static void init_curl_proxy_auth(CURL *result)
|
|
|
|
{
|
http: use credential API to handle proxy authentication
2016-01-26 14:02:48 +01:00
|
|
|
if (proxy_auth.username) {
|
|
|
|
if (!proxy_auth.password)
|
|
|
|
credential_fill(&proxy_auth);
|
|
|
|
set_proxyauth_name_password(result);
|
|
|
|
}
|
|
|
|
|
2016-01-26 14:02:47 +01:00
|
|
|
var_override(&http_proxy_authmethod, getenv("GIT_HTTP_PROXY_AUTHMETHOD"));
|
|
|
|
|
|
|
|
#if LIBCURL_VERSION_NUM >= 0x070a07 /* CURLOPT_PROXYAUTH and CURLAUTH_ANY */
|
|
|
|
if (http_proxy_authmethod) {
|
|
|
|
int i;
|
|
|
|
for (i = 0; i < ARRAY_SIZE(proxy_authmethods); i++) {
|
|
|
|
if (!strcmp(http_proxy_authmethod, proxy_authmethods[i].name)) {
|
|
|
|
curl_easy_setopt(result, CURLOPT_PROXYAUTH,
|
|
|
|
proxy_authmethods[i].curlauth_param);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if (i == ARRAY_SIZE(proxy_authmethods)) {
|
|
|
|
warning("unsupported proxy authentication method %s: using anyauth",
|
|
|
|
http_proxy_authmethod);
|
|
|
|
curl_easy_setopt(result, CURLOPT_PROXYAUTH, CURLAUTH_ANY);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
else
|
|
|
|
curl_easy_setopt(result, CURLOPT_PROXYAUTH, CURLAUTH_ANY);
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
2009-05-28 05:16:02 +02:00
|
|
|
static int has_cert_password(void)
|
|
|
|
{
|
|
|
|
if (ssl_cert == NULL || ssl_cert_password_required != 1)
|
|
|
|
return 0;
|
http: use credential API to get passwords
This patch converts the http code to use the new credential
API, both for http authentication as well as for getting
certificate passwords.
Most of the code change is simply variable naming (the
passwords are now contained inside the credential struct)
or deletion of obsolete code (the credential code handles
URL parsing and prompting for us).
The behavior should be the same, with one exception: the
credential code will prompt with a description based on the
credential components. Therefore, the old prompt of:
Username for 'example.com':
Password for 'example.com':
now looks like:
Username for 'https://example.com/repo.git':
Password for 'https://user@example.com/repo.git':
Note that we include more information in each line,
specifically:
1. We now include the protocol. While more noisy, this is
an important part of knowing what you are accessing
(especially if you care about http vs https).
2. We include the username in the password prompt. This is
not a big deal when you have just been prompted for it,
but the username may also come from the remote's URL
(and after future patches, from configuration or
credential helpers). In that case, it's a nice
reminder of the user for which you're giving the
password.
3. We include the path component of the URL. In many
cases, the user won't care about this and it's simply
noise (i.e., they'll use the same credential for a
whole site). However, that is part of a larger
question, which is whether path components should be
part of credential context, both for prompting and for
lookup by storage helpers. That issue will be addressed
as a whole in a future patch.
Similarly, for unlocking certificates, we used to say:
Certificate Password for 'example.com':
and we now say:
Password for 'cert:///path/to/certificate':
Showing the path to the client certificate makes more sense,
as that is what you are unlocking, not "example.com".
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-12-10 11:31:21 +01:00
|
|
|
if (!cert_auth.password) {
|
|
|
|
cert_auth.protocol = xstrdup("cert");
|
2012-12-21 17:31:19 +01:00
|
|
|
cert_auth.username = xstrdup("");
|
http: use credential API to get passwords
2011-12-10 11:31:21 +01:00
|
|
|
cert_auth.path = xstrdup(ssl_cert);
|
|
|
|
credential_fill(&cert_auth);
|
|
|
|
}
|
|
|
|
return 1;
|
2009-05-28 05:16:02 +02:00
|
|
|
}
|
|
|
|
|
2013-10-15 02:06:14 +02:00
|
|
|
#if LIBCURL_VERSION_NUM >= 0x071900
|
|
|
|
static void set_curl_keepalive(CURL *c)
|
|
|
|
{
|
|
|
|
curl_easy_setopt(c, CURLOPT_TCP_KEEPALIVE, 1);
|
|
|
|
}
|
|
|
|
|
|
|
|
#elif LIBCURL_VERSION_NUM >= 0x071000
|
2013-10-13 00:29:40 +02:00
|
|
|
static int sockopt_callback(void *client, curl_socket_t fd, curlsocktype type)
|
|
|
|
{
|
|
|
|
int ka = 1;
|
|
|
|
int rc;
|
|
|
|
socklen_t len = (socklen_t)sizeof(ka);
|
|
|
|
|
|
|
|
if (type != CURLSOCKTYPE_IPCXN)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
rc = setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, (void *)&ka, len);
|
|
|
|
if (rc < 0)
|
2016-05-08 11:47:48 +02:00
|
|
|
warning_errno("unable to set SO_KEEPALIVE on socket");
|
2013-10-13 00:29:40 +02:00
|
|
|
|
|
|
|
return 0; /* CURL_SOCKOPT_OK only exists since curl 7.21.5 */
|
|
|
|
}
|
|
|
|
|
2013-10-15 02:06:14 +02:00
|
|
|
static void set_curl_keepalive(CURL *c)
|
|
|
|
{
|
|
|
|
curl_easy_setopt(c, CURLOPT_SOCKOPTFUNCTION, sockopt_callback);
|
|
|
|
}
|
|
|
|
|
|
|
|
#else
|
|
|
|
static void set_curl_keepalive(CURL *c)
|
|
|
|
{
|
|
|
|
/* not supported on older curl versions */
|
|
|
|
}
|
|
|
|
#endif
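The LIBCURL_VERSION_NUM values compared against in this chain pack the curl version as 0xXXYYZZ, one byte per component: 0x071900 is 7.25.0 (the release that added CURLOPT_TCP_KEEPALIVE) and 0x071000 is 7.16.0 (which added CURLOPT_SOCKOPTFUNCTION). A tiny decoder makes the encoding explicit:

```c
/* LIBCURL_VERSION_NUM packs major.minor.patch into 0xXXYYZZ,
 * one byte each, so version checks are plain integer compares. */
static void decode_curl_version(long v, int *major, int *minor, int *patch)
{
	*major = (v >> 16) & 0xff;
	*minor = (v >> 8) & 0xff;
	*patch = v & 0xff;
}
```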
|
|
|
|
|
2016-05-23 15:44:02 +02:00
|
|
|
static void redact_sensitive_header(struct strbuf *header)
|
|
|
|
{
|
|
|
|
const char *sensitive_header;
|
|
|
|
|
|
|
|
if (skip_prefix(header->buf, "Authorization:", &sensitive_header) ||
|
|
|
|
skip_prefix(header->buf, "Proxy-Authorization:", &sensitive_header)) {
|
|
|
|
/* The first token is the type, which is OK to log */
|
|
|
|
while (isspace(*sensitive_header))
|
|
|
|
sensitive_header++;
|
|
|
|
while (*sensitive_header && !isspace(*sensitive_header))
|
|
|
|
sensitive_header++;
|
|
|
|
/* Everything else is opaque and possibly sensitive */
|
|
|
|
strbuf_setlen(header, sensitive_header - header->buf);
|
|
|
|
strbuf_addstr(header, " <redacted>");
|
|
|
|
}
|
|
|
|
}
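The redaction rule above can be shown standalone on a plain C buffer instead of a strbuf. This sketch handles only the `Authorization:` prefix for brevity; like the real code, it keeps the header name and the auth scheme token (`Basic`, `Bearer`, ...) and replaces the opaque credential that follows:

```c
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Redact everything after "Authorization: <scheme>" in place. */
static void redact(char *header, size_t len)
{
	const char *prefix = "Authorization:";
	char *p;

	if (strncmp(header, prefix, strlen(prefix)))
		return;			/* not a sensitive header */
	p = header + strlen(prefix);
	while (isspace((unsigned char)*p))
		p++;			/* skip space before the scheme */
	while (*p && !isspace((unsigned char)*p))
		p++;			/* keep the scheme token */
	snprintf(p, len - (p - header), " <redacted>");
}
```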
|
|
|
|
|
|
|
|
static void curl_dump_header(const char *text, unsigned char *ptr, size_t size, int hide_sensitive_header)
|
|
|
|
{
|
|
|
|
struct strbuf out = STRBUF_INIT;
|
|
|
|
struct strbuf **headers, **header;
|
|
|
|
|
|
|
|
strbuf_addf(&out, "%s, %10.10ld bytes (0x%8.8lx)\n",
|
|
|
|
text, (long)size, (long)size);
|
|
|
|
trace_strbuf(&trace_curl, &out);
|
|
|
|
strbuf_reset(&out);
|
|
|
|
strbuf_add(&out, ptr, size);
|
|
|
|
headers = strbuf_split_max(&out, '\n', 0);
|
|
|
|
|
|
|
|
for (header = headers; *header; header++) {
|
|
|
|
if (hide_sensitive_header)
|
|
|
|
redact_sensitive_header(*header);
|
|
|
|
strbuf_insert((*header), 0, text, strlen(text));
|
|
|
|
strbuf_insert((*header), strlen(text), ": ", 2);
|
|
|
|
strbuf_rtrim((*header));
|
|
|
|
strbuf_addch((*header), '\n');
|
|
|
|
trace_strbuf(&trace_curl, (*header));
|
|
|
|
}
|
|
|
|
strbuf_list_free(headers);
|
|
|
|
strbuf_release(&out);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void curl_dump_data(const char *text, unsigned char *ptr, size_t size)
|
|
|
|
{
|
|
|
|
size_t i;
|
|
|
|
struct strbuf out = STRBUF_INIT;
|
|
|
|
unsigned int width = 60;
|
|
|
|
|
|
|
|
strbuf_addf(&out, "%s, %10.10ld bytes (0x%8.8lx)\n",
|
|
|
|
text, (long)size, (long)size);
|
|
|
|
trace_strbuf(&trace_curl, &out);
|
|
|
|
|
|
|
|
for (i = 0; i < size; i += width) {
|
|
|
|
size_t w;
|
|
|
|
|
|
|
|
strbuf_reset(&out);
|
|
|
|
strbuf_addf(&out, "%s: ", text);
|
|
|
|
for (w = 0; (w < width) && (i + w < size); w++) {
|
|
|
|
unsigned char ch = ptr[i + w];
|
|
|
|
|
|
|
|
strbuf_addch(&out,
|
|
|
|
(ch >= 0x20) && (ch < 0x80)
|
|
|
|
? ch : '.');
|
|
|
|
}
|
|
|
|
strbuf_addch(&out, '\n');
|
|
|
|
trace_strbuf(&trace_curl, &out);
|
|
|
|
}
|
|
|
|
strbuf_release(&out);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int curl_trace(CURL *handle, curl_infotype type, char *data, size_t size, void *userp)
|
|
|
|
{
|
|
|
|
const char *text;
|
|
|
|
enum { NO_FILTER = 0, DO_FILTER = 1 };
|
|
|
|
|
|
|
|
switch (type) {
|
|
|
|
case CURLINFO_TEXT:
|
|
|
|
trace_printf_key(&trace_curl, "== Info: %s", data);
|
|
|
|
default: /* we ignore unknown types by default */
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
case CURLINFO_HEADER_OUT:
|
|
|
|
text = "=> Send header";
|
|
|
|
curl_dump_header(text, (unsigned char *)data, size, DO_FILTER);
|
|
|
|
break;
|
|
|
|
case CURLINFO_DATA_OUT:
|
|
|
|
text = "=> Send data";
|
|
|
|
curl_dump_data(text, (unsigned char *)data, size);
|
|
|
|
break;
|
|
|
|
case CURLINFO_SSL_DATA_OUT:
|
|
|
|
text = "=> Send SSL data";
|
|
|
|
curl_dump_data(text, (unsigned char *)data, size);
|
|
|
|
break;
|
|
|
|
case CURLINFO_HEADER_IN:
|
|
|
|
text = "<= Recv header";
|
|
|
|
curl_dump_header(text, (unsigned char *)data, size, NO_FILTER);
|
|
|
|
break;
|
|
|
|
case CURLINFO_DATA_IN:
|
|
|
|
text = "<= Recv data";
|
|
|
|
curl_dump_data(text, (unsigned char *)data, size);
|
|
|
|
break;
|
|
|
|
case CURLINFO_SSL_DATA_IN:
|
|
|
|
text = "<= Recv SSL data";
|
|
|
|
curl_dump_data(text, (unsigned char *)data, size);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
void setup_curl_trace(CURL *handle)
|
|
|
|
{
|
|
|
|
if (!trace_want(&trace_curl))
|
|
|
|
return;
|
|
|
|
curl_easy_setopt(handle, CURLOPT_VERBOSE, 1L);
|
|
|
|
curl_easy_setopt(handle, CURLOPT_DEBUGFUNCTION, curl_trace);
|
|
|
|
curl_easy_setopt(handle, CURLOPT_DEBUGDATA, NULL);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
2009-03-10 02:47:29 +01:00
|
|
|
static CURL *get_curl_handle(void)
|
2005-11-19 02:06:46 +01:00
|
|
|
{
|
2009-03-10 02:47:29 +01:00
|
|
|
CURL *result = curl_easy_init();
|
http: limit redirection to protocol-whitelist
Previously, libcurl would follow redirection to any protocol
it was compiled for support with. This is desirable to allow
redirection from HTTP to HTTPS. However, it would even
successfully allow redirection from HTTP to SFTP, a protocol
that git does not otherwise support at all. Furthermore
git's new protocol-whitelisting could be bypassed by
following a redirect within the remote helper, as it was
only enforced at transport selection time.
This patch limits redirects within libcurl to HTTP, HTTPS,
FTP and FTPS. If there is a protocol-whitelist present, this
list is limited to those also allowed by the whitelist. As
redirection happens from within libcurl, it is impossible
for an HTTP redirect to a protocol implemented within
another remote helper.
When the curl version git was compiled with is too old to
support restrictions on protocol redirection, we warn the
user if GIT_ALLOW_PROTOCOL restrictions were requested. This
is a little inaccurate, as even without that variable in the
environment, we would still restrict SFTP, etc, and we do
not warn in that case. But anything else means we would
literally warn every time git accesses an http remote.
This commit includes a test, but it is not as robust as we
would hope. It redirects an http request to ftp, and checks
that curl complained about the protocol, which means that we
are relying on curl's specific error message to know what
happened. Ideally we would redirect to a working ftp server
and confirm that we can clone without protocol restrictions,
and not with them. But we do not have a portable way of
providing an ftp server, nor any other protocol that curl
supports (https is the closest, but we would have to deal
with certificates).
[jk: added test and version warning]
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-09-23 00:06:04 +02:00
|
|
|
long allowed_protocols = 0;
|
2005-11-19 02:06:46 +01:00
|
|
|
|
2014-08-13 19:31:24 +02:00
|
|
|
if (!result)
|
|
|
|
die("curl_easy_init failed");
|
|
|
|
|
2008-02-22 00:10:37 +01:00
|
|
|
if (!curl_ssl_verify) {
|
|
|
|
curl_easy_setopt(result, CURLOPT_SSL_VERIFYPEER, 0);
|
|
|
|
curl_easy_setopt(result, CURLOPT_SSL_VERIFYHOST, 0);
|
|
|
|
} else {
|
|
|
|
/* Verify authenticity of the peer's certificate */
|
|
|
|
curl_easy_setopt(result, CURLOPT_SSL_VERIFYPEER, 1);
|
|
|
|
/* The name in the cert must match whom we tried to connect */
|
|
|
|
curl_easy_setopt(result, CURLOPT_SSL_VERIFYHOST, 2);
|
|
|
|
}
|
|
|
|
|
2005-11-19 02:06:46 +01:00
|
|
|
#if LIBCURL_VERSION_NUM >= 0x070907
|
|
|
|
curl_easy_setopt(result, CURLOPT_NETRC, CURL_NETRC_OPTIONAL);
|
|
|
|
#endif
|
2009-11-27 16:43:08 +01:00
|
|
|
#ifdef LIBCURL_CAN_HANDLE_AUTH_ANY
|
2009-12-28 19:04:24 +01:00
|
|
|
curl_easy_setopt(result, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
|
2009-11-27 16:43:08 +01:00
|
|
|
#endif
|
2005-11-19 02:06:46 +01:00
|
|
|
|
2011-12-14 01:11:56 +01:00
|
|
|
if (http_proactive_auth)
|
|
|
|
init_curl_http_auth(result);
|
|
|
|
|
2015-08-14 21:37:43 +02:00
|
|
|
if (getenv("GIT_SSL_VERSION"))
|
|
|
|
ssl_version = getenv("GIT_SSL_VERSION");
|
|
|
|
if (ssl_version && *ssl_version) {
|
|
|
|
int i;
|
|
|
|
for (i = 0; i < ARRAY_SIZE(sslversions); i++) {
|
|
|
|
if (!strcmp(ssl_version, sslversions[i].name)) {
|
|
|
|
curl_easy_setopt(result, CURLOPT_SSLVERSION,
|
|
|
|
sslversions[i].ssl_version);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if (i == ARRAY_SIZE(sslversions))
|
|
|
|
warning("unsupported ssl version %s: using default",
|
|
|
|
ssl_version);
|
|
|
|
}
|
|
|
|
|
2015-05-08 15:22:15 +02:00
|
|
|
if (getenv("GIT_SSL_CIPHER_LIST"))
|
|
|
|
ssl_cipherlist = getenv("GIT_SSL_CIPHER_LIST");
|
|
|
|
if (ssl_cipherlist != NULL && *ssl_cipherlist)
|
|
|
|
curl_easy_setopt(result, CURLOPT_SSL_CIPHER_LIST,
|
|
|
|
ssl_cipherlist);
|
|
|
|
|
2005-11-19 02:06:46 +01:00
|
|
|
if (ssl_cert != NULL)
|
|
|
|
curl_easy_setopt(result, CURLOPT_SSLCERT, ssl_cert);
|
2009-05-28 05:16:02 +02:00
|
|
|
if (has_cert_password())
|
http: use credential API to get passwords
2011-12-10 11:31:21 +01:00
|
|
|
curl_easy_setopt(result, CURLOPT_KEYPASSWD, cert_auth.password);
|
2009-06-15 04:39:00 +02:00
|
|
|
#if LIBCURL_VERSION_NUM >= 0x070903
|
2005-11-19 02:06:46 +01:00
|
|
|
if (ssl_key != NULL)
|
|
|
|
curl_easy_setopt(result, CURLOPT_SSLKEY, ssl_key);
|
|
|
|
#endif
|
|
|
|
#if LIBCURL_VERSION_NUM >= 0x070908
|
|
|
|
if (ssl_capath != NULL)
|
|
|
|
curl_easy_setopt(result, CURLOPT_CAPATH, ssl_capath);
|
2016-02-15 15:04:22 +01:00
|
|
|
#endif
|
|
|
|
#if LIBCURL_VERSION_NUM >= 0x072c00
|
|
|
|
if (ssl_pinnedkey != NULL)
|
|
|
|
curl_easy_setopt(result, CURLOPT_PINNEDPUBLICKEY, ssl_pinnedkey);
|
2005-11-19 02:06:46 +01:00
|
|
|
#endif
|
|
|
|
if (ssl_cainfo != NULL)
|
|
|
|
curl_easy_setopt(result, CURLOPT_CAINFO, ssl_cainfo);
|
|
|
|
|
|
|
|
if (curl_low_speed_limit > 0 && curl_low_speed_time > 0) {
|
|
|
|
curl_easy_setopt(result, CURLOPT_LOW_SPEED_LIMIT,
|
|
|
|
curl_low_speed_limit);
|
|
|
|
curl_easy_setopt(result, CURLOPT_LOW_SPEED_TIME,
|
|
|
|
curl_low_speed_time);
|
|
|
|
}
|
|
|
|
|
|
|
|
curl_easy_setopt(result, CURLOPT_FOLLOWLOCATION, 1);
|
2015-09-23 00:06:20 +02:00
|
|
|
curl_easy_setopt(result, CURLOPT_MAXREDIRS, 20);
|
2010-09-25 06:20:35 +02:00
|
|
|
#if LIBCURL_VERSION_NUM >= 0x071301
|
|
|
|
curl_easy_setopt(result, CURLOPT_POSTREDIR, CURL_REDIR_POST_ALL);
|
|
|
|
#elif LIBCURL_VERSION_NUM >= 0x071101
|
|
|
|
curl_easy_setopt(result, CURLOPT_POST301, 1);
|
|
|
|
#endif
|
http: limit redirection to protocol-whitelist
Previously, libcurl would follow redirection to any protocol
it was compiled for support with. This is desirable to allow
redirection from HTTP to HTTPS. However, it would even
successfully allow redirection from HTTP to SFTP, a protocol
that git does not otherwise support at all. Furthermore
git's new protocol-whitelisting could be bypassed by
following a redirect within the remote helper, as it was
only enforced at transport selection time.
This patch limits redirects within libcurl to HTTP, HTTPS,
FTP and FTPS. If there is a protocol-whitelist present, this
list is limited to those also allowed by the whitelist. As
redirection happens from within libcurl, it is impossible
for an HTTP redirect to a protocol implemented within
another remote helper.
When the curl version git was compiled with is too old to
support restrictions on protocol redirection, we warn the
user if GIT_ALLOW_PROTOCOL restrictions were requested. This
is a little inaccurate, as even without that variable in the
environment, we would still restrict SFTP, etc, and we do
not warn in that case. But anything else means we would
literally warn every time git accesses an http remote.
This commit includes a test, but it is not as robust as we
would hope. It redirects an http request to ftp, and checks
that curl complained about the protocol, which means that we
are relying on curl's specific error message to know what
happened. Ideally we would redirect to a working ftp server
and confirm that we can clone without protocol restrictions,
and not with them. But we do not have a portable way of
providing an ftp server, nor any other protocol that curl
supports (https is the closest, but we would have to deal
with certificates).
[jk: added test and version warning]
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-09-23 00:06:04 +02:00
|
|
|
#if LIBCURL_VERSION_NUM >= 0x071304
|
|
|
|
if (is_transport_allowed("http"))
|
|
|
|
allowed_protocols |= CURLPROTO_HTTP;
|
|
|
|
if (is_transport_allowed("https"))
|
|
|
|
allowed_protocols |= CURLPROTO_HTTPS;
|
|
|
|
if (is_transport_allowed("ftp"))
|
|
|
|
allowed_protocols |= CURLPROTO_FTP;
|
|
|
|
if (is_transport_allowed("ftps"))
|
|
|
|
allowed_protocols |= CURLPROTO_FTPS;
|
|
|
|
curl_easy_setopt(result, CURLOPT_REDIR_PROTOCOLS, allowed_protocols);
|
|
|
|
#else
|
|
|
|
if (transport_restrict_protocols())
|
|
|
|
warning("protocol restrictions not applied to curl redirects because\n"
|
|
|
|
"your curl version is too old (>= 7.19.4)");
|
|
|
|
#endif
|
2006-02-01 12:44:37 +01:00
|
|
|
	if (getenv("GIT_CURL_VERBOSE"))
		curl_easy_setopt(result, CURLOPT_VERBOSE, 1L);
	setup_curl_trace(result);

	curl_easy_setopt(result, CURLOPT_USERAGENT,
		user_agent ? user_agent : git_user_agent());

	if (curl_ftp_no_epsv)
		curl_easy_setopt(result, CURLOPT_FTP_USE_EPSV, 0);

#ifdef CURLOPT_USE_SSL
	if (curl_ssl_try)
		curl_easy_setopt(result, CURLOPT_USE_SSL, CURLUSESSL_TRY);
#endif
	/*
	 * CURL also examines these variables as a fallback; but we need to query
	 * them here in order to decide whether to prompt for missing password (cf.
	 * init_curl_proxy_auth()).
	 *
	 * Unlike many other common environment variables, these are historically
	 * lowercase only. It appears that CURL did not know this and implemented
	 * only uppercase variants, which was later corrected to take both - with
	 * the exception of http_proxy, which is lowercase only also in CURL. As
	 * the lowercase versions are the historical quasi-standard, they take
	 * precedence here, as in CURL.
	 */
	if (!curl_http_proxy) {
		if (!strcmp(http_auth.protocol, "https")) {
			var_override(&curl_http_proxy, getenv("HTTPS_PROXY"));
			var_override(&curl_http_proxy, getenv("https_proxy"));
		} else {
			var_override(&curl_http_proxy, getenv("http_proxy"));
		}
		if (!curl_http_proxy) {
			var_override(&curl_http_proxy, getenv("ALL_PROXY"));
			var_override(&curl_http_proxy, getenv("all_proxy"));
		}
	}

	if (curl_http_proxy) {
		curl_easy_setopt(result, CURLOPT_PROXY, curl_http_proxy);
#if LIBCURL_VERSION_NUM >= 0x071800
		if (starts_with(curl_http_proxy, "socks5h"))
			curl_easy_setopt(result,
				CURLOPT_PROXYTYPE, CURLPROXY_SOCKS5_HOSTNAME);
		else if (starts_with(curl_http_proxy, "socks5"))
			curl_easy_setopt(result,
				CURLOPT_PROXYTYPE, CURLPROXY_SOCKS5);
		else if (starts_with(curl_http_proxy, "socks4a"))
			curl_easy_setopt(result,
				CURLOPT_PROXYTYPE, CURLPROXY_SOCKS4A);
		else if (starts_with(curl_http_proxy, "socks"))
			curl_easy_setopt(result,
				CURLOPT_PROXYTYPE, CURLPROXY_SOCKS4);
#endif
		if (strstr(curl_http_proxy, "://"))
			credential_from_url(&proxy_auth, curl_http_proxy);
		else {
			struct strbuf url = STRBUF_INIT;
			strbuf_addf(&url, "http://%s", curl_http_proxy);
			credential_from_url(&proxy_auth, url.buf);
			strbuf_release(&url);
		}

		curl_easy_setopt(result, CURLOPT_PROXY, proxy_auth.host);
#if LIBCURL_VERSION_NUM >= 0x071304
		var_override(&curl_no_proxy, getenv("NO_PROXY"));
		var_override(&curl_no_proxy, getenv("no_proxy"));
		curl_easy_setopt(result, CURLOPT_NOPROXY, curl_no_proxy);
#endif
	}
	init_curl_proxy_auth(result);

	set_curl_keepalive(result);

	return result;
}

static void set_from_env(const char **var, const char *envname)
{
	const char *val = getenv(envname);
	if (val)
		*var = val;
}

void http_init(struct remote *remote, const char *url, int proactive_auth)
{
	char *low_speed_limit;
	char *low_speed_time;
	char *normalized_url;
	struct urlmatch_config config = { STRING_LIST_INIT_DUP };

	config.section = "http";
	config.key = NULL;
	config.collect_fn = http_options;
	config.cascade_fn = git_default_config;
	config.cb = NULL;

	http_is_verbose = 0;
	normalized_url = url_normalize(url, &config.url);

	git_config(urlmatch_config_entry, &config);
	free(normalized_url);

	if (curl_global_init(CURL_GLOBAL_ALL) != CURLE_OK)
		die("curl_global_init failed");

	http_proactive_auth = proactive_auth;

	if (remote && remote->http_proxy)
		curl_http_proxy = xstrdup(remote->http_proxy);

	if (remote)
		var_override(&http_proxy_authmethod, remote->http_proxy_authmethod);

	pragma_header = curl_slist_append(http_copy_default_headers(),
		"Pragma: no-cache");
	no_pragma_header = curl_slist_append(http_copy_default_headers(),
		"Pragma:");

#ifdef USE_CURL_MULTI
	{
		char *http_max_requests = getenv("GIT_HTTP_MAX_REQUESTS");
		if (http_max_requests != NULL)
			max_requests = atoi(http_max_requests);
	}

	curlm = curl_multi_init();
	if (!curlm)
		die("curl_multi_init failed");
#endif

	if (getenv("GIT_SSL_NO_VERIFY"))
		curl_ssl_verify = 0;

	set_from_env(&ssl_cert, "GIT_SSL_CERT");
#if LIBCURL_VERSION_NUM >= 0x070903
	set_from_env(&ssl_key, "GIT_SSL_KEY");
#endif
#if LIBCURL_VERSION_NUM >= 0x070908
	set_from_env(&ssl_capath, "GIT_SSL_CAPATH");
#endif
	set_from_env(&ssl_cainfo, "GIT_SSL_CAINFO");

	set_from_env(&user_agent, "GIT_HTTP_USER_AGENT");

	low_speed_limit = getenv("GIT_HTTP_LOW_SPEED_LIMIT");
	if (low_speed_limit != NULL)
		curl_low_speed_limit = strtol(low_speed_limit, NULL, 10);
	low_speed_time = getenv("GIT_HTTP_LOW_SPEED_TIME");
	if (low_speed_time != NULL)
		curl_low_speed_time = strtol(low_speed_time, NULL, 10);

	if (curl_ssl_verify == -1)
		curl_ssl_verify = 1;

	curl_session_count = 0;
#ifdef USE_CURL_MULTI
	if (max_requests < 1)
		max_requests = DEFAULT_MAX_REQUESTS;
#endif

	if (getenv("GIT_CURL_FTP_NO_EPSV"))
		curl_ftp_no_epsv = 1;

	if (url) {
		credential_from_url(&http_auth, url);
		if (!ssl_cert_password_required &&
		    getenv("GIT_SSL_CERT_PASSWORD_PROTECTED") &&
		    starts_with(url, "https://"))
			ssl_cert_password_required = 1;
	}

#ifndef NO_CURL_EASY_DUPHANDLE
	curl_default = get_curl_handle();
#endif
}

void http_cleanup(void)
{
	struct active_request_slot *slot = active_queue_head;

	while (slot != NULL) {
		struct active_request_slot *next = slot->next;
		if (slot->curl != NULL) {
#ifdef USE_CURL_MULTI
			curl_multi_remove_handle(curlm, slot->curl);
#endif
			curl_easy_cleanup(slot->curl);
		}
		free(slot);
		slot = next;
	}
	active_queue_head = NULL;

#ifndef NO_CURL_EASY_DUPHANDLE
	curl_easy_cleanup(curl_default);
#endif

#ifdef USE_CURL_MULTI
	curl_multi_cleanup(curlm);
#endif
	curl_global_cleanup();

	curl_slist_free_all(extra_http_headers);
	extra_http_headers = NULL;

	curl_slist_free_all(pragma_header);
	pragma_header = NULL;

	curl_slist_free_all(no_pragma_header);
	no_pragma_header = NULL;

	if (curl_http_proxy) {
		free((void *)curl_http_proxy);
		curl_http_proxy = NULL;
	}

	if (proxy_auth.password) {
		memset(proxy_auth.password, 0, strlen(proxy_auth.password));
		free(proxy_auth.password);
		proxy_auth.password = NULL;
	}

	free((void *)curl_proxyuserpwd);
	curl_proxyuserpwd = NULL;

	free((void *)http_proxy_authmethod);
	http_proxy_authmethod = NULL;

	if (cert_auth.password != NULL) {
		memset(cert_auth.password, 0, strlen(cert_auth.password));
		free(cert_auth.password);
		cert_auth.password = NULL;
	}
	ssl_cert_password_required = 0;

	free(cached_accept_language);
	cached_accept_language = NULL;
}

struct active_request_slot *get_active_slot(void)
{
	struct active_request_slot *slot = active_queue_head;
	struct active_request_slot *newslot;

#ifdef USE_CURL_MULTI
	int num_transfers;

	/* Wait for a slot to open up if the queue is full */
	while (active_requests >= max_requests) {
		curl_multi_perform(curlm, &num_transfers);
		if (num_transfers < active_requests)
			process_curl_messages();
	}
#endif

	while (slot != NULL && slot->in_use)
		slot = slot->next;

	if (slot == NULL) {
		newslot = xmalloc(sizeof(*newslot));
		newslot->curl = NULL;
		newslot->in_use = 0;
		newslot->next = NULL;

		slot = active_queue_head;
		if (slot == NULL) {
			active_queue_head = newslot;
		} else {
			while (slot->next != NULL)
				slot = slot->next;
			slot->next = newslot;
		}
		slot = newslot;
	}

	if (slot->curl == NULL) {
#ifdef NO_CURL_EASY_DUPHANDLE
		slot->curl = get_curl_handle();
#else
		slot->curl = curl_easy_duphandle(curl_default);
#endif
		curl_session_count++;
	}

	active_requests++;
	slot->in_use = 1;
	slot->results = NULL;
	slot->finished = NULL;
	slot->callback_data = NULL;
	slot->callback_func = NULL;
	curl_easy_setopt(slot->curl, CURLOPT_COOKIEFILE, curl_cookie_file);
	if (curl_save_cookies)
		curl_easy_setopt(slot->curl, CURLOPT_COOKIEJAR, curl_cookie_file);
	curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, pragma_header);
	curl_easy_setopt(slot->curl, CURLOPT_ERRORBUFFER, curl_errorstr);
	curl_easy_setopt(slot->curl, CURLOPT_CUSTOMREQUEST, NULL);
	curl_easy_setopt(slot->curl, CURLOPT_READFUNCTION, NULL);
	curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, NULL);
	curl_easy_setopt(slot->curl, CURLOPT_POSTFIELDS, NULL);
	curl_easy_setopt(slot->curl, CURLOPT_UPLOAD, 0);
	curl_easy_setopt(slot->curl, CURLOPT_HTTPGET, 1);
	/*
	 * http: set curl FAILONERROR each time we select a handle
	 *
	 * Because we reuse curl handles for multiple requests, the setup of
	 * a handle happens in two stages: stable, global setup and
	 * per-request setup. The lifecycle of a handle is something like:
	 *
	 *   1. get_curl_handle; do basic global setup that will last
	 *      through the whole program (e.g., setting the user agent,
	 *      ssl options, etc.)
	 *   2. get_active_slot; set up a per-request baseline (e.g.,
	 *      clearing the read/write functions, making it a GET request,
	 *      etc.)
	 *   3. perform the request with curl_*_perform functions
	 *   4. goto step 2 to perform another request
	 *
	 * Breaking it down this way means we can avoid doing global setup
	 * from step (1) repeatedly, but we still finish step (2) with a
	 * predictable baseline setup that callers can rely on.
	 *
	 * Until commit 6d052d7 (http: add HTTP_KEEP_ERROR option,
	 * 2013-04-05), setting curl's FAILONERROR option was global setup;
	 * we never changed it. However, 6d052d7 introduced an option where
	 * some requests might turn off FAILONERROR. Later requests using
	 * the same handle would have the option unexpectedly turned off,
	 * which meant they would not notice http failures at all.
	 *
	 * This could easily be seen in the test suite for the "half-auth"
	 * cases of t5541 and t5551. The initial requests turned off
	 * FAILONERROR, which meant it was erroneously off for the rpc
	 * POST. That worked fine for a successful request, but meant that
	 * we failed to react properly to the HTTP 401 (instead, we treated
	 * whatever the server handed us as a successful message body).
	 *
	 * The solution is simple: now that FAILONERROR is a per-request
	 * setting, we move it to get_active_slot to make sure it is reset
	 * for each request.
	 *
	 * Signed-off-by: Jeff King <peff@peff.net>
	 * Signed-off-by: Junio C Hamano <gitster@pobox.com>
	 */
	curl_easy_setopt(slot->curl, CURLOPT_FAILONERROR, 1);
	curl_easy_setopt(slot->curl, CURLOPT_RANGE, NULL);

#if LIBCURL_VERSION_NUM >= 0x070a08
	curl_easy_setopt(slot->curl, CURLOPT_IPRESOLVE, git_curl_ipresolve);
#endif
#ifdef LIBCURL_CAN_HANDLE_AUTH_ANY
	curl_easy_setopt(slot->curl, CURLOPT_HTTPAUTH, http_auth_methods);
#endif
	if (http_auth.password || curl_empty_auth)
		init_curl_http_auth(slot->curl);

	return slot;
}
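get_active_slot() above walks the queue for a free slot and appends a new one only when none exists, so slots (and their curl handles) are recycled across requests. A minimal, curl-free sketch of that reuse pattern; the `struct slot` and `get_slot` names are hypothetical stand-ins, not git's types:

```c
#include <stdlib.h>

/* Hypothetical stand-in for git's active_request_slot. */
struct slot {
	int in_use;
	struct slot *next;
};

static struct slot *queue_head;

/*
 * Reuse the first free slot, or append a fresh one at the tail,
 * mirroring the scan in get_active_slot().
 */
static struct slot *get_slot(void)
{
	struct slot *slot = queue_head;

	while (slot != NULL && slot->in_use)
		slot = slot->next;

	if (slot == NULL) {
		struct slot *newslot = calloc(1, sizeof(*newslot));

		if (queue_head == NULL) {
			queue_head = newslot;
		} else {
			slot = queue_head;
			while (slot->next != NULL)
				slot = slot->next;
			slot->next = newslot;
		}
		slot = newslot;
	}

	slot->in_use = 1;
	return slot;
}
```

Two consecutive calls hand out distinct slots; after clearing `in_use` on the first, the next call returns that same slot instead of allocating again.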

int start_active_slot(struct active_request_slot *slot)
{
#ifdef USE_CURL_MULTI
	CURLMcode curlm_result = curl_multi_add_handle(curlm, slot->curl);
	int num_transfers;

	if (curlm_result != CURLM_OK &&
	    curlm_result != CURLM_CALL_MULTI_PERFORM) {
		active_requests--;
		slot->in_use = 0;
		return 0;
	}

	/*
	 * We know there must be something to do, since we just added
	 * something.
	 */
	curl_multi_perform(curlm, &num_transfers);
#endif
	return 1;
}

#ifdef USE_CURL_MULTI
struct fill_chain {
	void *data;
	int (*fill)(void *);
	struct fill_chain *next;
};

static struct fill_chain *fill_cfg;

void add_fill_function(void *data, int (*fill)(void *))
{
	struct fill_chain *new = xmalloc(sizeof(*new));
	struct fill_chain **linkp = &fill_cfg;
	new->data = data;
	new->fill = fill;
	new->next = NULL;
	while (*linkp)
		linkp = &(*linkp)->next;
	*linkp = new;
}
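add_fill_function() appends with a pointer-to-pointer cursor (`linkp`), which collapses the empty-list and tail cases into a single `*linkp = new` assignment. A standalone sketch of the same idiom; `struct node` and `append` are illustrative names, not git's:

```c
#include <stdlib.h>

struct node {
	int value;
	struct node *next;
};

/*
 * Append via a pointer-to-pointer cursor, the same idiom as
 * add_fill_function()'s "linkp": walk the chain of "next" links and
 * assign through whichever link is currently NULL, whether that is
 * the head pointer itself or some node's next field.
 */
static void append(struct node **head, int value)
{
	struct node **linkp = head;
	struct node *n = calloc(1, sizeof(*n));

	n->value = value;
	while (*linkp)
		linkp = &(*linkp)->next;
	*linkp = n;
}
```

Appending to an empty list and appending to a populated one take the same code path, which is why the original needs no `if (!fill_cfg)` special case.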

void fill_active_slots(void)
{
	struct active_request_slot *slot = active_queue_head;

	while (active_requests < max_requests) {
		struct fill_chain *fill;
		for (fill = fill_cfg; fill; fill = fill->next)
			if (fill->fill(fill->data))
				break;

		if (!fill)
			break;
	}

	while (slot != NULL) {
		if (!slot->in_use && slot->curl != NULL
			&& curl_session_count > min_curl_sessions) {
			curl_easy_cleanup(slot->curl);
			slot->curl = NULL;
			curl_session_count--;
		}
		slot = slot->next;
	}
}

void step_active_slots(void)
{
	int num_transfers;
	CURLMcode curlm_result;

	do {
		curlm_result = curl_multi_perform(curlm, &num_transfers);
	} while (curlm_result == CURLM_CALL_MULTI_PERFORM);
	if (num_transfers < active_requests) {
		process_curl_messages();
		fill_active_slots();
	}
}
#endif

void run_active_slot(struct active_request_slot *slot)
{
#ifdef USE_CURL_MULTI
	fd_set readfds;
	fd_set writefds;
	fd_set excfds;
	int max_fd;
	struct timeval select_timeout;
	int finished = 0;

	slot->finished = &finished;
	while (!finished) {
		step_active_slots();

		if (slot->in_use) {
#if LIBCURL_VERSION_NUM >= 0x070f04
			long curl_timeout;
			curl_multi_timeout(curlm, &curl_timeout);
			if (curl_timeout == 0) {
				continue;
			} else if (curl_timeout == -1) {
				select_timeout.tv_sec = 0;
				select_timeout.tv_usec = 50000;
			} else {
				select_timeout.tv_sec = curl_timeout / 1000;
				select_timeout.tv_usec = (curl_timeout % 1000) * 1000;
			}
#else
			select_timeout.tv_sec = 0;
			select_timeout.tv_usec = 50000;
#endif

			max_fd = -1;
			FD_ZERO(&readfds);
			FD_ZERO(&writefds);
			FD_ZERO(&excfds);
			curl_multi_fdset(curlm, &readfds, &writefds, &excfds, &max_fd);

			/*
			 * It can happen that curl_multi_timeout returns a pathologically
			 * long timeout when curl_multi_fdset returns no file descriptors
			 * to read. See commit message for more details.
			 */
			if (max_fd < 0 &&
			    (select_timeout.tv_sec > 0 ||
			     select_timeout.tv_usec > 50000)) {
				select_timeout.tv_sec = 0;
				select_timeout.tv_usec = 50000;
			}

			select(max_fd+1, &readfds, &writefds, &excfds, &select_timeout);
		}
	}
#else
	while (slot->in_use) {
		slot->curl_result = curl_easy_perform(slot->curl);
		finish_active_slot(slot);
	}
#endif
}
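run_active_slot() converts curl_multi_timeout()'s millisecond hint into a `struct timeval` for select(), falling back to a 50ms poll when curl offers no suggestion (-1). A small sketch of just that conversion; the `select_timeout_from_ms` helper is hypothetical, and the original's zero-timeout short circuit (`continue`) is omitted:

```c
#include <sys/time.h>

/*
 * Convert a curl_multi_timeout()-style hint (milliseconds, or -1 for
 * "no suggestion") into a timeval for select(), mirroring the split
 * into whole seconds and leftover microseconds in run_active_slot().
 */
static struct timeval select_timeout_from_ms(long curl_timeout)
{
	struct timeval tv;

	if (curl_timeout == -1) {
		/* no hint from curl: poll again in 50ms */
		tv.tv_sec = 0;
		tv.tv_usec = 50000;
	} else {
		tv.tv_sec = curl_timeout / 1000;
		tv.tv_usec = (curl_timeout % 1000) * 1000;
	}
	return tv;
}
```

For example, a 1500ms hint becomes `{1, 500000}`, while -1 becomes the 50ms fallback `{0, 50000}`.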

static void release_active_slot(struct active_request_slot *slot)
{
	closedown_active_slot(slot);
	if (slot->curl && curl_session_count > min_curl_sessions) {
#ifdef USE_CURL_MULTI
		curl_multi_remove_handle(curlm, slot->curl);
#endif
		curl_easy_cleanup(slot->curl);
		slot->curl = NULL;
		curl_session_count--;
	}
#ifdef USE_CURL_MULTI
	fill_active_slots();
#endif
}

void finish_all_active_slots(void)
{
	struct active_request_slot *slot = active_queue_head;

	while (slot != NULL)
		if (slot->in_use) {
			run_active_slot(slot);
			slot = active_queue_head;
		} else {
			slot = slot->next;
		}
}

/* Helpers for modifying and creating URLs */
static inline int needs_quote(int ch)
{
	if (((ch >= 'A') && (ch <= 'Z'))
			|| ((ch >= 'a') && (ch <= 'z'))
			|| ((ch >= '0') && (ch <= '9'))
			|| (ch == '/')
			|| (ch == '-')
			|| (ch == '.'))
		return 0;
	return 1;
}

static char *quote_ref_url(const char *base, const char *ref)
{
	struct strbuf buf = STRBUF_INIT;
	const char *cp;
	int ch;

	end_url_with_slash(&buf, base);

	for (cp = ref; (ch = *cp) != 0; cp++)
		if (needs_quote(ch))
			strbuf_addf(&buf, "%%%02x", ch);
		else
			strbuf_addch(&buf, *cp);

	return strbuf_detach(&buf, NULL);
}
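quote_ref_url() percent-encodes every byte that needs_quote() rejects, leaving alphanumerics and `/`, `-`, `.` as-is. A self-contained sketch of the same encoding using a plain char buffer instead of git's strbuf; the `is_safe` and `quote_ref` helpers are hypothetical:

```c
#include <stdio.h>

/* Same character class as needs_quote() above, inverted. */
static int is_safe(int ch)
{
	return (ch >= 'A' && ch <= 'Z') || (ch >= 'a' && ch <= 'z') ||
	       (ch >= '0' && ch <= '9') || ch == '/' || ch == '-' || ch == '.';
}

/*
 * Percent-encode "ref" the way quote_ref_url() does, but into a
 * plain char buffer. Each unsafe byte becomes "%xx" (lowercase hex).
 */
static void quote_ref(const char *ref, char *out, size_t outlen)
{
	size_t used = 0;

	for (; *ref && used + 4 < outlen; ref++) {
		if (is_safe((unsigned char)*ref))
			out[used++] = *ref;
		else
			used += snprintf(out + used, outlen - used,
					 "%%%02x", (unsigned char)*ref);
	}
	out[used] = '\0';
}
```

A ref containing a space, say `refs/heads/topic branch`, comes out as `refs/heads/topic%20branch`, while a tag like `v1.0-rc1` passes through untouched.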

/*
 * http*: add helper methods for fetching objects (loose)
 *
 * The code handling the fetching of loose objects in http-push.c and
 * http-walker.c has been refactored into new methods and a new struct
 * (http_object_request) in http.c. They are not meant to be invoked
 * elsewhere.
 *
 * The new methods in http.c are:
 *   - new_http_object_request
 *   - process_http_object_request
 *   - finish_http_object_request
 *   - abort_http_object_request
 *   - release_http_object_request
 * and the new struct is http_object_request.
 *
 * RANGE_HEADER_SIZE and no_pragma_header are no longer made available
 * outside of http.c, since after the above changes, there are no
 * other instances of usage outside of http.c.
 *
 * Remove members of the transfer_request struct in http-push.c and
 * http-walker.c, including filename, real_sha1 and zret, as they are
 * no longer used.
 *
 * Move the methods append_remote_object_url() and
 * get_remote_object_url() from http-push.c to http.c. Additionally,
 * get_remote_object_url() is no longer defined only when
 * USE_CURL_MULTI is defined, since non-USE_CURL_MULTI code in http.c
 * uses it (namely, in new_http_object_request()).
 *
 * Refactor code from http-push.c::start_fetch_loose() and
 * http-walker.c::start_object_fetch_request() that deals with the
 * details of coming up with the filename to store the retrieved
 * object, resuming a previously aborted request, and making a new
 * curl request, into a new function, new_http_object_request().
 *
 * Refactor code from http-walker.c::process_object_request() into the
 * function process_http_object_request().
 *
 * Refactor code from http-push.c::finish_request() and
 * http-walker.c::finish_object_request() into a new function,
 * finish_http_object_request(). It returns the result of the
 * move_temp_to_file() invocation.
 *
 * Add a function, release_http_object_request(), which cleans up
 * object request data. http-push.c and http-walker.c invoke this
 * function separately; http-push.c::release_request() and
 * http-walker.c::release_object_request() do not invoke this
 * function.
 *
 * Add a function, abort_http_object_request(), which unlink()s the
 * object file and invokes release_http_object_request(). Update
 * http-walker.c::abort_object_request() to use this.
 *
 * Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
 * Signed-off-by: Junio C Hamano <gitster@pobox.com>
 */
void append_remote_object_url(struct strbuf *buf, const char *url,
			      const char *hex,
			      int only_two_digit_prefix)
{
	end_url_with_slash(buf, url);

	strbuf_addf(buf, "objects/%.*s/", 2, hex);
	if (!only_two_digit_prefix)
		strbuf_addf(buf, "%s", hex+2);
}

char *get_remote_object_url(const char *url, const char *hex,
			    int only_two_digit_prefix)
{
	struct strbuf buf = STRBUF_INIT;
	append_remote_object_url(&buf, url, hex, only_two_digit_prefix);
	return strbuf_detach(&buf, NULL);
}
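append_remote_object_url() splits a loose-object name into the two-hex-digit fan-out directory plus the remaining digits, matching the on-disk `objects/xx/...` layout. A sketch of the resulting URL shape; the `object_url` helper, the example base URL, and the sample hex string are all illustrative:

```c
#include <stdio.h>

/*
 * Build a loose-object URL the way append_remote_object_url() does:
 * the first two hex digits become the fan-out directory, the rest
 * the filename (unless only the two-digit prefix is wanted). "base"
 * is assumed to already end with '/' (what end_url_with_slash()
 * guarantees in the original).
 */
static void object_url(const char *base, const char *hex,
		       int only_two_digit_prefix, char *out, size_t outlen)
{
	if (only_two_digit_prefix)
		snprintf(out, outlen, "%sobjects/%.2s/", base, hex);
	else
		snprintf(out, outlen, "%sobjects/%.2s/%s", base, hex, hex + 2);
}
```

The two-digit-prefix form is what callers use when they only need the fan-out directory rather than a specific object.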

static int handle_curl_result(struct slot_results *results)
{
	/*
	 * If we see a failing http code with CURLE_OK, we have turned off
	 * FAILONERROR (to keep the server's custom error response), and should
	 * translate the code into failure here.
	 */
	if (results->curl_result == CURLE_OK &&
	    results->http_code >= 400) {
		results->curl_result = CURLE_HTTP_RETURNED_ERROR;
		/*
		 * Normally curl will already have put the "reason phrase"
		 * from the server into curl_errorstr; unfortunately without
		 * FAILONERROR it is lost, so we can give only the numeric
		 * status code.
		 */
		snprintf(curl_errorstr, sizeof(curl_errorstr),
			 "The requested URL returned error: %ld",
			 results->http_code);
	}

	if (results->curl_result == CURLE_OK) {
		credential_approve(&http_auth);
		/*
		 * http: use credential API to handle proxy authentication
		 *
		 * Currently, the only way to pass proxy credentials to curl
		 * is by including them in the proxy URL. Usually, this means
		 * they will end up on disk unencrypted, one way or another
		 * (by inclusion in ~/.gitconfig, shell profile or history).
		 * Since proxy authentication often uses a domain user,
		 * credentials can be security sensitive; therefore, a safer
		 * way of passing credentials is desirable.
		 *
		 * If the configured proxy contains a username but not a
		 * password, query the credential API for one. Also, make
		 * sure we approve/reject proxy credentials properly.
		 *
		 * For consistency reasons, add parsing of the
		 * http_proxy/https_proxy/all_proxy environment variables,
		 * which would otherwise be evaluated as a fallback by curl.
		 * Without this, we would have different semantics for git
		 * configuration and environment variables.
		 *
		 * Helped-by: Junio C Hamano <gitster@pobox.com>
		 * Helped-by: Eric Sunshine <sunshine@sunshineco.com>
		 * Helped-by: Elia Pinto <gitter.spiros@gmail.com>
		 * Signed-off-by: Knut Franke <k.franke@science-computing.de>
		 * Signed-off-by: Elia Pinto <gitter.spiros@gmail.com>
		 */
		if (proxy_auth.password)
			credential_approve(&proxy_auth);
		return HTTP_OK;
	} else if (missing_target(results))
		return HTTP_MISSING_TARGET;
	else if (results->http_code == 401) {
		if (http_auth.username && http_auth.password) {
			credential_reject(&http_auth);
			return HTTP_NOAUTH;
		} else {
#ifdef LIBCURL_CAN_HANDLE_AUTH_ANY
			http_auth_methods &= ~CURLAUTH_GSSNEGOTIATE;
#endif
			return HTTP_REAUTH;
		}
	} else {
		if (results->http_connectcode == 407)
			credential_reject(&proxy_auth);
#if LIBCURL_VERSION_NUM >= 0x070c00
		if (!curl_errorstr[0])
			strlcpy(curl_errorstr,
				curl_easy_strerror(results->curl_result),
				sizeof(curl_errorstr));
#endif
		return HTTP_ERROR;
	}
}
/*
 * http: never use curl_easy_perform
 *
 * We currently don't reuse http connections when fetching via the
 * smart-http protocol. This is bad because the TCP handshake
 * introduces latency, and especially because SSL connection setup may
 * be non-trivial. We can fix it by consistently using curl's "multi"
 * interface. The reason is rather complicated:
 *
 * Our http code has two ways of being used: queuing many "slots" to
 * be fetched in parallel, or fetching a single request in a blocking
 * manner. The parallel code is built on curl's "multi" interface.
 * Most of the single-request code uses http_request, which is built
 * on top of the parallel code (we just feed it one slot, and wait
 * until it finishes).
 *
 * However, one could also accomplish the single-request scheme by
 * avoiding curl's multi interface entirely and just using
 * curl_easy_perform. This is simpler, and is used by post_rpc in the
 * smart-http protocol.
 *
 * It does work to use the same curl handle in both contexts, as long
 * as it is not at the same time. However, internally curl may not
 * share all of the cached resources between both contexts. In
 * particular, a connection formed using the "multi" code will go into
 * a reuse pool connected to the "multi" object. Further requests
 * using the "easy" interface will not be able to reuse that
 * connection.
 *
 * The smart http protocol does ref discovery via http_request, which
 * uses the "multi" interface, and then follows up with the "easy"
 * interface for its rpc calls. As a result, we make two HTTP
 * connections rather than reusing a single one.
 *
 * We could teach the ref discovery to use the "easy" interface. But
 * it is only once we have done this discovery that we know whether
 * the protocol will be smart or dumb. If it is dumb, then our further
 * requests, which want to fetch objects in parallel, will not be able
 * to reuse the same connection.
 *
 * Instead, this patch switches post_rpc to build on the parallel
 * interface, which means that we use it consistently everywhere. It's
 * a little more complicated to use, but since we have the
 * infrastructure already, it doesn't add any code; we can just factor
 * out the relevant bits from http_request.
 *
 * Signed-off-by: Jeff King <peff@peff.net>
 * Signed-off-by: Junio C Hamano <gitster@pobox.com>
 */
int run_one_slot(struct active_request_slot *slot,
		 struct slot_results *results)
{
	slot->results = results;
	if (!start_active_slot(slot)) {
		snprintf(curl_errorstr, sizeof(curl_errorstr),
			 "failed to start HTTP request");
		return HTTP_START_FAILED;
	}

	run_active_slot(slot);
	return handle_curl_result(results);
}

struct curl_slist *http_copy_default_headers(void)
{
	struct curl_slist *headers = NULL, *h;

	for (h = extra_http_headers; h; h = h->next)
		headers = curl_slist_append(headers, h->data);

	return headers;
}

static CURLcode curlinfo_strbuf(CURL *curl, CURLINFO info, struct strbuf *buf)
{
	char *ptr;
	CURLcode ret;

	strbuf_reset(buf);
	ret = curl_easy_getinfo(curl, info, &ptr);
	if (!ret && ptr)
		strbuf_addstr(buf, ptr);
	return ret;
}

/*
 * Check for and extract a content-type parameter. "raw"
 * should be positioned at the start of the potential
 * parameter, with any whitespace already removed.
 *
 * "name" is the name of the parameter. The value is appended
 * to "out".
 */
static int extract_param(const char *raw, const char *name,
			 struct strbuf *out)
{
	size_t len = strlen(name);

	if (strncasecmp(raw, name, len))
		return -1;
	raw += len;

	if (*raw != '=')
		return -1;
	raw++;

	while (*raw && !isspace(*raw) && *raw != ';')
		strbuf_addch(out, *raw++);
	return 0;
}

/*
 * Extract a normalized version of the content type, with any
 * spaces suppressed, all letters lowercased, and no trailing ";"
 * or parameters.
 *
 * Note that we will silently remove even invalid whitespace. For
 * example, "text / plain" is specifically forbidden by RFC 2616,
 * but "text/plain" is the only reasonable output, and this keeps
 * our code simple.
 *
 * If the "charset" argument is not NULL, store the value of any
 * charset parameter there.
 *
 * Example:
 *   "TEXT/PLAIN; charset=utf-8" -> "text/plain", "utf-8"
 *   "text / plain" -> "text/plain"
 */
static void extract_content_type(struct strbuf *raw, struct strbuf *type,
				 struct strbuf *charset)
{
	const char *p;

	strbuf_reset(type);
	strbuf_grow(type, raw->len);
	for (p = raw->buf; *p; p++) {
		if (isspace(*p))
			continue;
		if (*p == ';') {
			p++;
			break;
		}
		strbuf_addch(type, tolower(*p));
	}

	if (!charset)
		return;

	strbuf_reset(charset);
	while (*p) {
		while (isspace(*p) || *p == ';')
			p++;
		if (!extract_param(p, "charset", charset))
			return;
		while (*p && !isspace(*p))
			p++;
	}

	if (!charset->len && starts_with(type->buf, "text/"))
		strbuf_addstr(charset, "ISO-8859-1");
}
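extract_content_type() normalizes the media type by dropping all whitespace, lowercasing, and stopping at the first `;`. A standalone sketch of just that normalization step, minus the charset-parameter handling; the `normalize_type` helper is hypothetical:

```c
#include <ctype.h>
#include <stddef.h>

/*
 * Normalize a content-type the way extract_content_type() does: drop
 * all whitespace, lowercase every character, and stop at the first
 * ';'. The charset parameter handling of the original is omitted.
 */
static void normalize_type(const char *raw, char *out, size_t outlen)
{
	size_t used = 0;

	for (; *raw && used + 1 < outlen; raw++) {
		if (isspace((unsigned char)*raw))
			continue;
		if (*raw == ';')
			break;
		out[used++] = tolower((unsigned char)*raw);
	}
	out[used] = '\0';
}
```

Both `"TEXT/PLAIN; charset=utf-8"` and the invalid-but-seen `"text / plain"` normalize to `"text/plain"`, matching the examples in the comment above.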

static void write_accept_language(struct strbuf *buf)
{
	/*
	 * MAX_DECIMAL_PLACES must not be larger than 3. If it is larger than
	 * that, q-value will be smaller than 0.001, the minimum q-value the
	 * HTTP specification allows. See
	 * http://tools.ietf.org/html/rfc7231#section-5.3.1 for q-value.
	 */
	const int MAX_DECIMAL_PLACES = 3;
	const int MAX_LANGUAGE_TAGS = 1000;
	const int MAX_ACCEPT_LANGUAGE_HEADER_SIZE = 4000;
	char **language_tags = NULL;
	int num_langs = 0;
	const char *s = get_preferred_languages();
	int i;
	struct strbuf tag = STRBUF_INIT;

	/* Don't add Accept-Language header if no language is preferred. */
	if (!s)
		return;

	/*
	 * Split the colon-separated string of preferred languages into
	 * language_tags array.
	 */
	do {
		/* collect language tag */
		for (; *s && (isalnum(*s) || *s == '_'); s++)
			strbuf_addch(&tag, *s == '_' ? '-' : *s);

		/* skip .codeset, @modifier and any other unnecessary parts */
		while (*s && *s != ':')
			s++;

		if (tag.len) {
			num_langs++;
			REALLOC_ARRAY(language_tags, num_langs);
			language_tags[num_langs - 1] = strbuf_detach(&tag, NULL);
			if (num_langs >= MAX_LANGUAGE_TAGS - 1) /* -1 for '*' */
				break;
		}
	} while (*s++);

	/* write Accept-Language header into buf */
	if (num_langs) {
		int last_buf_len = 0;
		int max_q;
		int decimal_places;
		char q_format[32];

		/* add '*' */
		REALLOC_ARRAY(language_tags, num_langs + 1);
		language_tags[num_langs++] = "*"; /* it's OK; this won't be freed */

		/* compute decimal_places */
		for (max_q = 1, decimal_places = 0;
		     max_q < num_langs && decimal_places <= MAX_DECIMAL_PLACES;
		     decimal_places++, max_q *= 10)
			;

		xsnprintf(q_format, sizeof(q_format), ";q=0.%%0%dd", decimal_places);

		strbuf_addstr(buf, "Accept-Language: ");

		for (i = 0; i < num_langs; i++) {
			if (i > 0)
				strbuf_addstr(buf, ", ");

			strbuf_addstr(buf, language_tags[i]);

			if (i > 0)
				strbuf_addf(buf, q_format, max_q - i);

			if (buf->len > MAX_ACCEPT_LANGUAGE_HEADER_SIZE) {
				strbuf_remove(buf, last_buf_len, buf->len - last_buf_len);
				break;
			}

			last_buf_len = buf->len;
		}
	}

	/* free language tags -- last one is a static '*' */
	for (i = 0; i < num_langs - 1; i++)
		free(language_tags[i]);
	free(language_tags);
}
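write_accept_language() derives its q-values from `max_q` and `decimal_places`: with N tags it picks the smallest power of ten >= N (capped at 3 decimal places) as `max_q`, then tag i (for i > 0) gets the q-value `max_q - i` printed with that many decimal places. A sketch of just that computation in isolation; the `qvalue` helper is hypothetical:

```c
#include <stdio.h>

/*
 * Reproduce write_accept_language()'s q-value scheme: choose the
 * smallest power of ten >= num_langs (at most 3 decimal places), then
 * format ";q=0.NNN" for entry i as (max_q - i), zero-padded to
 * decimal_places digits.
 */
static void qvalue(int num_langs, int i, char *out, size_t outlen)
{
	int max_q, decimal_places;
	char q_format[32];

	for (max_q = 1, decimal_places = 0;
	     max_q < num_langs && decimal_places <= 3;
	     decimal_places++, max_q *= 10)
		;
	snprintf(q_format, sizeof(q_format), ";q=0.%%0%dd", decimal_places);
	snprintf(out, outlen, q_format, max_q - i);
}
```

With three tags the second entry gets `;q=0.9`; once more than ten tags are present, two decimal places are needed and the second entry gets `;q=0.99`.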

/*
 * Get an Accept-Language header which indicates user's preferred languages.
 *
 * Examples:
 *   LANGUAGE= -> ""
 *   LANGUAGE=ko:en -> "Accept-Language: ko, en; q=0.9, *; q=0.1"
 *   LANGUAGE=ko_KR.UTF-8:sr@latin -> "Accept-Language: ko-KR, sr; q=0.9, *; q=0.1"
 *   LANGUAGE=ko LANG=en_US.UTF-8 -> "Accept-Language: ko, *; q=0.1"
 *   LANGUAGE= LANG=en_US.UTF-8 -> "Accept-Language: en-US, *; q=0.1"
 *   LANGUAGE= LANG=C -> ""
 */
static const char *get_accept_language(void)
{
	if (!cached_accept_language) {
		struct strbuf buf = STRBUF_INIT;
		write_accept_language(&buf);
		if (buf.len > 0)
			cached_accept_language = strbuf_detach(&buf, NULL);
	}

	return cached_accept_language;
}

static void http_opt_request_remainder(CURL *curl, off_t pos)
{
	char buf[128];
	xsnprintf(buf, sizeof(buf), "%"PRIuMAX"-", (uintmax_t)pos);
	curl_easy_setopt(curl, CURLOPT_RANGE, buf);
}

/* http_request() targets */
#define HTTP_REQUEST_STRBUF 0
#define HTTP_REQUEST_FILE 1

static int http_request(const char *url,
			void *result, int target,
			const struct http_get_options *options)
{
	struct active_request_slot *slot;
	struct slot_results results;
	struct curl_slist *headers = http_copy_default_headers();
	struct strbuf buf = STRBUF_INIT;
	const char *accept_language;
	int ret;

	slot = get_active_slot();
	curl_easy_setopt(slot->curl, CURLOPT_HTTPGET, 1);

	if (result == NULL) {
		curl_easy_setopt(slot->curl, CURLOPT_NOBODY, 1);
	} else {
		curl_easy_setopt(slot->curl, CURLOPT_NOBODY, 0);
		curl_easy_setopt(slot->curl, CURLOPT_FILE, result);

		if (target == HTTP_REQUEST_FILE) {
			off_t posn = ftello(result);
			curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION,
					 fwrite);
			if (posn > 0)
				http_opt_request_remainder(slot->curl, posn);
		} else
			curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION,
					 fwrite_buffer);
	}

	accept_language = get_accept_language();

	if (accept_language)
		headers = curl_slist_append(headers, accept_language);

	strbuf_addstr(&buf, "Pragma:");
	if (options && options->no_cache)
		strbuf_addstr(&buf, " no-cache");
	if (options && options->keep_error)
		curl_easy_setopt(slot->curl, CURLOPT_FAILONERROR, 0);

	headers = curl_slist_append(headers, buf.buf);

	curl_easy_setopt(slot->curl, CURLOPT_URL, url);
	curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, headers);
	curl_easy_setopt(slot->curl, CURLOPT_ENCODING, "gzip");
	ret = run_one_slot(slot, &results);

	if (options && options->content_type) {
		struct strbuf raw = STRBUF_INIT;
		curlinfo_strbuf(slot->curl, CURLINFO_CONTENT_TYPE, &raw);
		extract_content_type(&raw, options->content_type,
				     options->charset);
		strbuf_release(&raw);
	}

	if (options && options->effective_url)
		curlinfo_strbuf(slot->curl, CURLINFO_EFFECTIVE_URL,
				options->effective_url);

	curl_slist_free_all(headers);
	strbuf_release(&buf);

	return ret;
}
/*
 * Update the "base" url to a more appropriate value, as deduced by
 * redirects seen when requesting a URL starting with "url".
 *
 * The "asked" parameter is a URL that we asked curl to access, and must begin
 * with "base".
 *
 * The "got" parameter is the URL that curl reported to us as where we ended
 * up.
 *
 * Returns 1 if we updated the base url, 0 otherwise.
 *
 * Our basic strategy is to compare "base" and "asked" to find the bits
 * specific to our request. We then strip those bits off of "got" to yield the
 * new base. So for example, if our base is "http://example.com/foo.git",
 * and we ask for "http://example.com/foo.git/info/refs", we might end up
 * with "https://other.example.com/foo.git/info/refs". We would want the
 * new URL to become "https://other.example.com/foo.git".
 *
 * Note that this assumes a sane redirect scheme. It's entirely possible
 * in the example above to end up at a URL that does not even end in
 * "info/refs". In such a case we simply punt, as there is not much we can
 * do (and such a scheme is unlikely to represent a real git repository,
 * which means we are likely about to abort anyway).
 */
static int update_url_from_redirect(struct strbuf *base,
				    const char *asked,
				    const struct strbuf *got)
{
	const char *tail;
	size_t tail_len;

	if (!strcmp(asked, got->buf))
		return 0;

	if (!skip_prefix(asked, base->buf, &tail))
		die("BUG: update_url_from_redirect: %s is not a superset of %s",
		    asked, base->buf);

	tail_len = strlen(tail);

	if (got->len < tail_len ||
	    strcmp(tail, got->buf + got->len - tail_len))
		return 0; /* insane redirect scheme */

	strbuf_reset(base);
	strbuf_add(base, got->buf, got->len - tail_len);
	return 1;
}

static int http_request_reauth(const char *url,
			       void *result, int target,
			       struct http_get_options *options)
{
	int ret = http_request(url, result, target, options);
	if (options && options->effective_url && options->base_url) {
		if (update_url_from_redirect(options->base_url,
					     url, options->effective_url)) {
			credential_from_url(&http_auth, options->base_url->buf);
			url = options->effective_url->buf;
		}
	}

	if (ret != HTTP_REAUTH)
		return ret;

	/*
	 * If we are using KEEP_ERROR, the previous request may have
	 * put cruft into our output stream; we should clear it out before
	 * making our next request. We only know how to do this for
	 * the strbuf case, but that is enough to satisfy current callers.
	 */
	if (options && options->keep_error) {
		switch (target) {
		case HTTP_REQUEST_STRBUF:
			strbuf_reset(result);
			break;
		default:
			die("BUG: HTTP_KEEP_ERROR is only supported with strbufs");
		}
	}
	credential_fill(&http_auth);

	return http_request(url, result, target, options);
}

int http_get_strbuf(const char *url,
		    struct strbuf *result,
		    struct http_get_options *options)
{
	return http_request_reauth(url, result, HTTP_REQUEST_STRBUF, options);
}

/*
 * Downloads a URL and stores the result in the given file.
 *
 * If a previous interrupted download is detected (i.e. a previous temporary
 * file is still around) the download is resumed.
 */
static int http_get_file(const char *url, const char *filename,
			 struct http_get_options *options)
{
	int ret;
	struct strbuf tmpfile = STRBUF_INIT;
	FILE *result;

	strbuf_addf(&tmpfile, "%s.temp", filename);
	result = fopen(tmpfile.buf, "a");
	if (!result) {
		error("Unable to open local file %s", tmpfile.buf);
		ret = HTTP_ERROR;
		goto cleanup;
	}

	ret = http_request_reauth(url, result, HTTP_REQUEST_FILE, options);
	fclose(result);

	if (ret == HTTP_OK && finalize_object_file(tmpfile.buf, filename))
		ret = HTTP_ERROR;
cleanup:
	strbuf_release(&tmpfile);
	return ret;
}
int http_fetch_ref(const char *base, struct ref *ref)
{
	struct http_get_options options = {0};
	char *url;
	struct strbuf buffer = STRBUF_INIT;
	int ret = -1;

	options.no_cache = 1;
	url = quote_ref_url(base, ref->name);
	if (http_get_strbuf(url, &buffer, &options) == HTTP_OK) {
		strbuf_rtrim(&buffer);
		if (buffer.len == 40)
			ret = get_oid_hex(buffer.buf, &ref->old_oid);
		else if (starts_with(buffer.buf, "ref: ")) {
			ref->symref = xstrdup(buffer.buf + 5);
			ret = 0;
		}
	}

	strbuf_release(&buffer);
	free(url);
	return ret;
}

/* Helpers for fetching packs */
static char *fetch_pack_index(unsigned char *sha1, const char *base_url)
{
	char *url, *tmp;
	struct strbuf buf = STRBUF_INIT;

	if (http_is_verbose)
		fprintf(stderr, "Getting index for pack %s\n", sha1_to_hex(sha1));

	end_url_with_slash(&buf, base_url);
	strbuf_addf(&buf, "objects/pack/pack-%s.idx", sha1_to_hex(sha1));
	url = strbuf_detach(&buf, NULL);

	strbuf_addf(&buf, "%s.temp", sha1_pack_index_name(sha1));
	tmp = strbuf_detach(&buf, NULL);

	if (http_get_file(url, tmp, NULL) != HTTP_OK) {
		error("Unable to get pack index %s", url);
		free(tmp);
		tmp = NULL;
	}

	free(url);
	return tmp;
}

static int fetch_and_setup_pack_index(struct packed_git **packs_head,
				      unsigned char *sha1, const char *base_url)
{
	struct packed_git *new_pack;
	char *tmp_idx = NULL;
	int ret;

	if (has_pack_index(sha1)) {
		new_pack = parse_pack_index(sha1, sha1_pack_index_name(sha1));
		if (!new_pack)
			return -1; /* parse_pack_index() already issued error message */
		goto add_pack;
	}

	tmp_idx = fetch_pack_index(sha1, base_url);
	if (!tmp_idx)
		return -1;

	new_pack = parse_pack_index(sha1, tmp_idx);
	if (!new_pack) {
		unlink(tmp_idx);
		free(tmp_idx);

		return -1; /* parse_pack_index() already issued error message */
	}

	ret = verify_pack_index(new_pack);
	if (!ret) {
		close_pack_index(new_pack);
		ret = finalize_object_file(tmp_idx, sha1_pack_index_name(sha1));
	}
	free(tmp_idx);
	if (ret)
		return -1;

add_pack:
	new_pack->next = *packs_head;
	*packs_head = new_pack;
	return 0;
}

int http_get_info_packs(const char *base_url, struct packed_git **packs_head)
{
	struct http_get_options options = {0};
	int ret = 0, i = 0;
	char *url, *data;
	struct strbuf buf = STRBUF_INIT;
	unsigned char sha1[20];

	end_url_with_slash(&buf, base_url);
	strbuf_addstr(&buf, "objects/info/packs");
	url = strbuf_detach(&buf, NULL);

	options.no_cache = 1;
	ret = http_get_strbuf(url, &buf, &options);
	if (ret != HTTP_OK)
		goto cleanup;

	data = buf.buf;
	while (i < buf.len) {
		switch (data[i]) {
		case 'P':
			i++;
			if (i + 52 <= buf.len &&
			    starts_with(data + i, " pack-") &&
			    starts_with(data + i + 46, ".pack\n")) {
				get_sha1_hex(data + i + 6, sha1);
				fetch_and_setup_pack_index(packs_head, sha1,
						      base_url);
				i += 51;
				break;
			}
		default:
			while (i < buf.len && data[i] != '\n')
				i++;
		}
		i++;
	}

cleanup:
	free(url);
	return ret;
}

void release_http_pack_request(struct http_pack_request *preq)
{
	if (preq->packfile != NULL) {
		fclose(preq->packfile);
		preq->packfile = NULL;
	}
	preq->slot = NULL;
	free(preq->url);
	free(preq);
}

int finish_http_pack_request(struct http_pack_request *preq)
{
	struct packed_git **lst;
	struct packed_git *p = preq->target;
	char *tmp_idx;
	size_t len;
	struct child_process ip = CHILD_PROCESS_INIT;
	const char *ip_argv[8];

	close_pack_index(p);

	fclose(preq->packfile);
	preq->packfile = NULL;

	lst = preq->lst;
	while (*lst != p)
		lst = &((*lst)->next);
	*lst = (*lst)->next;

	if (!strip_suffix(preq->tmpfile, ".pack.temp", &len))
		die("BUG: pack tmpfile does not end in .pack.temp?");
	tmp_idx = xstrfmt("%.*s.idx.temp", (int)len, preq->tmpfile);

	ip_argv[0] = "index-pack";
	ip_argv[1] = "-o";
	ip_argv[2] = tmp_idx;
	ip_argv[3] = preq->tmpfile;
	ip_argv[4] = NULL;

	ip.argv = ip_argv;
	ip.git_cmd = 1;
	ip.no_stdin = 1;
	ip.no_stdout = 1;

	if (run_command(&ip)) {
		unlink(preq->tmpfile);
		unlink(tmp_idx);
		free(tmp_idx);
		return -1;
	}

	unlink(sha1_pack_index_name(p->sha1));

	if (finalize_object_file(preq->tmpfile, sha1_pack_name(p->sha1))
	    || finalize_object_file(tmp_idx, sha1_pack_index_name(p->sha1))) {
		free(tmp_idx);
		return -1;
	}

	install_packed_git(p);
	free(tmp_idx);
	return 0;
}

struct http_pack_request *new_http_pack_request(
	struct packed_git *target, const char *base_url)
{
	off_t prev_posn = 0;
	struct strbuf buf = STRBUF_INIT;
	struct http_pack_request *preq;

	preq = xcalloc(1, sizeof(*preq));
	preq->target = target;

	end_url_with_slash(&buf, base_url);
	strbuf_addf(&buf, "objects/pack/pack-%s.pack",
		sha1_to_hex(target->sha1));
	preq->url = strbuf_detach(&buf, NULL);

	snprintf(preq->tmpfile, sizeof(preq->tmpfile), "%s.temp",
		 sha1_pack_name(target->sha1));
	preq->packfile = fopen(preq->tmpfile, "a");
	if (!preq->packfile) {
		error("Unable to open local file %s for pack",
		      preq->tmpfile);
		goto abort;
	}

	preq->slot = get_active_slot();
	curl_easy_setopt(preq->slot->curl, CURLOPT_FILE, preq->packfile);
	curl_easy_setopt(preq->slot->curl, CURLOPT_WRITEFUNCTION, fwrite);
	curl_easy_setopt(preq->slot->curl, CURLOPT_URL, preq->url);
	curl_easy_setopt(preq->slot->curl, CURLOPT_HTTPHEADER,
		no_pragma_header);

	/*
	 * If there is data present from a previous transfer attempt,
	 * resume where it left off
	 */
	prev_posn = ftello(preq->packfile);
	if (prev_posn > 0) {
		if (http_is_verbose)
			fprintf(stderr,
				"Resuming fetch of pack %s at byte %"PRIuMAX"\n",
				sha1_to_hex(target->sha1), (uintmax_t)prev_posn);
		http_opt_request_remainder(preq->slot->curl, prev_posn);
	}

	return preq;

abort:
	free(preq->url);
	free(preq);
	return NULL;
}
|
http*: add helper methods for fetching objects (loose)
The code handling the fetching of loose objects in http-push.c and
http-walker.c have been refactored into new methods and a new struct
(object_http_request) in http.c. They are not meant to be invoked
elsewhere.
The new methods in http.c are
- new_http_object_request
- process_http_object_request
- finish_http_object_request
- abort_http_object_request
- release_http_object_request
and the new struct is http_object_request.
RANGER_HEADER_SIZE and no_pragma_header is no longer made available
outside of http.c, since after the above changes, there are no other
instances of usage outside of http.c.
Remove members of the transfer_request struct in http-push.c and
http-walker.c, including filename, real_sha1 and zret, as they are used
no longer used.
Move the methods append_remote_object_url() and get_remote_object_url()
from http-push.c to http.c. Additionally, get_remote_object_url() is no
longer defined only when USE_CURL_MULTI is defined, since
non-USE_CURL_MULTI code in http.c uses it (namely, in
new_http_object_request()).
Refactor code from http-push.c::start_fetch_loose() and
http-walker.c::start_object_fetch_request() that deals with the details
of coming up with the filename to store the retrieved object, resuming
a previously aborted request, and making a new curl request, into a new
function, new_http_object_request().
Refactor code from http-walker.c::process_object_request() into the
function, process_http_object_request().
Refactor code from http-push.c::finish_request() and
http-walker.c::finish_object_request() into a new function,
finish_http_object_request(). It returns the result of the
move_temp_to_file() invocation.
Add a function, release_http_object_request(), which cleans up object
request data. http-push.c and http-walker.c invoke this function
separately; http-push.c::release_request() and
http-walker.c::release_object_request() do not invoke this function.
Add a function, abort_http_object_request(), which unlink()s the object
file and invokes release_http_object_request(). Update
http-walker.c::abort_object_request() to use this.
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-06-06 10:44:02 +02:00
/* Helpers for fetching objects (loose) */

static size_t fwrite_sha1_file(char *ptr, size_t eltsize, size_t nmemb,
			       void *data)
{
	unsigned char expn[4096];
	size_t size = eltsize * nmemb;
	int posn = 0;
	struct http_object_request *freq = data;
	struct active_request_slot *slot = freq->slot;

	if (slot) {
		CURLcode c = curl_easy_getinfo(slot->curl, CURLINFO_HTTP_CODE,
					       &slot->http_code);
		if (c != CURLE_OK)
			die("BUG: curl_easy_getinfo for HTTP code failed: %s",
			    curl_easy_strerror(c));
		if (slot->http_code >= 400)
			return size;
	}

	do {
		ssize_t retval = xwrite(freq->localfile,
					(char *) ptr + posn, size - posn);
		if (retval < 0)
			return posn;
		posn += retval;
	} while (posn < size);

	freq->stream.avail_in = size;
	freq->stream.next_in = (void *)ptr;
	do {
		freq->stream.next_out = expn;
		freq->stream.avail_out = sizeof(expn);
		freq->zret = git_inflate(&freq->stream, Z_SYNC_FLUSH);
		git_SHA1_Update(&freq->c, expn,
				sizeof(expn) - freq->stream.avail_out);
	} while (freq->stream.avail_in && freq->zret == Z_OK);
	return size;
}

struct http_object_request *new_http_object_request(const char *base_url,
	unsigned char *sha1)
{
	char *hex = sha1_to_hex(sha1);
	const char *filename;
	char prevfile[PATH_MAX];
	int prevlocal;
	char prev_buf[PREV_BUF_SIZE];
	ssize_t prev_read = 0;
	off_t prev_posn = 0;
	struct http_object_request *freq;

	freq = xcalloc(1, sizeof(*freq));
	hashcpy(freq->sha1, sha1);
	freq->localfile = -1;

	filename = sha1_file_name(sha1);
	snprintf(freq->tmpfile, sizeof(freq->tmpfile),
		 "%s.temp", filename);

	snprintf(prevfile, sizeof(prevfile), "%s.prev", filename);
	unlink_or_warn(prevfile);
	rename(freq->tmpfile, prevfile);
	unlink_or_warn(freq->tmpfile);

	if (freq->localfile != -1)
		error("fd leakage in start: %d", freq->localfile);
	freq->localfile = open(freq->tmpfile,
			       O_WRONLY | O_CREAT | O_EXCL, 0666);
	/*
	 * This could have failed due to the "lazy directory creation";
	 * try to mkdir the last path component.
	 */
	if (freq->localfile < 0 && errno == ENOENT) {
		char *dir = strrchr(freq->tmpfile, '/');
		if (dir) {
			*dir = 0;
			mkdir(freq->tmpfile, 0777);
			*dir = '/';
		}
		freq->localfile = open(freq->tmpfile,
				       O_WRONLY | O_CREAT | O_EXCL, 0666);
	}

	if (freq->localfile < 0) {
		error_errno("Couldn't create temporary file %s", freq->tmpfile);
		goto abort;
	}

	git_inflate_init(&freq->stream);

	git_SHA1_Init(&freq->c);

	freq->url = get_remote_object_url(base_url, hex, 0);

	/*
	 * If a previous temp file is present, process what was already
	 * fetched.
	 */
	prevlocal = open(prevfile, O_RDONLY);
	if (prevlocal != -1) {
		do {
			prev_read = xread(prevlocal, prev_buf, PREV_BUF_SIZE);
			if (prev_read>0) {
				if (fwrite_sha1_file(prev_buf,
						     1,
						     prev_read,
						     freq) == prev_read) {
					prev_posn += prev_read;
				} else {
					prev_read = -1;
				}
			}
		} while (prev_read > 0);
		close(prevlocal);
	}
	unlink_or_warn(prevfile);

	/*
	 * Reset inflate/SHA1 if there was an error reading the previous temp
	 * file; also rewind to the beginning of the local file.
	 */
	if (prev_read == -1) {
		memset(&freq->stream, 0, sizeof(freq->stream));
		git_inflate_init(&freq->stream);
		git_SHA1_Init(&freq->c);
		if (prev_posn>0) {
			prev_posn = 0;
			lseek(freq->localfile, 0, SEEK_SET);
			if (ftruncate(freq->localfile, 0) < 0) {
				error_errno("Couldn't truncate temporary file %s",
					    freq->tmpfile);
				goto abort;
			}
		}
	}

	freq->slot = get_active_slot();

	curl_easy_setopt(freq->slot->curl, CURLOPT_FILE, freq);
	curl_easy_setopt(freq->slot->curl, CURLOPT_FAILONERROR, 0);
	curl_easy_setopt(freq->slot->curl, CURLOPT_WRITEFUNCTION, fwrite_sha1_file);
	curl_easy_setopt(freq->slot->curl, CURLOPT_ERRORBUFFER, freq->errorstr);
	curl_easy_setopt(freq->slot->curl, CURLOPT_URL, freq->url);
	curl_easy_setopt(freq->slot->curl, CURLOPT_HTTPHEADER, no_pragma_header);

	/*
	 * If we have successfully processed data from a previous fetch
	 * attempt, only fetch the data we don't already have.
	 */
	if (prev_posn>0) {
		if (http_is_verbose)
			fprintf(stderr,
				"Resuming fetch of object %s at byte %"PRIuMAX"\n",
				hex, (uintmax_t)prev_posn);
		http_opt_request_remainder(freq->slot->curl, prev_posn);
|
	}

	return freq;

abort:
	free(freq->url);
	free(freq);
	return NULL;
}

void process_http_object_request(struct http_object_request *freq)
{
	if (freq->slot == NULL)
		return;
	freq->curl_result = freq->slot->curl_result;
	freq->http_code = freq->slot->http_code;
	freq->slot = NULL;
}

int finish_http_object_request(struct http_object_request *freq)
{
	struct stat st;

	close(freq->localfile);
	freq->localfile = -1;

	process_http_object_request(freq);

	if (freq->http_code == 416) {
		warning("requested range invalid; we may already have all the data.");
	} else if (freq->curl_result != CURLE_OK) {
		if (stat(freq->tmpfile, &st) == 0)
			if (st.st_size == 0)
				unlink_or_warn(freq->tmpfile);
		return -1;
	}

	git_inflate_end(&freq->stream);
	git_SHA1_Final(freq->real_sha1, &freq->c);
	if (freq->zret != Z_STREAM_END) {
		unlink_or_warn(freq->tmpfile);
		return -1;
	}
	if (hashcmp(freq->sha1, freq->real_sha1)) {
		unlink_or_warn(freq->tmpfile);
		return -1;
	}
	freq->rename =
		finalize_object_file(freq->tmpfile, sha1_file_name(freq->sha1));

	return freq->rename;
}

void abort_http_object_request(struct http_object_request *freq)
{
	unlink_or_warn(freq->tmpfile);

	release_http_object_request(freq);
}

void release_http_object_request(struct http_object_request *freq)
{
	if (freq->localfile != -1) {
		close(freq->localfile);
		freq->localfile = -1;
	}
	if (freq->url != NULL) {
		free(freq->url);
		freq->url = NULL;
	}
	if (freq->slot != NULL) {
		freq->slot->callback_func = NULL;
		freq->slot->callback_data = NULL;
		release_active_slot(freq->slot);
		freq->slot = NULL;
	}
}