The code for warning_errno/die_errno has been refactored and a new
error_errno() reporting helper is introduced.
* nd/error-errno: (41 commits)
wrapper.c: use warning_errno()
vcs-svn: use error_errno()
upload-pack.c: use error_errno()
unpack-trees.c: use error_errno()
transport-helper.c: use error_errno()
sha1_file.c: use {error,die,warning}_errno()
server-info.c: use error_errno()
sequencer.c: use error_errno()
run-command.c: use error_errno()
rerere.c: use error_errno() and warning_errno()
reachable.c: use error_errno()
mailmap.c: use error_errno()
ident.c: use warning_errno()
http.c: use error_errno() and warning_errno()
grep.c: use error_errno()
gpg-interface.c: use error_errno()
fast-import.c: use error_errno()
entry.c: use error_errno()
editor.c: use error_errno()
diff-no-index.c: use error_errno()
...
"http.cookieFile" configuration variable clearly wants a pathname,
but we forgot to treat it as such by e.g. applying tilde expansion.
* bn/http-cookiefile-config:
http: expand http.cookieFile as a path
Documentation: config: improve word ordering for http.cookieFile
HTTP transport clients learned to throw extra HTTP headers at the
server, specified via http.extraHeader configuration variable.
* js/http-custom-headers:
http: support sending custom HTTP headers
This should handle .gitconfig files that specify things like:
[http]
cookieFile = "~/.gitcookies"
Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We introduce a way to send custom HTTP headers with all requests.
This allows us, for example, to send an extra token from build agents
for temporary access to private repositories. (This is the use case that
triggered this patch.)
This feature can be used like this:
git -c http.extraheader='Secret: sssh!' fetch $URL $REF
Note that `curl_easy_setopt(..., CURLOPT_HTTPHEADER, ...)` takes only
a single list, overriding any previous call. This means we have to
collect _all_ of the headers we want to use into a single list, and
feed it to cURL in one shot. Since we already unconditionally set a
"pragma" header when initializing the curl handles, we can add our new
headers to that list.
For callers which override the default header list (like probe_rpc),
we provide `http_copy_default_headers()` so they can do the same
trick.
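For illustration, a minimal sketch of the technique described above:
collect every header into one curl_slist and hand it to
CURLOPT_HTTPHEADER in a single call (function and variable names here
are illustrative, not the exact code from the patch):

    #include <curl/curl.h>

    /* Collect every header we want into one list, then set it once. */
    static struct curl_slist *build_headers(const char *extra_header)
    {
            struct curl_slist *headers = NULL;

            /* the "pragma" header we always send */
            headers = curl_slist_append(headers, "Pragma: no-cache");
            if (extra_header)
                    headers = curl_slist_append(headers, extra_header);
            return headers;
    }

    /* later, when configuring a handle:
     *     curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
     */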
Big thanks to Jeff King and Junio Hamano for their outstanding help and
patient reviews.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The socks5:// proxy support added back in 2.6.4 days was not aware
that socks5h:// proxies behave differently.
* jc/http-socks5h:
http: differentiate socks5:// and socks5h://
Felix Ruess <felix.ruess@gmail.com> noticed that with configuration
$ git config --global 'http.proxy=socks5h://127.0.0.1:1080'
connections to remote sites time out, waiting for DNS resolution.
The logic to detect various flavours of SOCKS proxy and ask the
libcurl layer to use the appropriate one understands proxy strings
that begin with socks5, socks4a, etc., but does not know socks5h,
and we end up using CURLPROXY_SOCKS5. The correct one to use is
CURLPROXY_SOCKS5_HOSTNAME.
https://curl.haxx.se/libcurl/c/CURLOPT_PROXY.html says
..., socks5h:// (the last one to enable socks5 and asking the
proxy to do the resolving, also known as CURLPROXY_SOCKS5_HOSTNAME
type).
which is consistent with the way the breakage was reported.
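A rough sketch of the kind of prefix check involved (hypothetical and
simplified; the real proxy-type detection lives in http.c):

    #include <string.h>
    #include <curl/curl.h>

    static void set_proxy_type(CURL *curl, const char *proxy)
    {
            /* socks5h must be checked before socks5 */
            if (!strncmp(proxy, "socks5h", 7))
                    curl_easy_setopt(curl, CURLOPT_PROXYTYPE,
                                     CURLPROXY_SOCKS5_HOSTNAME);
            else if (!strncmp(proxy, "socks5", 6))
                    curl_easy_setopt(curl, CURLOPT_PROXYTYPE,
                                     CURLPROXY_SOCKS5);
            else if (!strncmp(proxy, "socks4a", 7))
                    curl_easy_setopt(curl, CURLOPT_PROXYTYPE,
                                     CURLPROXY_SOCKS4A);
            else if (!strncmp(proxy, "socks", 5))
                    curl_easy_setopt(curl, CURLOPT_PROXYTYPE,
                                     CURLPROXY_SOCKS4);
    }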
Tested-by: Felix Ruess <felix.ruess@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Curl and its family of programs honor several proxy-related environment variables:
* http_proxy and https_proxy define the proxy for http/https connections.
* no_proxy (a comma-separated list of hosts) defines hosts that bypass the proxy.
This command will bypass the bad-proxy and connect to the host directly:
no_proxy=* https_proxy=http://bad-proxy/ \
curl -sk https://google.com/
Before commit 372370f (http: use credential API to handle proxy auth...),
the environment variable "no_proxy" would take effect if the config variable
"http.proxy" was not set. So the following command would not fail when not
behind a firewall.
no_proxy=* https_proxy=http://bad-proxy/ \
git ls-remote https://github.com/git/git
But commit 372370f not only reads the git config variable "http.proxy",
but also reads the "http_proxy" and "https_proxy" environment variables,
and sets the curl option using:
curl_easy_setopt(result, CURLOPT_PROXY, proxy_auth.host);
This caused "no_proxy" environment variable not working any more.
Set extra curl option "CURLOPT_NOPROXY" will fix this issue.
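A hedged sketch of the fix: pass the no_proxy environment value through
to curl alongside the proxy host (names are illustrative; only the
lowercase variable is checked here for brevity):

    #include <stdlib.h>
    #include <curl/curl.h>

    static void setup_proxy(CURL *curl, const char *proxy_host)
    {
            curl_easy_setopt(curl, CURLOPT_PROXY, proxy_host);
            /* setting CURLOPT_PROXY bypasses curl's own environment
             * handling, so honor no_proxy explicitly again */
            curl_easy_setopt(curl, CURLOPT_NOPROXY, getenv("no_proxy"));
    }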
Signed-off-by: Jiang Xin <xin.jiang@huawei.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
You can now set http.[<url>.]pinnedpubkey to specify the pinned
public key when building with recent enough versions of libcURL.
* ce/https-public-key-pinning:
http: implement public key pinning
Some authentication methods do not need username or password, but
libcurl needs some hint that it needs to perform authentication.
Supplying an empty username and password string is a valid way to
do so, but you can set the http.[<url>.]emptyAuth configuration
variable to achieve the same, if you find it cleaner.
* bc/http-empty-auth:
http: add option to try authentication without username
"git fetch" and friends that make network connections can now be
told to only use ipv4 (or ipv6).
* ew/force-ipv4:
connect & http: support -4 and -6 switches for remote operations
Add the http.pinnedpubkey configuration option for public key
pinning. It accepts any string supported by libcurl --
base64(sha256(pubkey)) or the filename of the full public key.
If cURL does not support pinning (because it is too old), output a
warning to the user.
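A minimal sketch of what wiring the option to libcurl might look like,
guarded by a version check (CURLOPT_PINNEDPUBLICKEY appeared in curl
7.39.0; the surrounding code is illustrative):

    #include <stdio.h>
    #include <curl/curl.h>

    static void set_pinned_key(CURL *curl, const char *pinned_key)
    {
            if (!pinned_key)
                    return;
    #if LIBCURL_VERSION_NUM >= 0x072700 /* 7.39.0 */
            curl_easy_setopt(curl, CURLOPT_PINNEDPUBLICKEY, pinned_key);
    #else
            fprintf(stderr,
                    "warning: public key pinning not supported by this curl version\n");
    #endif
    }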
Signed-off-by: Christoph Egger <christoph@christoph-egger.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Performing GSS-Negotiate authentication using Kerberos does not require
specifying a username or password, since that information is already
included in the ticket itself. However, libcurl refuses to perform
authentication if it has not been provided with a username and password.
Add an option, http.emptyAuth, that provides libcurl with an empty
username and password to make it attempt authentication anyway.
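A sketch of the idea: hand curl an empty username and password so it
still attempts the negotiated mechanism (CURLOPT_USERPWD is the
standard curl option; the wrapper function is illustrative):

    #include <curl/curl.h>

    static void enable_empty_auth(CURL *curl)
    {
            /* ":" means empty username and empty password; this is
             * enough of a hint for libcurl to attempt e.g. Negotiate */
            curl_easy_setopt(curl, CURLOPT_USERPWD, ":");
    }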
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Sometimes it is necessary to force IPv4-only or IPv6-only operation
on networks where name lookups may return a non-routable address and
stall remote operations.
The ssh(1) command has equivalent switches which we may pass when
we run it. There may be old ssh(1) implementations out there
which do not support these switches; they should report the
appropriate error in that case.
rsync support is untouched for now since it is deprecated and
scheduled to be removed.
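For the HTTP side, the curl knob involved is CURLOPT_IPRESOLVE; a
hedged sketch (the wrapper and its "family" parameter are illustrative):

    #include <curl/curl.h>

    /* family: 0 = any, 4 = IPv4 only, 6 = IPv6 only */
    static void set_ip_family(CURL *curl, int family)
    {
            long resolve = CURL_IPRESOLVE_WHATEVER;

            if (family == 4)
                    resolve = CURL_IPRESOLVE_V4;
            else if (family == 6)
                    resolve = CURL_IPRESOLVE_V6;
            curl_easy_setopt(curl, CURLOPT_IPRESOLVE, resolve);
    }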
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Reviewed-by: Torsten Bögershausen <tboegi@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Currently, the only way to pass proxy credentials to curl is by including them
in the proxy URL. Usually, this means they will end up on disk unencrypted, one
way or another (by inclusion in ~/.gitconfig, shell profile or history). Since
proxy authentication often uses a domain user, credentials can be security
sensitive; therefore, a safer way of passing credentials is desirable.
If the configured proxy contains a username but not a password, query the
credential API for one. Also, make sure we approve/reject proxy credentials
properly.
For consistency reasons, add parsing of http_proxy/https_proxy/all_proxy
environment variables, which would otherwise be evaluated as a fallback by curl.
Without this, we would have different semantics for git configuration and
environment variables.
Helped-by: Junio C Hamano <gitster@pobox.com>
Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Helped-by: Elia Pinto <gitter.spiros@gmail.com>
Signed-off-by: Knut Franke <k.franke@science-computing.de>
Signed-off-by: Elia Pinto <gitter.spiros@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
CURLAUTH_ANY does not work with proxies which answer unauthenticated requests
with a 307 redirect to an error page instead of a 407 listing supported
authentication methods. Therefore, allow the authentication method to be set
using the environment variable GIT_HTTP_PROXY_AUTHMETHOD or configuration
variables http.proxyAuthmethod and remote.<name>.proxyAuthmethod (in analogy
to http.proxy and remote.<name>.proxy).
The following values are supported:
* anyauth (default)
* basic
* digest
* negotiate
* ntlm
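Conceptually, each of these names maps onto one of libcurl's CURLAUTH_*
bitmask values; a hedged sketch of such a lookup table (the names on the
left come from the list above, the constants on the right are standard
curl ones):

    #include <string.h>
    #include <curl/curl.h>

    static const struct {
            const char *name;
            long auth;
    } proxy_authmethods[] = {
            { "anyauth",   CURLAUTH_ANY },
            { "basic",     CURLAUTH_BASIC },
            { "digest",    CURLAUTH_DIGEST },
            { "negotiate", CURLAUTH_GSSNEGOTIATE },
            { "ntlm",      CURLAUTH_NTLM },
    };

    static long lookup_proxy_auth(const char *name)
    {
            size_t i;

            for (i = 0; i < sizeof(proxy_authmethods) / sizeof(*proxy_authmethods); i++)
                    if (!strcmp(name, proxy_authmethods[i].name))
                            return proxy_authmethods[i].auth;
            return CURLAUTH_ANY; /* fall back to the default */
    }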
Signed-off-by: Knut Franke <k.franke@science-computing.de>
Signed-off-by: Elia Pinto <gitter.spiros@gmail.com>
Helped-by: Junio C Hamano <gitster@pobox.com>
Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Helped-by: Elia Pinto <gitter.spiros@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
More transition from "unsigned char[40]" to "struct object_id".
This needed a few merge fixups, but is mostly disentangled from other
topics.
* bc/object-id:
remote: convert functions to struct object_id
Remove get_object_hash.
Convert struct object to object_id
Add several uses of get_object_hash.
object: introduce get_object_hash macro.
ref_newer: convert to use struct object_id
push_refs_with_export: convert to struct object_id
get_remote_heads: convert to struct object_id
parse_fetch: convert to use struct object_id
add_sought_entry_mem: convert to struct object_id
Convert struct ref to use object_id.
sha1_file: introduce has_object_file helper.
* maint:
http: treat config options sslCAPath and sslCAInfo as paths
Documentation/diff: give --word-diff-regex=. example
filter-branch: deal with object name vs. pathname ambiguity in tree-filter
check-ignore: correct documentation about output
git-p4: clean up after p4 submit failure
git-p4: work with a detached head
git-p4: add option to system() to return subshell status
git-p4: add failing test for submit from detached head
remote-http(s): support SOCKS proxies
t5813: avoid creating urls that break on cygwin
Escape Git's exec path in contrib/rerere-train.sh script
allow hooks to ignore their standard input stream
rebase-i-exec: Allow space in SHELL_PATH
Documentation: make environment variable formatting more consistent
This enables ~ and ~user expansion for these config options.
Signed-off-by: Charles Bailey <cbailey32@bloomberg.net>
Signed-off-by: Jeff King <peff@peff.net>
Use struct object_id in three fields in struct ref and convert all the
necessary places that use it.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Jeff King <peff@peff.net>
With this patch we properly support SOCKS proxies, configured e.g. like
this:
git config http.proxy socks5://192.168.67.1:32767
Without this patch, Git mistakenly tries to use SOCKS proxies as if they
were HTTP proxies, resulting in an error message like:
fatal: unable to access 'http://.../': Proxy CONNECT aborted
This patch was required to work behind a faulty AP and scraped from
http://stackoverflow.com/questions/15227130/#15228479 and guarded with
an appropriate cURL version check by Johannes Schindelin.
Signed-off-by: Pat Thoyts <patthoyts@users.sourceforge.net>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit f8117f55 ("http: use off_t to store partial file size",
02-11-2015) changed the type of some variables from long to off_t.
Unfortunately, the off_t type is not portable and can be represented
by several different actual types (even multiple types on the same
platform). This makes it difficult to print an off_t variable in
a platform independent way. As a result, this commit causes gcc to
issue some printf format warnings on a couple of different platforms.
In order to suppress the warnings, change the format specifier to use
the PRIuMAX macro and cast the off_t argument to uintmax_t. (See also
the http_opt_request_remainder() function, which uses the same
solution).
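For example, the portable way to print an off_t value looks like this:

    #include <inttypes.h>
    #include <stdio.h>
    #include <sys/types.h>

    static void report_size(off_t prev_posn)
    {
            /* cast to uintmax_t so one format specifier works everywhere */
            printf("already have %" PRIuMAX " bytes\n", (uintmax_t)prev_posn);
    }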
Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Jeff King <peff@peff.net>
When we try to resume transfer of a partially-downloaded
object or pack, we fopen() the existing file for append,
then use ftell() to get the current position. We use a
"long", which can hold only 2GB on a 32-bit system, even
though packfiles may be larger than that.
Let's switch to using off_t, which should hold any file size
our system is capable of storing. We need to use ftello() to
get the off_t. This is in POSIX and hopefully available
everywhere; if not, we should be able to wrap it by falling
back to ftell(), which would presumably return "-1" on such
a large file (and we would simply skip resuming in that case).
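A minimal sketch of the resume logic described above (simplified; the
ftell() fallback mentioned in the text is left as a comment):

    #include <stdio.h>
    #include <sys/types.h>

    static off_t resume_position(FILE *fp)
    {
            off_t pos;

            if (fseek(fp, 0, SEEK_END))
                    return 0;
            pos = ftello(fp); /* off_t-sized, unlike ftell()'s long */
            if (pos < 0)
                    return 0; /* cannot tell; simply skip resuming */
            return pos;
    }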
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
An HTTP server is permitted to return a non-range response to an HTTP
range request (and Apache httpd in fact does this in some cases).
While libcurl knows how to correctly handle this (by skipping bytes
before and after the requested range), it only turns on this handling
if it is aware that a range request is being made. By manually
setting the range header instead of using CURLOPT_RANGE, we were
hiding the fact that this was a range request from libcurl. This
could cause corruption.
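The gist of the change, sketched: let curl build the Range header
itself so it knows a range request is in flight (buffer size and names
are illustrative):

    #include <inttypes.h>
    #include <stdio.h>
    #include <curl/curl.h>

    static void request_remainder(CURL *curl, off_t pos)
    {
            char range[64];

            /* "N-" means "from byte N to the end"; curl has copied
             * string options since 7.17.0, so a stack buffer is fine */
            snprintf(range, sizeof(range), "%" PRIuMAX "-", (uintmax_t)pos);
            curl_easy_setopt(curl, CURLOPT_RANGE, range);
    }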
Signed-off-by: David Turner <dturner@twopensource.com>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Many allocations that are manually counted (correctly) and then
followed by strcpy/sprintf have been replaced with less error-prone
constructs such as xstrfmt.
Macintosh-specific breakage was noticed and corrected in this
reroll.
* jk/war-on-sprintf: (70 commits)
name-rev: use strip_suffix to avoid magic numbers
use strbuf_complete to conditionally append slash
fsck: use for_each_loose_file_in_objdir
Makefile: drop D_INO_IN_DIRENT build knob
fsck: drop inode-sorting code
convert strncpy to memcpy
notes: document length of fanout path with a constant
color: add color_set helper for copying raw colors
prefer memcpy to strcpy
help: clean up kfmclient munging
receive-pack: simplify keep_arg computation
avoid sprintf and strcpy with flex arrays
use alloc_ref rather than hand-allocating "struct ref"
color: add overflow checks for parsing colors
drop strcpy in favor of raw sha1_to_hex
use sha1_to_hex_r() instead of strcpy
daemon: use cld->env_array when re-spawning
stat_tracking_info: convert to argv_array
http-push: use an argv_array for setup_revisions
fetch-pack: use argv_array for index-pack / unpack-objects
...
By default, libcurl will follow circular http redirects
forever. Let's put a cap on this so that somebody who can
trigger an automated fetch of an arbitrary repository (e.g.,
for CI) cannot convince git to loop infinitely.
The value chosen is 20, which is the same default that
Firefox uses.
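In curl terms this is a one-liner; a hedged sketch:

    #include <curl/curl.h>

    static void limit_redirects(CURL *curl)
    {
            curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
            /* same default cap as Firefox */
            curl_easy_setopt(curl, CURLOPT_MAXREDIRS, 20L);
    }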
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Previously, libcurl would follow redirection to any protocol
it was compiled with support for. This is desirable to allow
redirection from HTTP to HTTPS. However, it would even
successfully allow redirection from HTTP to SFTP, a protocol
that git does not otherwise support at all. Furthermore
git's new protocol-whitelisting could be bypassed by
following a redirect within the remote helper, as it was
only enforced at transport selection time.
This patch limits redirects within libcurl to HTTP, HTTPS,
FTP and FTPS. If there is a protocol-whitelist present, this
list is limited to those also allowed by the whitelist. As
redirection happens from within libcurl, it is impossible
for an HTTP redirect to end up at a protocol implemented
within another remote helper.
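A sketch of how the restriction might be expressed with libcurl (the
version guard mirrors the warning described below, since
CURLOPT_REDIR_PROTOCOLS only appeared in curl 7.19.4):

    #include <curl/curl.h>

    static void restrict_redirect_protocols(CURL *curl)
    {
    #if LIBCURL_VERSION_NUM >= 0x071304 /* 7.19.4 */
            curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS,
                             CURLPROTO_HTTP | CURLPROTO_HTTPS |
                             CURLPROTO_FTP | CURLPROTO_FTPS);
    #endif
    }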
When the curl version git was compiled with is too old to
support restrictions on protocol redirection, we warn the
user if GIT_ALLOW_PROTOCOL restrictions were requested. This
is a little inaccurate, as even without that variable in the
environment, we would still restrict SFTP, etc, and we do
not warn in that case. But anything else means we would
literally warn every time git accesses an http remote.
This commit includes a test, but it is not as robust as we
would hope. It redirects an http request to ftp, and checks
that curl complained about the protocol, which means that we
are relying on curl's specific error message to know what
happened. Ideally we would redirect to a working ftp server
and confirm that we can clone without protocol restrictions,
and not with them. But we do not have a portable way of
providing an ftp server, nor any other protocol that curl
supports (https is the closest, but we would have to deal
with certificates).
[jk: added test and version warning]
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we want to convert "foo.pack" to "foo.idx", we do it by
duplicating the original string and then munging the bytes
in place. Let's use strip_suffix and xstrfmt instead, which
have several advantages:
1. It's more clear what the intent is.
2. It does not implicitly rely on the fact that
strlen(".idx") <= strlen(".pack") to avoid an overflow.
3. We communicate the assumption that the input file ends
with ".pack" (and get a run-time check that this is so).
4. We drop calls to strcpy, which makes auditing the code
base easier.
Likewise, we can do this to convert ".pack" to ".bitmap",
avoiding some manual memory computation.
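A sketch of the resulting pattern, using git's own strip_suffix() and
xstrfmt() helpers (assumes git's headers; not the verbatim patch):

    #include "git-compat-util.h"

    static char *pack_to_idx(const char *path)
    {
            size_t len;

            if (!strip_suffix(path, ".pack", &len))
                    die("BUG: pack path does not end in .pack: %s", path);
            return xstrfmt("%.*s.idx", (int)len, path);
    }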
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We sometimes sprintf into fixed-size buffers when we know
that the buffer is large enough to fit the input (either
because it's a constant, or because it's numeric input that
is bounded in size). Likewise with strcpy of constant
strings.
However, these sites make it hard to audit sprintf and
strcpy calls for buffer overflows, as a reader has to
cross-reference the size of the array with the input. Let's
use xsnprintf instead, which communicates to a reader that
we don't expect this to overflow (and catches the mistake in
case we do).
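The pattern looks roughly like this (a sketch; xsnprintf is git's
wrapper that dies on truncation rather than silently clipping, and the
function here is illustrative):

    #include "git-compat-util.h"

    static void fill_hdr(char *hdr, size_t hdr_len, unsigned long size)
    {
            /* we "know" this fits; xsnprintf makes that assumption checkable */
            xsnprintf(hdr, hdr_len, "blob %lu", size);
    }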
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A new configuration variable http.sslVersion can be used to specify
what specific version of SSL/TLS to use to make a connection.
* ep/http-configure-ssl-version:
http: add support for specifying the SSL version
Teach git about a new option, "http.sslVersion", which permits one
to specify the SSL version to use when negotiating SSL connections.
The setting can be overridden by the GIT_SSL_VERSION environment
variable.
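Behind the scenes this boils down to mapping the configured name onto
curl's CURL_SSLVERSION_* constants; a hedged sketch (only two of the
supported names are shown):

    #include <string.h>
    #include <curl/curl.h>

    static void set_ssl_version(CURL *curl, const char *name)
    {
            long v = CURL_SSLVERSION_DEFAULT;

            if (!strcmp(name, "sslv3"))
                    v = CURL_SSLVERSION_SSLv3;
            else if (!strcmp(name, "tlsv1"))
                    v = CURL_SSLVERSION_TLSv1;
            /* further names (tlsv1.1, tlsv1.2, ...) need newer curl */
            curl_easy_setopt(curl, CURLOPT_SSLVERSION, v);
    }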
Signed-off-by: Elia Pinto <gitter.spiros@gmail.com>
Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since 5a688fe4 ("core.sharedrepository = 0mode" should set, not
loosen, 2009-03-25), we kept reminding ourselves:
NEEDSWORK: this should be renamed to finalize_temp_file() as
"moving" is only a part of what it does, when no patch between
master to pu changes the call sites of this function.
without doing anything about it. Let's do so.
The purpose of this function was not to move but to finalize. The
detail of the primary implementation of finalizing was to link the
temporary file to its final name and then to unlink, which wasn't
even "moving". The alternative implementation did "move" by calling
rename(2), which is a fun tangent.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We used to ask libCURL to use the most secure authentication method
available when talking to an HTTP proxy only when we were told to
talk to one via configuration variables. We now ask libCURL to
always use the most secure authentication method, because the user
can tell libCURL to use an HTTP proxy via an environment variable
without using configuration variables.
* et/http-proxyauth:
http: always use any proxy auth method available
We set CURLOPT_PROXYAUTH to use the most secure authentication
method available only when the user has set configuration variables
to specify a proxy. However, libcurl also supports specifying a
proxy through environment variables. In that case libcurl defaults
to only using the Basic proxy authentication method, because we do
not use CURLOPT_PROXYAUTH.
Set CURLOPT_PROXYAUTH to always use the most secure authentication
method available, even when there is no git configuration telling us
to use a proxy. This allows the user to use environment variables to
configure a proxy that requires an authentication method different
from Basic.
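The change amounts to setting this option unconditionally when a proxy
is in use; sketched:

    #include <curl/curl.h>

    static void init_proxy_auth(CURL *curl)
    {
            /* let curl negotiate the most secure method the proxy offers */
            curl_easy_setopt(curl, CURLOPT_PROXYAUTH, CURLAUTH_ANY);
    }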
Signed-off-by: Enrique A. Tobis <etobis@twosigma.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Introduce the http.<url>.SSLCipherList configuration variable to tweak
the list of cipher suites to be used with libcURL when talking with
https:// sites.
* ls/http-ssl-cipher-list:
http: add support for specifying an SSL cipher list
Teach git about a new option, "http.sslCipherList", which permits one to
specify a list of ciphers to use when negotiating SSL connections. The
setting can be overridden by the GIT_SSL_CIPHER_LIST environment
variable.
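The underlying curl knob is CURLOPT_SSL_CIPHER_LIST; a minimal sketch
of passing the configured value through (wrapper name is illustrative):

    #include <curl/curl.h>

    static void set_cipher_list(CURL *curl, const char *ciphers)
    {
            /* ciphers would come from http.sslCipherList or
             * GIT_SSL_CIPHER_LIST */
            if (ciphers)
                    curl_easy_setopt(curl, CURLOPT_SSL_CIPHER_LIST, ciphers);
    }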
Signed-off-by: Lars Kellogg-Stedman <lars@redhat.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The cleanup function is used in 4 places now and it's always safe to
free up the memory as well.
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Calling setlocale(LC_MESSAGES, ...) directly from http.c, without
including <locale.h>, was causing compilation warnings. Move the
helper function to gettext.c that already includes the header and
where locale-related issues are handled.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We did not check the curl library version before using the
CURLOPT_PROXYAUTH feature, which may not exist.
* tc/missing-http-proxyauth:
http: support curl < 7.10.7
A broken pack .idx file in the receiving repository prevented the
dumb http transport from fetching a good copy of it from the other
side.
* jk/dumb-http-idx-fetch-fix:
dumb-http: do not pass NULL path to parse_pack_index
Using environment variable LANGUAGE and friends on the client side,
HTTP-based transports now send Accept-Language when making requests.
* ye/http-accept-language:
http: add Accept-Language header if possible
Mark file-local symbols as "static", and drop functions that nobody
uses.
* jc/unused-symbols:
shallow.c: make check_shallow_file_for_update() static
remote.c: make clear_cas_option() static
urlmatch.c: make match_urls() static
revision.c: make save_parents() and free_saved_parents() static
line-log.c: make line_log_data_init() static
pack-bitmap.c: make pack_bitmap_filename() static
prompt.c: remove git_getpass() nobody uses
http.c: make finish_active_slot() and handle_curl_result() static
Commit dd61399 introduced support for http proxies that require
authentication but it relies on the CURLOPT_PROXYAUTH option which was
added in curl 7.10.7.
This makes sure proxy authentication is only enabled if libcurl can
support it.
Signed-off-by: Tom G. Christensen <tgc@statsbiblioteket.dk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add an Accept-Language header which indicates the user's preferred
languages defined by $LANGUAGE, $LC_ALL, $LC_MESSAGES and $LANG.
Examples:
LANGUAGE= -> ""
LANGUAGE=ko:en -> "Accept-Language: ko, en;q=0.9, *;q=0.1"
LANGUAGE=ko LANG=en_US.UTF-8 -> "Accept-Language: ko, *;q=0.1"
LANGUAGE= LANG=en_US.UTF-8 -> "Accept-Language: en-US, *;q=0.1"
This gives git servers a chance to display remote error messages in
the user's preferred language.
Limit the number of languages to 1,000 because the q-value must not be
smaller than 0.001, and limit the length of the Accept-Language header
to 4,000 bytes for some HTTP servers which cannot accept such a long header.
Signed-off-by: Yi EungJun <eungjun.yi@navercorp.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Once upon a time, dumb http always fetched .idx files
directly into their final location, and then checked their
validity with parse_pack_index. This was refactored in
commit 750ef42 (http-fetch: Use temporary files for
pack-*.idx until verified, 2010-04-19), which uses the
following logic:
1. If we have the idx already in place, see if it's
valid (using parse_pack_index). If so, use it.
2. Otherwise, fetch the .idx to a tempfile, check
that, and if so move it into place.
3. Either way, fetch the pack itself if necessary.
However, it got step 1 wrong. We pass a NULL path parameter
to parse_pack_index, so an existing .idx file always looks
broken. Worse, we do not treat this broken .idx as an
opportunity to re-fetch, but instead return an error,
ignoring the pack entirely. This can lead to a dumb-http
fetch failing to retrieve the necessary objects.
This doesn't come up much in practice, because it must be a
packfile that we found out about (and whose .idx we stored)
during an earlier dumb-http fetch, but whose packfile we
_didn't_ fetch. I.e., we did a partial clone of a
repository, didn't need some packfiles, and now a followup
fetch needs them.
Discovery and tests by Charles Bailey <charles@hashpling.org>.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
They used to be used directly by remote-curl.c for the smart-http
protocol. But they got wrapped by run_one_slot() in beed336 (http:
never use curl_easy_perform, 2014-02-18). Any future users are
expected to follow that route.
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Apache servers using mod_auth_kerb can be configured to allow the user
to authenticate either using Negotiate (using the Kerberos ticket) or
Basic authentication (using the Kerberos password). Often, one will
want to use Negotiate authentication if it is available, but fall back
to Basic authentication if the ticket is missing or expired.
However, libcurl will try very hard to use something other than Basic
auth, even over HTTPS. If Basic and something else are offered, libcurl
will never attempt to use Basic, even if the other option fails.
Teach the HTTP client code to stop trying authentication mechanisms that
don't use a password (currently Negotiate) after the first failure,
since if they failed the first time, they will never succeed.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
CodingGuidelines states that the first #include in C files should be
git-compat-util.h or another header file that includes it, such as
cache.h or builtin.h.
Signed-off-by: David Aguilar <davvid@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Unless there is a good reason, we should use die() rather than
fprintf/exit. We can also shorten the message to match other curl init
failures (and match our usual lowercase no-full-stop style).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Most struct child_process variables are cleared using memset first after
declaration. Provide a macro, CHILD_PROCESS_INIT, that can be used to
initialize them statically instead. That's shorter, doesn't require a
function call and is slightly more readable (especially given that we
already have STRBUF_INIT, ARGV_ARRAY_INIT etc.).
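The before/after shape of the change, sketched (assumes git's
run-command.h; the surrounding code is elided):

    /* before: two steps, easy to forget the memset */
    struct child_process cp;
    memset(&cp, 0, sizeof(cp));

    /* after: one declaration, statically initialized */
    struct child_process cp = CHILD_PROCESS_INIT;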
Helped-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* jk/skip-prefix:
http-push: refactor parsing of remote object names
imap-send: use skip_prefix instead of using magic numbers
use skip_prefix to avoid repeated calculations
git: avoid magic number with skip_prefix
fetch-pack: refactor parsing in get_ack
fast-import: refactor parsing of spaces
stat_opt: check extra strlen call
daemon: use skip_prefix to avoid magic numbers
fast-import: use skip_prefix for parsing input
use skip_prefix to avoid repeating strings
use skip_prefix to avoid magic numbers
transport-helper: avoid reading past end-of-string
fast-import: fix read of uninitialized argv memory
apply: use skip_prefix instead of raw addition
refactor skip_prefix to return a boolean
avoid using skip_prefix as a boolean
daemon: mark some strings as const
parse_diff_color_slot: drop ofs parameter
In some cases, we use starts_with to check for a prefix, and
then use an already-calculated prefix length to advance a
pointer past the prefix. There are no magic numbers or
duplicated strings here, but we can still make the code
simpler and more obvious by using skip_prefix.
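The typical transformation looks roughly like this (a sketch assuming
git's headers; the option name is illustrative):

    #include "git-compat-util.h"

    static const char *parse_depth(const char *arg)
    {
            const char *val;

            /*
             * before: if (starts_with(arg, "--depth=")) val = arg + 8;
             * after:  skip_prefix() checks and advances in one step
             */
            if (skip_prefix(arg, "--depth=", &val))
                    return val;
            return NULL;
    }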
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
extract_content_type() could not extract a charset parameter if the
parameter is not the first one and there is a whitespace and a following
semicolon just before the parameter. For example:
text/plain; format=fixed ;charset=utf-8
And it also could not handle correctly some other cases, such as:
text/plain; charset=utf-8; format=fixed
text/plain; some-param="a long value with ;semicolons;"; charset=utf-8
Thanks-to: Jeff King <peff@peff.net>
Signed-off-by: Yi EungJun <eungjun.yi@navercorp.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is specified by RFC 2616 as the default if no "charset"
parameter is given.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since the previous commit, we now give a sanitized,
shortened version of the content-type header to any callers
who ask for it.
This patch adds back a way for them to cleanly access
specific parameters to the type. We could easily extract all
parameters and make them available via a string_list, but:
1. That complicates the interface and memory management.
2. In practice, no planned callers care about anything
except the charset.
This patch therefore goes with the simplest thing, and we
can expand or change the interface later if it becomes
necessary.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we get a content-type from curl, we get the whole
header line, including any parameters, and without any
normalization (like downcasing or whitespace) applied.
If we later try to match it with strcmp() or even
strcasecmp(), we may get false negatives.
This could cause two visible behaviors:
1. We might fail to recognize a smart-http server by its
content-type.
2. We might fail to relay text/plain error messages to
users (especially if they contain a charset parameter).
This patch teaches the http code to extract and normalize
just the type/subtype portion of the string. This is
technically passing out less information to the callers, who
can no longer see the parameters. But none of the current
callers cares, and a future patch will add back an
easier-to-use method for accessing those parameters.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* mh/object-code-cleanup:
sha1_file.c: document a bunch of functions defined in the file
sha1_file_name(): declare to return a const string
find_pack_entry(): document last_found_pack
replace_object: use struct members instead of an array
Change the return value of sha1_file_name() to (const char *).
(Callers have no business mucking about here.) Change callers
accordingly, deleting a few superfluous temporary variables along the
way.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We currently don't reuse http connections when fetching via
the smart-http protocol. This is bad because the TCP
handshake introduces latency, and especially because SSL
connection setup may be non-trivial.
We can fix it by consistently using curl's "multi"
interface. The reason is rather complicated:
Our http code has two ways of being used: queuing many
"slots" to be fetched in parallel, or fetching a single
request in a blocking manner. The parallel code is built on
curl's "multi" interface. Most of the single-request code
uses http_request, which is built on top of the parallel
code (we just feed it one slot, and wait until it finishes).
However, one could also accomplish the single-request scheme
by avoiding curl's multi interface entirely and just using
curl_easy_perform. This is simpler, and is used by post_rpc
in the smart-http protocol.
It does work to use the same curl handle in both contexts,
as long as it is not at the same time. However, internally
curl may not share all of the cached resources between both
contexts. In particular, a connection formed using the
"multi" code will go into a reuse pool connected to the
"multi" object. Further requests using the "easy" interface
will not be able to reuse that connection.
The smart http protocol does ref discovery via http_request,
which uses the "multi" interface, and then follows up with
the "easy" interface for its rpc calls. As a result, we make
two HTTP connections rather than reusing a single one.
We could teach the ref discovery to use the "easy"
interface. But it is only once we have done this discovery
that we know whether the protocol will be smart or dumb. If
it is dumb, then our further requests, which want to fetch
objects in parallel, will not be able to reuse the same
connection.
Instead, this patch switches post_rpc to build on the
parallel interface, which means that we use it consistently
everywhere. It's a little more complicated to use, but since
we have the infrastructure already, it doesn't add any code;
we can just factor out the relevant bits from http_request.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Remove a few duplicate implementations of prefix/suffix comparison
functions, and rename them to starts_with and ends_with.
* cc/starts-n-ends-with:
replace {pre,suf}fixcmp() with {starts,ends}_with()
strbuf: introduce starts_with() and ends_with()
builtin/remote: remove postfixcmp() and use suffixcmp() instead
environment: normalize use of prefixcmp() by removing " != 0"
Leaving only the function definitions and declarations so that any
new topic in flight can still make use of the old functions, replace
existing uses of the prefixcmp() and suffixcmp() with new API
functions.
The change can be recreated by mechanically applying this:
$ git grep -l -e prefixcmp -e suffixcmp -- \*.c |
grep -v strbuf\\.c |
xargs perl -pi -e '
s|!prefixcmp\(|starts_with\(|g;
s|prefixcmp\(|!starts_with\(|g;
s|!suffixcmp\(|ends_with\(|g;
s|suffixcmp\(|!ends_with\(|g;
'
on the result of preparatory changes in this series.
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Issue "100 Continue" responses to help use of GSS-Negotiate
authentication scheme over HTTP transport when needed.
* bc/http-100-continue:
remote-curl: fix large pushes with GSSAPI
remote-curl: pass curl slot_results back through run_slot
http: return curl's AUTHAVAIL via slot_results
Callers of the http code may want to know which auth types
were available for the previous request. But after finishing
with the curl slot, they are not supposed to look at the
curl handle again. We already handle returning other
information via the slot_results struct; let's add a flag to
check the available auth.
Note that older versions of curl did not support this, so we
simply return 0 (something like "-1" would be worse, as the
value is a bitflag and we might accidentally set a flag).
This is sufficient for the callers planned in this series,
who only trigger some optional behavior if particular bits
are set, and can live with a fake "no bits" answer.
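Retrieving the bitflag from curl is a one-liner; a hedged sketch of the
idea (the older-curl "return 0" fallback described above would be a
compile-time version check around this call in the real code):

    #include <curl/curl.h>

    static long available_auth(CURL *curl)
    {
            long auth_avail = 0;

            curl_easy_getinfo(curl, CURLINFO_HTTPAUTH_AVAIL, &auth_avail);
            return auth_avail;
    }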
Signed-off-by: Jeff King <peff@peff.net>
Better handle the case where the http transport gets redirected
during the authorization request.
* jk/http-auth-redirects:
http.c: Spell the null pointer as NULL
remote-curl: rewrite base url from info/refs redirects
remote-curl: store url as a strbuf
remote-curl: make refs_url a strbuf
http: update base URLs when we see redirects
http: provide effective url to callers
http: hoist credential request out of handle_curl_result
http: refactor options to http_get_*
http_request: factor out curlinfo_strbuf
http_get_file: style fixes
Commit 1bbcc224 ("http: refactor options to http_get_*", 28-09-2013)
changed the type of the final 'options' argument of the http_get_file()
function from an int to a 'struct http_get_options' pointer.
However, it neglected to update the (single) call site. Since this
call was passing '0' to that argument, it was (correctly) being
interpreted as a null pointer. Change the argument to NULL.
Noticed by sparse. ("Using plain integer as NULL pointer")
Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
Acked-by: Jeff King <peff@peff.net>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit a15d069 taught git to use curl's SOCKOPTFUNCTION hook
to turn on TCP keepalives. However, modern versions of curl
have a TCP_KEEPALIVE option, which can do this for us. As an
added bonus, the curl code knows how to turn on keepalive
for a much wider variety of platforms. The only downside to
using this option is that not everybody has a new enough curl.
Let's split our keepalive options into three conditionals:
1. With curl 7.25.0 and newer, we rely on curl to do it
right.
2. With older curl that still knows SOCKOPTFUNCTION, we
use the code from a15d069.
3. Otherwise, we are out of luck, and the call is a no-op.
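A sketch of the three-way conditional (the 7.25.0 and 7.16.0 version
boundaries are the standard ones for CURLOPT_TCP_KEEPALIVE and
CURLOPT_SOCKOPTFUNCTION; the callback stands in for the one from a15d069):

    #include <sys/socket.h>
    #include <curl/curl.h>

    static int sockopt_callback(void *data, curl_socket_t fd, curlsocktype type)
    {
            int on = 1;

            setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));
            return 0; /* CURL_SOCKOPT_OK */
    }

    static void set_keepalive(CURL *curl)
    {
    #if LIBCURL_VERSION_NUM >= 0x071900 /* 7.25.0: curl does it for us */
            curl_easy_setopt(curl, CURLOPT_TCP_KEEPALIVE, 1L);
    #elif LIBCURL_VERSION_NUM >= 0x071000 /* 7.16.0: has SOCKOPTFUNCTION */
            curl_easy_setopt(curl, CURLOPT_SOCKOPTFUNCTION, sockopt_callback);
    #else
            /* no-op: curl is too old for either approach */
    #endif
    }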
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If a caller asks the http_get_* functions to go to a
particular URL and we end up elsewhere due to a redirect,
the effective_url field can tell us where we went.
It would be nice to remember this redirect and short-cut
further requests for two reasons:
1. It's more efficient. Otherwise we spend an extra http
round-trip to the server for each subsequent request,
just to get redirected.
2. If we end up with an http 401 and are going to ask for
credentials, it is to feed them to the redirect target.
If the redirect is an http->https upgrade, this means
our credentials may be provided on the http leg, just
to end up redirected to https. And if the redirect
crosses server boundaries, then curl will drop the
credentials entirely as it follows the redirect.
However, it is not enough to simply record the effective
URL we saw and use that for subsequent requests. We were
originally fed a "base" url like:
http://example.com/foo.git
and we want to figure out what the new base is, even though
the URLs we see may be:
original: http://example.com/foo.git/info/refs
effective: http://example.com/bar.git/info/refs
Subsequent requests will not be for "info/refs", but for
other paths relative to the base. We must ask the caller to
pass in the original base, and we must pass the redirected
base back to the caller (so that it can generate more URLs
from it). Furthermore, we need to feed the new base to the
credential code, so that requests to credential helpers (or
to the user) match the URL we will be requesting.
This patch teaches http_request_reauth to do this munging.
Since it is the caller who cares about making more URLs, it
seems at first glance that callers could simply check
effective_url themselves and handle it. However, since we
need to update the credential struct before the second
re-auth request, we have to do it inside http_request_reauth.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
When we ask curl to access a URL, it may follow one or more
redirects to reach the final location. We have no idea
this has happened, as curl takes care of the details and
simply returns the final content to us.
The final URL that we ended up with can be accessed via
CURLINFO_EFFECTIVE_URL. Let's make that optionally available
to callers of http_get_*, so that they can make further
decisions based on the redirection.
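Fetching the final URL from curl looks like this (sketch):

    #include <curl/curl.h>

    static const char *final_url(CURL *curl)
    {
            char *url = NULL;

            /* points into curl-owned storage; copy it if it must
             * outlive the handle */
            curl_easy_getinfo(curl, CURLINFO_EFFECTIVE_URL, &url);
            return url;
    }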
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
When we are handling a curl response code in http_request or
in the remote-curl RPC code, we use the handle_curl_result
helper to translate curl's response into an easy-to-use
code. When we see an HTTP 401, we do one of two things:
1. If we already had a filled-in credential, we mark it as
rejected, and then return HTTP_NOAUTH to indicate to
the caller that we failed.
2. If we didn't, then we ask for a new credential and tell
the caller HTTP_REAUTH to indicate that they may want
to try again.
Rejecting in the first case makes sense; it is the natural
result of the request we just made. However, prompting for
more credentials in the second step does not always make
sense. We do not know for sure that the caller is going to
make a second request, and nor are we sure that it will be
to the same URL. Logically, the prompt belongs not to the
request we just finished, but to the request we are (maybe)
about to make.
In practice, it is very hard to trigger any bad behavior.
Currently, if we make a second request, it will always be to
the same URL (even in the face of redirects, because curl
handles the redirects internally). And we almost always
retry on HTTP_REAUTH these days. The one exception is if we
are streaming a large RPC request to the server (e.g., a
pushed packfile), in which case we cannot restart. It's
extremely unlikely to see a 401 response at this stage,
though, as we would typically have seen it when we sent a
probe request, before streaming the data.
This patch drops the automatic prompt out of case 2, and
instead requires the caller to do it. This is a few extra
lines of code, and the bug it fixes is unlikely to come up
in practice. But it is conceptually cleaner, and paves the
way for better handling of credentials across redirects.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
This is a follow up to commit e47a8583 (enable SO_KEEPALIVE for
connected TCP sockets, 2011-12-06).
Sockets may never receive notification of some link errors,
causing "git fetch" or similar processes to hang forever.
Enabling keepalive messages allows hung processes to error out
after a few minutes/hours depending on the keepalive settings of
the system.
I noticed this problem with some non-interactive cronjobs getting
hung when talking to HTTP servers.
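The mechanism is the standard SO_KEEPALIVE socket option; a minimal
sketch:

    #include <sys/socket.h>

    static void enable_keepalive(int sockfd)
    {
            int ka = 1;

            if (setsockopt(sockfd, SOL_SOCKET, SO_KEEPALIVE, &ka, sizeof(ka)))
                    ; /* not fatal; we merely lose dead-link detection */
    }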
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Over time, the http_get_strbuf function has grown several
optional parameters. We now have a bitfield with multiple
boolean options, as well as an optional strbuf for returning
the content-type of the response. And a future patch in this
series is going to add another strbuf option.
Treating these as separate arguments has a few downsides:
1. Most call sites need to add extra NULLs and 0s for the
options they aren't interested in.
2. The http_get_* functions are actually wrappers around
2 layers of low-level implementation functions. We have
to pass these options through individually.
3. The http_get_strbuf wrapper learned these options, but
nobody bothered to do so for http_get_file, even though
it is backed by the same function that does understand
the options.
Let's consolidate the options into a single struct. For the
common case of the default options, we'll allow callers to
simply pass a NULL for the options struct.
The resulting code is often a few lines longer, but it ends
up being easier to read (and to change as we add new
options, since we do not need to update each call site).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
When we retrieve the content-type of an http response, curl
gives us a pointer to internal storage, which we then copy
into a strbuf. Let's factor out the get-and-copy routine,
which can be used for getting other curl info.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Besides being ugly, the extra parentheses are idiomatic for
suppressing compiler warnings when we are assigning within a
conditional. We aren't doing that here, and they just
confuse the reader.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Allow section.<urlpattern>.var configuration variables to be
treated as a "virtual" section.var given a URL, and use the
mechanism to enhance http.* configuration variables.
This is a reroll of Kyle J. McKay's work.
* jc/url-match:
builtin/config.c: compilation fix
config: "git config --get-urlmatch" parses section.<url>.key
builtin/config: refactor collect_config()
config: parse http.<url>.<variable> using urlmatch
config: add generic callback wrapper to parse section.<url>.key
config: add helper to normalize and match URLs
http.c: fix parsing of http.sslCertPasswordProtected variable
Use urlmatch_config_entry() to wrap the underlying
http_options() two-level variable parser in order to set
http.<variable> to the value with the most specific URL in the
configuration.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Kyle J. McKay <mackyle@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The existing code triggers only when the configuration variable is
set to true. Once the variable is set to true in a more generic
configuration file (e.g. ~/.gitconfig), it cannot be overridden to
false in the repository-specific one (e.g. .git/config).
Signed-off-by: Junio C Hamano <gitster@pobox.com>
HTTP servers may send Set-Cookie headers in a response and expect them
to be set on subsequent requests. By default, libcurl behavior is to
store such cookies in memory and reuse them across requests within a
single session. However, it may also make sense, depending on the
server and the cookies, to store them across sessions. Provide users
an option to enable this behavior, writing cookies out to the same
file specified in http.cookiefile.
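In curl terms, reading cookies is CURLOPT_COOKIEFILE and writing them
back out is CURLOPT_COOKIEJAR; a hedged sketch of wiring both to the
same path (wrapper and flag names are illustrative):

    #include <curl/curl.h>

    static void setup_cookies(CURL *curl, const char *cookie_file,
                              int save_cookies)
    {
            if (!cookie_file)
                    return;
            curl_easy_setopt(curl, CURLOPT_COOKIEFILE, cookie_file);
            if (save_cookies)
                    curl_easy_setopt(curl, CURLOPT_COOKIEJAR, cookie_file);
    }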
Signed-off-by: Dave Borowitz <dborowitz@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Older cURL wanted the piece of memory we call it with to be stable, but
we updated the auth material after handing it to a call.
* bc/http-keep-memory-given-to-curl:
http.c: don't rewrite the user:passwd string multiple times
Curl older than 7.17 (RHEL 4.X provides 7.12 and RHEL 5.X provides
7.15) requires that we manage any strings that we pass to it as
pointers. So, we really shouldn't be modifying this strbuf after we
have passed it to curl.
Our interaction with curl is currently safe (before or after this
patch) since the pointer that is passed to curl is never invalidated;
it is repeatedly rewritten with the same sequence of characters but
the strbuf functions never need to allocate a larger string, so the
same memory buffer is reused.
This "guarantee" of safety is somewhat subtle and could be overlooked
by someone who may want to add a more complex handling of the username
and password. So, let's stop modifying this strbuf after we have
passed it to curl, but also leave a note to describe the assumptions
that have been made about username/password lifetime and to draw
attention to the code.
Signed-off-by: Brandon Casey <drafnel@gmail.com>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Because we reuse curl handles for multiple requests, the
setup of a handle happens in two stages: stable, global
setup and per-request setup. The lifecycle of a handle is
something like:
1. get_curl_handle; do basic global setup that will last
through the whole program (e.g., setting the user
agent, ssl options, etc)
2. get_active_slot; set up a per-request baseline (e.g.,
clearing the read/write functions, making it a GET
request, etc)
3. perform the request with curl_*_perform functions
4. goto step 2 to perform another request
Breaking it down this way means we can avoid doing global
setup from step (1) repeatedly, but we still finish step (2)
with a predictable baseline setup that callers can rely on.
Until commit 6d052d7 (http: add HTTP_KEEP_ERROR option,
2013-04-05), setting curl's FAILONERROR option was a global
setup; we never changed it. However, 6d052d7 introduced an
option where some requests might turn off FAILONERROR. Later
requests using the same handle would have the option
unexpectedly turned off, which meant they would not notice
http failures at all.
This could easily be seen in the test-suite for the
"half-auth" cases of t5541 and t5551. The initial requests
turned off FAILONERROR, which meant it was erroneously off
for the rpc POST. That worked fine for a successful request,
but meant that we failed to react properly to the HTTP 401
(instead, we treated whatever the server handed us as a
successful message body).
The solution is simple: now that FAILONERROR is a
per-request setting, we move it to get_active_slot to make
sure it is reset for each request.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a boolean http.sslTry option which allows enabling AUTH SSL/TLS and
encrypted data transfers when connecting via the regular FTP protocol.
Default is false since it might trigger certificate verification errors on
misconfigured servers.
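On the curl side this maps to the CURLOPT_USE_SSL option with
CURLUSESSL_TRY (older curl spelled these CURLOPT_FTP_SSL and
CURLFTPSSL_TRY); a hedged sketch:

    #include <curl/curl.h>

    static void set_ssl_try(CURL *curl, int ssl_try)
    {
            if (ssl_try)
                    curl_easy_setopt(curl, CURLOPT_USE_SSL,
                                     (long)CURLUSESSL_TRY);
    }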
Signed-off-by: Modestas Vainius <modestas@vainius.eu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This function is a single-liner and is only called from one
place. Just inline it, which makes the code more obvious.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we report an http error code, we say something like:
error: The requested URL reported failure: 403 Forbidden while accessing http://example.com/repo.git
Everything between "error:" and "while" is written by curl,
and the resulting sentence is hard to read (especially
because there is no punctuation between curl's sentence and
the remainder of ours). Instead, let's re-order this to give
better flow:
error: unable to access 'http://example.com/repo.git': The requested URL reported failure: 403 Forbidden
This is still annoyingly long, but at least reads more
clearly left to right.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>