Commit Graph

400 Commits

Author SHA1 Message Date
Junio C Hamano
c69f2f8c86 Merge branch 'cs/http-use-basic-after-failed-negotiate'
Regression fix for a change made during this cycle.

* cs/http-use-basic-after-failed-negotiate:
  Revert "remote-curl: fall back to basic auth if Negotiate fails"
  t5551: test http interaction with credential helpers
2021-05-21 05:49:41 +09:00
Jeff King
ecf7b129fa Revert "remote-curl: fall back to basic auth if Negotiate fails"
This reverts commit 1b0d9545bb.

That commit does fix the situation it intended to (avoiding Negotiate
even when the credentials were provided in the URL), but it creates a
more serious regression: we now never hit the conditional for "we had a
username and password, tried them, but the server still gave us a 401".
That has two bad effects:

 1. we never call credential_reject(), and thus a bogus credential
    stored by a helper will live on forever

 2. we never return HTTP_NOAUTH, so the error message the user gets is
    "The requested URL returned error: 401", instead of "Authentication
    failed".

Doing this correctly seems non-trivial, as we don't know whether the
Negotiate auth was a problem. Since this is a regression in the upcoming
v2.32.0 release (for which we're in -rc0), let's revert for now and work
on a fix separately.

(Note that this isn't a pure revert; the previous commit added a test
showing the regression, so we can now flip it to expect_success).

Reported-by: Ben Humphreys <behumphreys@atlassian.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-05-19 10:09:58 +09:00
brian m. carlson
5951bf467e Use the final_oid_fn to finalize hashing of object IDs
When we're hashing a value which is going to be an object ID, we want to
zero-pad that value if necessary.  To do so, use the final_oid_fn
instead of the final_fn anytime we're going to create an object ID to
ensure we perform this operation.
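
A typical conversion looks like this (illustrative call site; the
struct object_id output lets the hash layer zero-pad as needed):

  /* before: writes only the raw hash bytes, no padding */
  the_hash_algo->final_fn(oid.hash, &ctx);

  /* after: fills a struct object_id, zero-padded as necessary */
  the_hash_algo->final_oid_fn(&oid, &ctx);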

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-04-27 16:31:38 +09:00
Junio C Hamano
5013802862 Merge branch 'cs/http-use-basic-after-failed-negotiate'
When accessing a server with a URL like https://user:pass@site/, we
did not fall back to basic authentication with the credential
material embedded in the URL after the "Negotiate" authentication
failed.  Now we do.

* cs/http-use-basic-after-failed-negotiate:
  remote-curl: fall back to basic auth if Negotiate fails
2021-03-30 14:35:37 -07:00
Junio C Hamano
8c81fce4b0 Merge branch 'js/http-pki-credential-store'
The http codepath learned to let the credential layer cache the
password used to unlock a certificate that has successfully been
used.

* js/http-pki-credential-store:
  http: drop the check for an empty proxy password before approving
  http: store credential when PKI auth is used
2021-03-26 14:59:02 -07:00
Christopher Schenk
1b0d9545bb remote-curl: fall back to basic auth if Negotiate fails
When the username and password are supplied in a URL like
https://myuser:secret@git.example/myrepo.git and the server supports the
Negotiate authentication method, git does not fall back to basic auth;
libcurl keeps retrying only the Negotiate method.

Stop using the Negotiate authentication method after the first failure
because if it fails on the first try it will never succeed.
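
A minimal sketch of the idea (assuming the usual http_auth_methods
bitmask in http.c; this is not the exact patch):

  /* after Negotiate fails once, stop offering it on retries */
  if (results->http_code == 401 &&
      (http_auth_methods & CURLAUTH_GSSNEGOTIATE))
          http_auth_methods &= ~CURLAUTH_GSSNEGOTIATE;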

Signed-off-by: Christopher Schenk <christopher@cschenk.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-03-22 11:55:41 -07:00
René Scharfe
ca56dadb4b use CALLOC_ARRAY
Add and apply a semantic patch for converting code that open-codes
CALLOC_ARRAY to use it instead.  It shortens the code and infers the
element size automatically.
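
The conversion has this shape (hypothetical call site):

  /* open-coded form */
  struct entry **items = xcalloc(nr, sizeof(*items));

  /* with the macro; the element size is inferred from the pointer */
  struct entry **items;
  CALLOC_ARRAY(items, nr);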

Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-03-13 16:00:09 -08:00
John Szakmeister
a4a4439fdf http: drop the check for an empty proxy password before approving
credential_approve() already checks for a non-empty password before
saving, so there's no need to do the extra check here.

Signed-off-by: John Szakmeister <john@szakmeister.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-03-11 22:17:10 -08:00
John Szakmeister
cd27f604e4 http: store credential when PKI auth is used
We already looked for the PKI credentials in the credential store, but
failed to approve it on success.  Meaning, the PKI certificate password
was never stored and git would request it on every connection to the
remote.  Let's complete the chain by storing the certificate password on
success.

Likewise, we also need to reject the credential when there is a failure.
Curl appears to report client certificate issues with the
CURLE_SSL_CERTPROBLEM error.  This includes not only a bad
password, but potentially other client-certificate-related problems.
Since we cannot get more information from curl, we'll go ahead and
reject the credential upon receiving that error, just to be safe and
avoid caching or saving a bad password.
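
A minimal sketch of the flow described above (assuming the certificate
credential is the cert_auth struct in http.c):

  if (results.curl_result == CURLE_OK)
          credential_approve(&cert_auth);  /* store the working password */
  else if (results.curl_result == CURLE_SSL_CERTPROBLEM)
          credential_reject(&cert_auth);   /* don't keep a possibly bad one */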

Signed-off-by: John Szakmeister <john@szakmeister.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-03-11 22:17:07 -08:00
Jonathan Tan
726b25a91b http: allow custom index-pack args
Currently, when fetching, packfiles referenced by URIs are run through
index-pack without any arguments other than --stdin and --keep, no
matter what arguments are used for the packfile that is inline in the
fetch response. As a preparation for ensuring that all packs (whether
inline or not) use the same index-pack arguments, teach the http
subsystem to allow custom index-pack arguments.

http-fetch has been updated to use the new API. For now, it passes
--keep alone instead of --keep with a process ID, but this is only
temporary because http-fetch itself will be taught to accept index-pack
parameters (instead of using a hardcoded constant) in a subsequent
commit.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-02-22 12:07:40 -08:00
Jeff King
f6d8942b1f strvec: fix indentation in renamed calls
Code which split an argv_array call across multiple lines, like:

  argv_array_pushl(&args, "one argument",
                   "another argument", "and more",
		   NULL);

was recently mechanically renamed to use strvec, which results in
mis-matched indentation like:

  strvec_pushl(&args, "one argument",
                   "another argument", "and more",
		   NULL);

Let's fix these up to align the arguments with the opening paren. I did
this manually by sifting through the results of:

  git jump grep 'strvec_.*,$'

and liberally applying my editor's auto-format. Most of the changes are
of the form shown above, though I also normalized a few that had
originally used a single-tab indentation (rather than our usual style of
aligning with the open paren). I also rewrapped a couple of obvious
cases (e.g., where previously too-long lines became short enough to fit
on one), but I wasn't aggressive about it. In cases broken to three or
more lines, the grouping of arguments is sometimes meaningful, and it
wasn't worth my time or reviewer time to ponder each case individually.
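
After the fix-up, the example above is aligned with the opening paren:

  strvec_pushl(&args, "one argument",
               "another argument", "and more",
               NULL);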

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-07-28 15:02:18 -07:00
Jeff King
ef8d7ac42a strvec: convert more callers away from argv_array name
We eventually want to drop the argv_array name and just use strvec
consistently. There's no particular reason we have to do it all at once,
or care about interactions between converted and unconverted bits.
Because of our preprocessor compat layer, the names are interchangeable
to the compiler (so even a definition and declaration using different
names is OK).

This patch converts remaining files from the first half of the alphabet,
to keep the diff to a manageable size.

The conversion was done purely mechanically with:

  git ls-files '*.c' '*.h' |
  xargs perl -i -pe '
    s/ARGV_ARRAY/STRVEC/g;
    s/argv_array/strvec/g;
  '

and then selectively staging files with "git add '[abcdefghjkl]*'".
We'll deal with any indentation/style fallouts separately.
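
On a typical call site (hypothetical example) the effect is purely a
rename:

  /* before */
  struct argv_array args = ARGV_ARRAY_INIT;
  argv_array_push(&args, "--verbose");
  argv_array_clear(&args);

  /* after the mechanical conversion */
  struct strvec args = STRVEC_INIT;
  strvec_push(&args, "--verbose");
  strvec_clear(&args);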

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-07-28 15:02:18 -07:00
Junio C Hamano
34e849b05a Merge branch 'jt/cdn-offload'
The "fetch/clone" protocol has been updated to allow the server to
instruct the clients to grab pre-packaged packfile(s) in addition
to the packed object data coming over the wire.

* jt/cdn-offload:
  upload-pack: fix a sparse '0 as NULL pointer' warning
  upload-pack: send part of packfile response as uri
  fetch-pack: support more than one pack lockfile
  upload-pack: refactor reading of pack-objects out
  Documentation: add Packfile URIs design doc
  Documentation: order protocol v2 sections
  http-fetch: support fetching packfiles by URL
  http-fetch: refactor into function
  http: refactor finish_http_pack_request()
  http: use --stdin when indexing dumb HTTP pack
2020-06-25 12:27:47 -07:00
Jonathan Tan
8d5d2a34df http-fetch: support fetching packfiles by URL
Teach http-fetch the ability to download packfiles directly, given a
URL, and to verify them.

The http_pack_request suite has been augmented with a function that
takes a URL directly. With this function, the hash is only used to
determine the name of the temporary file.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-06-10 18:06:34 -07:00
Jonathan Tan
eb05349247 http: refactor finish_http_pack_request()
finish_http_pack_request() does multiple tasks, including some
housekeeping on a struct packed_git - (1) closing its index, (2)
removing it from a list, and (3) installing it. These concerns are
independent of fetching a pack through HTTP: they are there only because
(1) the calling code opens the pack's index before deciding to fetch it,
(2) the calling code maintains a list of packfiles that can be fetched,
and (3) the calling code fetches it in order to make use of its objects
in the same process.

In preparation for a subsequent commit, which adds a feature that does
not need any of this housekeeping, remove (1), (2), and (3) from
finish_http_pack_request(). (2) and (3) are now done by a helper
function, and (1) is the responsibility of the caller (in this patch,
done closer to the point where the pack index is opened).

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-06-10 18:06:34 -07:00
Jonathan Tan
9cb3cab560 http: use --stdin when indexing dumb HTTP pack
When Git fetches a pack using dumb HTTP, it (among other things) invokes
index-pack on a ".pack.temp" packfile, specifying the filename as an
argument.

A future commit will require the aforementioned invocation of index-pack
to also generate a "keep" file. To use this, we either have to use
index-pack's naming convention (because --keep requires the pack's
filename to end with ".pack") or to pass the pack through stdin. Of the
two, it is simpler to pass the pack through stdin.

Thus, teach http to pass --stdin to index-pack. As a bonus, the code is
now simpler.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-06-10 18:06:33 -07:00
Jonathan Tan
827e7d4da4 http: redact all cookies, teach GIT_TRACE_REDACT=0
In trace output (when GIT_TRACE_CURL is true), redact the values of all
HTTP cookies by default. Now that auth headers (since the implementation
of GIT_TRACE_CURL in 74c682d3c6 ("http.c: implement the GIT_TRACE_CURL
environment variable", 2016-05-24)) and cookie values (since this
commit) are redacted by default in these traces, also allow the user to
inhibit these redactions through an environment variable.

Since values of all cookies are now redacted by default,
GIT_REDACT_COOKIES (which previously allowed users to select individual
cookies to redact) now has no effect.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-06-05 15:05:04 -07:00
Jonathan Tan
7167a62b9e http, imap-send: stop using CURLOPT_VERBOSE
Whenever GIT_CURL_VERBOSE is set, teach Git to behave as if
GIT_TRACE_CURL=1 and GIT_TRACE_CURL_NO_DATA=1 is set, instead of setting
CURLOPT_VERBOSE.

This is to prevent inadvertent revelation of sensitive data. In
particular, GIT_CURL_VERBOSE redacts neither the "Authorization" header
nor any cookies specified by GIT_REDACT_COOKIES.

Unifying the tracing mechanism also has the future benefit that any
improvements to the tracing mechanism will benefit both users of
GIT_CURL_VERBOSE and GIT_TRACE_CURL, and we do not need to remember to
implement any improvement twice.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-05-11 11:18:01 -07:00
Junio C Hamano
048abe1751 Git 2.26.2

Sync with 2.26.2
2020-04-19 22:05:56 -07:00
Jonathan Nieder
af6b65d45e Git 2.26.2
This merges up the security fix from v2.17.5.

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2020-04-19 16:32:24 -07:00
Jonathan Nieder
7397ca3373 Git 2.25.4
This merges up the security fix from v2.17.5.

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2020-04-19 16:31:07 -07:00
Jonathan Nieder
b86a4be245 Git 2.24.3
This merges up the security fix from v2.17.5.

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2020-04-19 16:30:34 -07:00
Jonathan Nieder
c9808fa014 Git 2.22.4
This merges up the security fix from v2.17.5.

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2020-04-19 16:30:19 -07:00
Jonathan Nieder
9206d27eb5 Git 2.21.3
This merges up the security fix from v2.17.5.

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2020-04-19 16:30:08 -07:00
Jonathan Nieder
041bc65923 Git 2.20.4
This merges up the security fix from v2.17.5.

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2020-04-19 16:28:57 -07:00
Jonathan Nieder
76b54ee9b9 Git 2.19.5
This merges up the security fix from v2.17.5.

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2020-04-19 16:26:41 -07:00
Jonathan Nieder
ba6f0905fd Git 2.18.4
This merges up the security fix from v2.17.5.

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2020-04-19 16:24:14 -07:00
Jeff King
24036686c4 credential: parse URL without host as empty host, not unset
We may feed a URL like "cert:///path/to/cert.pem" into the credential
machinery to get the key for a client-side certificate. That
credential has no hostname field, which is about to be disallowed (to
avoid confusion with protocols where a helper _would_ expect a
hostname).

This means as of the next patch, credential helpers won't work for
unlocking certs. Let's fix that by doing two things:

  - when we parse a url with an empty host, set the host field to the
    empty string (asking only to match stored entries with an empty
    host) rather than NULL (asking to match _any_ host).

  - when we build a cert:// credential by hand, similarly assign an
    empty string

It's the latter that is more likely to impact real users in practice,
since it's what's used for http connections. But we don't have good
infrastructure to test it.

The url-parsing version will help anybody using git-credential in a
script, and is easy to test.
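
The script-visible behavior can be sketched with the existing parsing
API (credential_from_url() and friends from credential.h):

  struct credential c = CREDENTIAL_INIT;
  credential_from_url(&c, "cert:///path/to/cert.pem");
  /* c.host is now "" (match only empty-host entries), not NULL */
  credential_clear(&c);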

Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2020-04-19 16:10:57 -07:00
Junio C Hamano
aaa625567a Merge branch 'js/https-proxy-config'
A handful of options to configure SSL when talking to proxies have
been added.

* js/https-proxy-config:
  http: add environment variable support for HTTPS proxies
  http: add client cert support for HTTPS proxies
2020-03-25 13:57:42 -07:00
Jorge Lopez Silva
af026519c9 http: add environment variable support for HTTPS proxies
Add four environment variables that can be used to configure the proxy
cert, the proxy SSL key, the proxy cert password-protected flag, and
the CA info for the proxy.

Documentation for the options was also updated.

Signed-off-by: Jorge Lopez Silva <jalopezsilva@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-03-05 12:26:14 -08:00
Jorge Lopez Silva
88238e02d5 http: add client cert support for HTTPS proxies
Git supports connecting to HTTPS proxies, but we don't support doing
mutual authentication with them (through TLS).

Add the necessary options to be able to send a client certificate to
the HTTPS proxy.

A client certificate can provide an alternative way of authentication
instead of using 'ProxyAuthorization' or other more common methods of
authentication.  Libcurl supports this functionality already, so changes
are somewhat minimal. The feature is guarded by the first available
libcurl version that supports these options.

Four configuration options are added and documented: the cert, the key,
the cert-password-protected flag, and the CA info. The CA info should be
used to specify a different CA path to validate the HTTPS proxy cert.
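
On the libcurl side this boils down to the proxy variants of the usual
TLS options (available since curl 7.52.0; variable names illustrative):

  curl_easy_setopt(curl, CURLOPT_PROXY_SSLCERT, proxy_cert_path);
  curl_easy_setopt(curl, CURLOPT_PROXY_SSLKEY, proxy_key_path);
  curl_easy_setopt(curl, CURLOPT_PROXY_CAINFO, proxy_ca_path);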

Signed-off-by: Jorge Lopez Silva <jalopezsilva@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-03-05 12:25:09 -08:00
René Scharfe
a91cc7fad0 strbuf: add and use strbuf_insertstr()
Add a function for inserting a C string into a strbuf.  Use it
throughout the source to get rid of magic string length constants and
explicit strlen() calls.

Like strbuf_addstr(), implement it as an inline function to keep the
implicit strlen() calls from adding runtime overhead.
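
Usage has this shape (hypothetical call site):

  /* before: magic length constant */
  strbuf_insert(&buf, 0, "refs/heads/", 11);

  /* after */
  strbuf_insertstr(&buf, 0, "refs/heads/");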

Helped-by: Taylor Blau <me@ttaylorr.com>
Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-02-10 09:04:45 -08:00
Junio C Hamano
bad5ed39cd Merge branch 'cb/curl-use-xmalloc'
HTTP transport had possible allocator/deallocator mismatch, which
has been corrected.

* cb/curl-use-xmalloc:
  remote-curl: unbreak http.extraHeader with custom allocators
2019-12-01 09:04:33 -08:00
Johannes Schindelin
4d17fd253f remote-curl: unbreak http.extraHeader with custom allocators
In 93b980e58f (http: use xmalloc with cURL, 2019-08-15), we started to
ask cURL to use `xmalloc()`, and if compiled with nedmalloc, that means
implicitly a different allocator than the system one.

Which means that all of cURL's allocations and releases now _need_ to
use that allocator.

However, the `http_options()` function used `slist_append()` to add any
configured extra HTTP header(s) _before_ asking cURL to use `xmalloc()`,
and `http_cleanup()` would release them _afterwards_, i.e. in the
presence of custom allocators, cURL would attempt to use the wrong
allocator to release the memory.

A naïve attempt at fixing this would move the call to
`curl_global_init()` _before_ the config is parsed (i.e. before that
call to `slist_append()`).

However, that does not work, as we _also_ parse the config setting
`http.sslbackend` and if found, call `curl_global_sslset()` which *must*
be called before `curl_global_init()`, for details see:
https://curl.haxx.se/libcurl/c/curl_global_sslset.html

So let's instead make the config parsing entirely independent from
cURL's data structures. Incidentally, this deletes two more lines than
it introduces, which is nice.
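
The ordering constraint described above looks like this (a simplified
sketch; the exact callback list is illustrative):

  curl_global_sslset(CURLSSLBACKEND_OPENSSL, NULL, NULL); /* must come first */
  curl_global_init_mem(CURL_GLOBAL_ALL, xmalloc, free,
                       xrealloc, xstrdup, xcalloc); /* cURL now uses xmalloc() */
  /* only after this point should curl_slist entries be allocated */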

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-11-07 14:38:32 +09:00
Junio C Hamano
f0fcab6deb Merge branch 'mh/http-urlmatch-cleanup'
Leakfix.

* mh/http-urlmatch-cleanup:
  http: don't leak urlmatch_config.vars
2019-09-30 13:19:24 +09:00
Mike Hommey
7e92756751 http: don't leak urlmatch_config.vars
Signed-off-by: Mike Hommey <mh@glandium.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-08-26 11:10:09 -07:00
Matthew DeVore
c2694952e3 strbuf: give URL-encoding API a char predicate fn
Allow callers to specify exactly what characters need to be URL-encoded
and which do not. This new API will be taken advantage of in a patch
later in this set.

Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Matthew DeVore <matvore@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-06-28 08:41:53 -07:00
Mike Hommey
5c3d5a3823 Make fread/fwrite-like functions in http.c more like fread/fwrite.
The fread/fwrite-like functions in http.c, namely fread_buffer,
fwrite_buffer, fwrite_null, and fwrite_sha1_file, all return the
product of the size and the number of items they are given.

Practically speaking, it doesn't matter, because in all contexts where
those functions are used, size is 1.

But since those functions are similar to fread and fwrite (the curl API
is designed around being able to use fread and fwrite directly), it
might be preferable to make them behave like fread and fwrite, which,
according to the fread/fwrite manual page, is:
   On  success, fread() and fwrite() return the number of items read
   or written.  This number equals the number of bytes transferred
   only when size is 1.  If an error occurs, or the end of the file
   is reached, the return value is a short item count (or zero).
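
After the change, a callback like fwrite_buffer has roughly this shape
(simplified sketch):

  size_t fwrite_buffer(char *ptr, size_t eltsize, size_t nmemb, void *buffer_)
  {
          struct strbuf *buffer = buffer_;

          strbuf_add(buffer, ptr, eltsize * nmemb);
          return nmemb;   /* item count, as fread/fwrite would return */
  }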

Signed-off-by: Mike Hommey <mh@glandium.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-05-08 12:15:25 +09:00
Junio C Hamano
d4e568b2a3 Merge branch 'bc/hash-transition-16'
Conversion from unsigned char[20] to struct object_id continues.

* bc/hash-transition-16: (35 commits)
  gitweb: make hash size independent
  Git.pm: make hash size independent
  read-cache: read data in a hash-independent way
  dir: make untracked cache extension hash size independent
  builtin/difftool: use parse_oid_hex
  refspec: make hash size independent
  archive: convert struct archiver_args to object_id
  builtin/get-tar-commit-id: make hash size independent
  get-tar-commit-id: parse comment record
  hash: add a function to lookup hash algorithm by length
  remote-curl: make hash size independent
  http: replace sha1_to_hex
  http: compute hash of downloaded objects using the_hash_algo
  http: replace hard-coded constant with the_hash_algo
  http-walker: replace sha1_to_hex
  http-push: remove remaining uses of sha1_to_hex
  http-backend: allow 64-character hex names
  http-push: convert to use the_hash_algo
  builtin/pull: make hash-size independent
  builtin/am: make hash size independent
  ...
2019-04-25 16:41:17 +09:00
Junio C Hamano
776f3e1fb7 Merge branch 'jk/server-info-rabbit-hole'
Code clean-up around a much-less-important-than-it-used-to-be
update_server_info() function.

* jk/server-info-rabbit-hole:
  update_info_refs(): drop unused force parameter
  server-info: drop objdirlen pointer arithmetic
  server-info: drop nr_alloc struct member
  server-info: use strbuf to read old info/packs file
  server-info: simplify cleanup in parse_pack_def()
  server-info: fix blind pointer arithmetic
  http: simplify parsing of remote objects/info/packs
  packfile: fix pack basename computation
  midx: check both pack and index names for containment
  t5319: drop useless --buffer from cat-file
  t5319: fix bogus cat-file argument
  pack-revindex: open index if necessary
  packfile.h: drop extern from function declarations
2019-04-25 16:41:13 +09:00
Jeff King
ddc56d4710 http: simplify parsing of remote objects/info/packs
We can use skip_prefix() and parse_oid_hex() to continuously increment
our pointer, rather than dealing with magic numbers. This also fixes a
few small shortcomings:

  - if we see a line with the right prefix, suffix, and length, i.e.
    matching /P pack-.{40}.pack\n/, we'll interpret the middle part as
    hex without checking if it could be parsed. This could lead to us
    looking at uninitialized garbage in the hash array. In practice this
    means we'll just make a garbage request to the server which will
    fail, though it's interesting that a malicious server could convince
    us to leak 40 bytes of uninitialized stack to them.

  - the current code is picky about seeing a newline at the end of file,
    but we can easily be more liberal
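
The resulting parsing idiom is roughly this (a sketch, not the exact
patch):

  struct object_id oid;
  const char *p;

  if (skip_prefix(line, "P pack-", &p) &&
      !parse_oid_hex(p, &oid, &p) &&
      starts_with(p, ".pack"))
          use_pack_hash(&oid);   /* hypothetical consumer */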

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-04-16 16:58:21 +09:00
brian m. carlson
05dfc7cac4 http: replace sha1_to_hex
Since sha1_to_hex is limited to SHA-1, replace it with hash_to_hex.
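
The conversion is a one-for-one replacement (illustrative call):

  /* before */ strbuf_addstr(&buf, sha1_to_hex(hash));
  /* after  */ strbuf_addstr(&buf, hash_to_hex(hash));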

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-04-01 11:57:39 +09:00
brian m. carlson
eed0e60f02 http: compute hash of downloaded objects using the_hash_algo
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-04-01 11:57:38 +09:00
brian m. carlson
ae041a0f9a http: replace hard-coded constant with the_hash_algo
Replace a hard-coded 40 with a reference to the_hash_algo.
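
The shape of the change (illustrative call site):

  /* before */ strbuf_add(&buf, hex, 40);
  /* after  */ strbuf_add(&buf, hex, the_hash_algo->hexsz);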

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-04-01 11:57:38 +09:00
brian m. carlson
538b152324 object-store: rename and expand packed_git's sha1 member
This member is used to represent the pack checksum of the pack in
question.  Expand this member to be GIT_MAX_RAWSZ bytes in length so it
works with longer hashes and rename it to be "hash" instead of "sha1".
This transformation was made with a change to the definition and the
following semantic patch:

@@
struct packed_git *E1;
@@
- E1->sha1
+ E1->hash

@@
struct packed_git E1;
@@
- E1.sha1
+ E1.hash

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-04-01 11:57:38 +09:00
Jeff King
a3722bcbbd http: factor out curl result code normalization
We make some requests with CURLOPT_FAILONERROR and some without, and
then handle_curl_result() normalizes any failures to a uniform CURLcode.

There are some other code paths in the dumb-http walker which don't use
handle_curl_result(); let's pull the normalization into its own function
so it can be reused.

Arguably those code paths would benefit from the rest of
handle_curl_result(), notably the auth handling. But retro-fitting it
now would be a lot of work, and in practice it doesn't matter too much
(whatever authentication we needed to make the initial contact with the
server is generally sufficient for the rest of the dumb-http requests).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-03-24 21:22:40 +09:00
Junio C Hamano
cba595ab1a Merge branch 'jk/loose-object-cache-oid'
Code clean-up.

* jk/loose-object-cache-oid:
  prefer "hash mismatch" to "sha1 mismatch"
  sha1-file: avoid "sha1 file" for generic use in messages
  sha1-file: prefer "loose object file" to "sha1 file" in messages
  sha1-file: drop has_sha1_file()
  convert has_sha1_file() callers to has_object_file()
  sha1-file: convert pass-through functions to object_id
  sha1-file: modernize loose header/stream functions
  sha1-file: modernize loose object file functions
  http: use struct object_id instead of bare sha1
  update comment references to sha1_object_info()
  sha1-file: fix outdated sha1 comment references
2019-02-06 22:05:27 -08:00
Junio C Hamano
99c0bdd09d Merge branch 'ms/http-no-more-failonerror'
Debugging help for http transport.

* ms/http-no-more-failonerror:
  test: test GIT_CURL_VERBOSE=1 shows an error
  remote-curl: unset CURLOPT_FAILONERROR
  remote-curl: define struct for CURLOPT_WRITEFUNCTION
  http: enable keep_error for HTTP requests
  http: support file handles for HTTP_KEEP_ERROR
2019-01-29 12:47:55 -08:00
Masaya Suzuki
e6cf87b12d http: enable keep_error for HTTP requests
curl stops parsing a response when it sees a bad HTTP status code and it
has CURLOPT_FAILONERROR set. This prevents GIT_CURL_VERBOSE from
showing HTTP headers on error.

keep_error is an option to receive the HTTP response body for those
error responses. With this option enabled, curl will process the HTTP
response headers, and they are shown if GIT_CURL_VERBOSE is set.

Signed-off-by: Masaya Suzuki <masayasuzuki@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-01-10 15:00:56 -08:00
Masaya Suzuki
8dd2e88a92 http: support file handles for HTTP_KEEP_ERROR
HTTP_KEEP_ERROR makes it easy to debug HTTP transport errors. In order
to enable HTTP_KEEP_ERROR for all requests, file handles need to be
supported.

Signed-off-by: Masaya Suzuki <masayasuzuki@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-01-10 15:00:56 -08:00