Commit Graph

88 Commits

Author SHA1 Message Date
Junio C Hamano
063cd850f2 Merge branch 'jk/use-perl-path-consistently'
Tests had a few places where we ignored PERL_PATH and blindly used
/usr/bin/perl, which have been corrected.

* jk/use-perl-path-consistently:
  t/lib-httpd: pass PERL_PATH to CGI scripts
2023-04-11 13:49:13 -07:00
Jeff King
c1917156a0 t/lib-httpd: pass PERL_PATH to CGI scripts
As discussed in t/README, tests should aim to use PERL_PATH rather than
straight "perl". We usually do this automatically with a "perl" function
in test-lib.sh, but a few cases need to be handled specially.

One such case is the apply-one-time-perl.sh CGI, which invokes plain
"perl". It should be using $PERL_PATH, but to make that work, we must
also instruct Apache to pass through the variable.
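
For illustration, a minimal sketch of the two pieces involved (the
config path and the CGI contents here are assumed, not quoted from the
patch):

  echo 'PassEnv PERL_PATH' >>apache.conf        # server side
  "${PERL_PATH:-perl}" -e 'print "hello\n"'     # in the CGI, instead of plain "perl"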

Prior to this patch, doing:

  mv /usr/bin/perl /usr/bin/my-perl
  make PERL_PATH=/usr/bin/my-perl test

would fail t5702, t5703, and t5616. After this it passes. This is a
pretty extreme case, as even if you install perl elsewhere, you'd likely
still have it in your $PATH. A more realistic case is that you don't
want to use the perl in your $PATH (because it's older, broken, etc) and
expect PERL_PATH to consistently override that (since that's what it's
documented to do). Removing it completely is just a convenient way of
completely breaking it for testing purposes.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-04-06 09:29:43 -07:00
Junio C Hamano
92c56da096 Merge branch 'mc/credential-helper-www-authenticate'
Allow information carried on the WWW-Authenticate header to be
passed to the credential helpers.

* mc/credential-helper-www-authenticate:
  credential: add WWW-Authenticate header to cred requests
  http: read HTTP WWW-Authenticate response headers
  t5563: add tests for basic and anonymous HTTP access
2023-03-17 14:03:10 -07:00
Junio C Hamano
b2893ea403 Merge branch 'jk/http-test-fixes'
Various fix-ups on HTTP tests.

* jk/http-test-fixes:
  t5559: make SSL/TLS the default
  t5559: fix test failures with LIB_HTTPD_SSL
  t/lib-httpd: enable HTTP/2 "h2" protocol, not just h2c
  t/lib-httpd: respect $HTTPD_PROTO in expect_askpass()
  t5551: drop curl trace lines without headers
  t5551: handle v2 protocol in cookie test
  t5551: simplify expected cookie file
  t5551: handle v2 protocol in upload-pack service test
  t5551: handle v2 protocol when checking curl trace
  t5551: stop forcing clone to run with v0 protocol
  t5551: handle HTTP/2 when checking curl trace
  t5551: lower-case headers in expected curl trace
  t5551: drop redundant grep for Accept-Language
  t5541: simplify and move "no empty path components" test
  t5541: stop marking "used receive-pack service" test as v0 only
  t5541: run "used receive-pack service" test earlier
2023-02-28 16:38:47 -08:00
Matthew John Cheetham
988aad99b4 t5563: add tests for basic and anonymous HTTP access
Add a test showing simple anonymous HTTP access to an unprotected
repository that results in no credential helper invocations.
Also add a test demonstrating simple basic authentication with
simple credential helper support.

Leverage a non-parsed headers (NPH) CGI script so that we can directly
control the HTTP responses to simulate a multitude of good, bad and ugly
remote server implementations around auth.
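
A minimal sketch of an NPH-style CGI of the kind described, which writes
the entire HTTP response itself, status line included (the response
content is illustrative, not taken from the actual test script):

  #!/bin/sh
  printf 'HTTP/1.1 401 Unauthorized\r\n'
  printf 'WWW-Authenticate: Basic realm="example"\r\n'
  printf 'Content-Length: 0\r\n'
  printf '\r\n'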

Signed-off-by: Matthew John Cheetham <mjcheetham@outlook.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-02-27 10:40:40 -08:00
Jeff King
3c14419c6b t/lib-httpd: enable HTTP/2 "h2" protocol, not just h2c
Commit 73c49a4474 (t: run t5551 tests with both HTTP and HTTP/2,
2022-11-11) added Apache config to enable HTTP/2. However, it only
enabled the "h2c" protocol, which allows cleartext HTTP/2 (generally
based on an upgrade header during an HTTP/1.1 request). This is what
t5559 is generally testing, since by default we don't set up SSL/TLS.

However, it should be possible to run t5559 with LIB_HTTPD_SSL set. In
that case, Apache will advertise support for HTTP/2 via ALPN during the
TLS handshake. But we need to tell it to support "h2" (the non-cleartext
version) to do so. Without that, curl does not even try to do the
HTTP/1.1 upgrade (presumably because after seeing that we did TLS but
didn't get the ALPN indicator, it assumes it would be fruitless).
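
A hedged sketch of the server-side directives involved (placement and
conditionals in the real test config are assumed; the config path is
illustrative):

  printf '%s\n' \
          'LoadModule http2_module modules/mod_http2.so' \
          'Protocols h2 h2c http/1.1' >>apache.conf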

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-02-23 13:01:17 -08:00
Jeff King
29ae2c9e74 add basic http proxy tests
We do not test our http proxy functionality at all in the test suite, so
this is a pretty big blind spot. Let's at least add a basic check that
we can go through an authenticating proxy to perform a clone.
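
A hedged sketch of the client-side shape of such a test (host, port,
path, and credentials are illustrative):

  http_proxy=http://proxuser:proxpass@127.0.0.1:8080 \
          git clone "$HTTPD_URL/smart/repo.git" clone-via-proxy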

A few notes on the implementation:

  - I'm using a single apache instance to proxy to itself. This seems to
    work fine in practice, and we can check with a test that this rather
    unusual setup is doing what we expect.

  - I've put the proxy tests into their own script, and it's the only
    one which loads the apache proxy config. If any platform can't
    handle this (e.g., doesn't have the right modules), the start_httpd
    step should fail and gracefully skip the rest of the script (but all
    the other http tests in existing scripts will continue to run).

  - I used a separate passwd file to make sure we don't ever get
    confused between proxy and regular auth credentials. It's using the
    antiquated crypt() format. This is a terrible choice security-wise
    in the modern age, but it's what our existing passwd file uses, and
    should be portable. It would probably be reasonable to switch both
    of these to bcrypt, but we can do that in a separate patch.

  - On the client side, we test two situations with credentials: when
    they are present in the url, and when the username is present but we
    prompt for the password. I think we should be able to handle the
    case that _neither_ is present, but an HTTP 407 causes us to prompt
    for them. However, this doesn't seem to work. That's either a bug,
    or at the very least an opportunity for a feature, but I punted on
    it for now. The point of this patch is just getting basic coverage,
    and we can explore possible deficiencies later.

  - this doesn't work with LIB_HTTPD_SSL. This probably would be
    valuable to have, as https over an http proxy is totally different
    (it uses CONNECT to tunnel the session). But adding in
    mod_proxy_connect and some basic config didn't seem to work for me,
    so I punted for now. Much of the rest of the test suite does not
    currently work with LIB_HTTPD_SSL either, so we shouldn't be making
    anything much worse here.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-02-16 16:24:23 -08:00
Jeff King
b08edf709d t/lib-httpd: increase ssl key size to 2048 bits
Recent versions of openssl will refuse to work with 1024-bit RSA keys,
as they are considered insecure. I didn't track down the exact version
in which the defaults were tightened, but the Debian-package openssl 3.0
on my system yields:

  $ LIB_HTTPD_SSL=1 ./t5551-http-fetch-smart.sh -v -i
  [...]
  SSL Library Error: error:0A00018F:SSL routines::ee key too small
  1..0 # SKIP web server setup failed

This could probably be overcome with configuration, but that's likely
to be a headache (especially if it requires touching /etc/openssl).
Let's just pick a key size that's less outrageously out of date.
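
For reference, a 2048-bit self-signed certificate can be generated with
something like this (not necessarily the exact command used for the
checked-in test certificate):

  openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
          -subj /CN=localhost -keyout httpd.key -out httpd.crt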

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-02-01 10:10:34 -08:00
Jeff King
d113449e26 t/lib-httpd: drop SSLMutex config
The SSL config enabled by setting LIB_HTTPD_SSL does not work with
Apache versions greater than 2.2, as more recent versions complain about
the SSLMutex directive. According to
https://httpd.apache.org/docs/current/upgrading.html:

  Directives AcceptMutex, LockFile, RewriteLock, SSLMutex,
  SSLStaplingMutex, and WatchdogMutexPath have been replaced with a
  single Mutex directive. You will need to evaluate any use of these
  removed directives in your 2.2 configuration to determine if they can
  just be deleted or will need to be replaced using Mutex.

Deleting this line will just use the system default, which seems
sensible. The original came as part of faa4bc35a0 (http-push: add
regression tests, 2008-02-27), but no specific reason is given there (or
on the mailing list) for its presence.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-02-01 10:10:34 -08:00
Jeff King
edd060dc84 t/lib-httpd: bump required apache version to 2.4
Apache 2.4 has been out since early 2012, almost 11 years. And its
predecessor, 2.2, has been out of support since its last release in
2017, over 5 years ago. The last mention on the mailing list was from
around the same time, in this thread:

  https://lore.kernel.org/git/20171231023234.21215-1-tmz@pobox.com/

We can probably assume that 2.4 is available everywhere. And the stakes
are fairly low, as the worst case is that such a platform would skip the
http tests.

This lets us clean up a few minor version checks in the config file, but
also revert f1f2b45be0 (tests: adjust the configuration for Apache 2.2,
2016-05-09). Its technique isn't _too_ bad, but certainly required a bit
more explanation than the 2.4 version it replaced. I manually confirmed
that the test in t5551 still behaves as expected (if you replace
"cadabra" with "foo", the server correctly rejects the request).

It will also help future patches which will no longer have to deal with
conditional config for this old version.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-02-01 10:10:34 -08:00
Jeff King
d762617079 t/lib-httpd: bump required apache version to 2.2
Apache 2.2 was released in 2005, almost 18 years ago. We can probably
assume that people are running a version at least that old (and the
stakes for removing it are fairly low, as the worst case is that they
would not run the http tests against their ancient version).

Dropping support for the older versions cleans up the config file a
little, and will also enable us to bump the required version further
(with more cleanups) in a future patch.

Note that the file actually checks for version 2.1. In apache's
versioning scheme, odd numbered versions are for development and even
numbers are for stable releases. So 2.1 and 2.2 are effectively the same
from our perspective.

Older versions would just fail to start, which would generally cause us
to skip the tests. However, we do have version detection code in
lib-httpd.sh which produces a nicer error message, so let's update that,
too. I didn't bother handling the case of "3.0", etc. Apache has been on
2.x for 21 years, with no signs of bumping the major version.  And if
they eventually do, I suspect there will be enough breaking changes that
we'd need to update more than just the numeric version check. We can
worry about that hypothetical when it happens.
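
A rough sketch of the kind of version check described (the variable
names and parsing in lib-httpd.sh are assumed):

  HTTPD_VERSION=$("$LIB_HTTPD_PATH" -v |
          sed -n 's|^Server version: Apache/\([0-9]*\.[0-9]*\).*|\1|p')
  case "$HTTPD_VERSION" in
  2.[2-9]|2.[1-9][0-9])
          ;; # recent enough
  *)
          echo >&2 "apache version $HTTPD_VERSION is too old (>= 2.2 required)"
          ;;
  esac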

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-02-01 10:10:34 -08:00
Jeff King
73c49a4474 t: run t5551 tests with both HTTP and HTTP/2
We have occasionally seen bugs that affect Git running only against an
HTTP/2 web server, not an HTTP one. For instance, b66c77a64e (http:
match headers case-insensitively when redacting, 2021-09-22). But since
we have no test coverage using HTTP/2, we only uncover these bugs in the
wild.

That commit gives a recipe for converting our Apache setup to support
HTTP/2, but:

  - it's not necessarily portable

  - we don't want to just test HTTP/2; we really want to do a variety of
    basic tests for _both_ protocols

This patch handles both problems by running a duplicate of t5551
(labeled as t5559 here) with an alternate-universe setup that enables
HTTP/2. So we'll continue to run t5551 as before, but run the same
battery of tests again with HTTP/2. If HTTP/2 isn't supported on a given
platform, then t5559 should bail during the webserver setup, and
gracefully skip all tests (unless GIT_TEST_HTTPD has been changed from
"auto" to "yes", where the point is to complain when webserver setup
fails).
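
A hedged sketch of how such a duplicate script sets itself apart, using
the enable_http2 helper discussed in the notes below (helper internals
not shown):

  . ./test-lib.sh
  . "$TEST_DIRECTORY"/lib-httpd.sh
  enable_http2
  start_httpd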

In theory other http-related test scripts could benefit from the same
duplication, but doing t5551 should give us a reasonable check of basic
functionality, and would have caught both bugs we've seen in the wild
with HTTP/2.

A few notes on the implementation:

  - a script enables the server side config by calling enable_http2
    before starting the webserver. This avoids even trying to load any
    HTTP/2 config for t5551 (which is what lets it keep working with
    regular HTTP even on systems that don't support it). This also sets
    a prereq which can be used by individual tests.

  - As discussed in b66c77a64e, the http2 module isn't compatible with
    the "prefork" mpm, so we need to pick something else. I chose
    "event" here, which works on my Debian system, but it's possible
    there are platforms which would prefer something else. We can adjust
    that later if somebody finds such a platform.

  - The test "large fetch-pack requests can be sent using chunked
    encoding" makes sure we use a chunked transfer-encoding by looking
    for that header in the trace. But since HTTP/2 has its own streaming
    mechanisms, we won't find such a header. We could skip the test
    entirely by marking it with !HTTP2. But there's some value in making
    sure that the fetch itself succeeded. So instead, we'll confirm that
    either we're using HTTP2 _or_ we saw the expected chunked header.

  - the redaction tests fail under HTTP/2 with recent versions of curl.
    This is a bug! I've marked them with !HTTP2 here to skip them under
    t5559 for the moment. Using test_expect_failure would be more
    appropriate, but would require a bunch of boilerplate. Since we'll
    be fixing them momentarily, let's just skip them for now to keep the
    test suite bisectable, and we can re-enable them in the commit that
    fixes the bug.

  - one alternative layout would be to push most of t5551 into a
    lib-t5551.sh script, then source it from both t5551 and t5559.
    Keeping t5551 intact seemed a little simpler, as it's one less level
    of indirection for people fixing bugs/regressions in the non-HTTP/2
    tests.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-14 17:42:46 -05:00
René Scharfe
7a2d8ea47e t/lib-httpd: pass LANG and LC_ALL to Apache
t5411 starts a web server with no explicit language setting, so it uses
the system default.  Ten of its tests expect it to return error messages
containing the prefix "fatal: ", emitted by die().  This prefix can be
localized since a1fd2cf8cd (i18n: mark message helpers prefix for
translation, 2022-06-21), however.  As a result these ten tests break
for me on a system with LANG="de_DE.UTF-8" because the web server sends
localized messages with "Schwerwiegend: " instead of "fatal: ".

Fix these tests by passing LANG and LC_ALL to the web server, which are
set to "C" by t/test-lib.sh, to get untranslated messages on both sides.

Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-10-06 09:16:26 -07:00
Junio C Hamano
735907bde1 Merge branch 'jk/http-push-status-fix'
"git push" client talking to an HTTP server did not diagnose the
lack of the final status report from the other side correctly,
which has been corrected.

* jk/http-push-status-fix:
  transport-helper: recognize "expecting report" error from send-pack
  send-pack: complain about "expecting report" with --helper-status
2021-10-29 15:43:12 -07:00
Jeff King
e4c9538a9c send-pack: complain about "expecting report" with --helper-status
When pushing to a server which erroneously omits the final ref-status
report, the client side should complain about the refs for which we
didn't receive the status (because we can't just assume they were
updated). This works over most transports like ssh, but for http we'll
print a very misleading "Everything up-to-date".

It works for ssh because send-pack internally sets the status of each
ref to REF_STATUS_EXPECTING_REPORT, and then if the server doesn't tell
us about a particular ref, it will stay at that value. When we print the
final status table, we'll see that we're still on EXPECTING_REPORT and
complain then.

But for http, we go through remote-curl, which invokes send-pack with
"--stateless-rpc --helper-status". The latter option causes send-pack to
return a machine-readable list of ref statuses to the remote helper. But
ever since its inception in de1a2fdd38 (Smart push over HTTP: client
side, 2009-10-30), the send-pack code has simply omitted mention of any
ref which ended up in EXPECTING_REPORT.

In the remote helper, we then take the absence of any status report
from send-pack to mean that the ref was not even something we tried to
send, and thus it prints "Everything up-to-date". Fortunately it does
detect the eventual non-zero exit from send-pack, and propagates that in
its own non-zero exit code. So at least a careful script invoking "git
push" would notice the failure.  But sending the misleading message on
stderr is certainly confusing for humans (not to mention the
machine-readable "push --porcelain" output, though again, any careful
script should be checking the exit code from push, too).

Nobody seems to have noticed because the server in this instance has to
be misbehaving: it has promised to support the ref-status capability
(otherwise the client will not set EXPECTING_REPORT at all), but didn't
send us any. If the connection were simply cut, then send-pack would
complain about getting EOF while trying to read the status. But if the
server actually sends a flush packet (i.e., saying "now you have all of
the ref statuses" without actually sending any), then the client ends up
in this confused situation.

The fix is simple: we should return an error message from "send-pack
--helper-status", just like we would for any other error per-ref error
condition (in the test I included, the server simply omits all ref
status responses, but a more insidious version of this would skip only
some of them).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-10-18 13:26:52 -07:00
Jeff King
ff6a37c99e http-backend: handle HTTP_GIT_PROTOCOL CGI variable
When a client requests the v2 protocol over HTTP, they set the
Git-Protocol header. Webservers will generally make that available to
our CGI as HTTP_GIT_PROTOCOL in the environment. However, that's not
sufficient for upload-pack, etc, to respect it; they look in
GIT_PROTOCOL (without the HTTP_ prefix).

Either the webserver or the CGI is responsible for relaying that HTTP
header into the GIT_PROTOCOL variable. Traditionally, our tests have
configured the webserver to do so, but that's a burden on the server
admin. We can make this work out of the box by having the http-backend
CGI copy the contents of HTTP_GIT_PROTOCOL to GIT_PROTOCOL.

There are no new tests here. By removing the SetEnvIf line from our
test Apache config, we're now relying on this behavior of http-backend
to trigger the v2 protocol there (and there are numerous tests that fail
if this doesn't work).

There is one subtlety here: we copy HTTP_GIT_PROTOCOL only if there is
no existing GIT_PROTOCOL variable. That leaves the webserver admin free
to override the client's decision if they choose. This is unlikely to be
useful in practice, but is more flexible. And indeed, it allows the
v2-to-v0 fallback test added in the previous commit to continue working.
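
The fallback described above, expressed as an equivalent shell snippet
(http-backend itself implements this in C):

  if test -z "$GIT_PROTOCOL" && test -n "$HTTP_GIT_PROTOCOL"
  then
          GIT_PROTOCOL=$HTTP_GIT_PROTOCOL
          export GIT_PROTOCOL
  fi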

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-09-10 15:34:59 -07:00
Jeff King
26146980f1 t5551: test v2-to-v0 http protocol fallback
Since we use the v2 protocol by default, the connection of a v2 client
to a v2 server is well covered by the test suite. And with the
GIT_TEST_PROTOCOL_VERSION knob, we can easily test a v0 client
connecting to a v2-aware server (which will then just speak v0). But we
have no regular tests that a v2 client, when encountering a non-v2-aware
server, will correctly fall back to using v0.

In theory this is a job for the cross-version tests in t/interop, but:

  - they cover only git:// and file:// clones

  - they are not part of the usual test suite, so nobody ever runs them
    anyway

Since using v2 over http requires configuring the web server to pass
along the Git-Protocol header, we can easily create a situation where
the server does not respect the v2 probe, and the conversation falls
back to v0.

This works just fine. This new test is not about fixing any particular
bug, but just making sure that the system works (and continues to work)
as expected.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-09-10 15:34:58 -07:00
Denton Liu
74b082ad34 remote-curl: error on incomplete packet
Currently, remote-curl acts as a proxy and blindly forwards packets
between an HTTP server and fetch-pack. In the case of a stateless RPC
connection where the connection is terminated with a partially written
packet, remote-curl will blindly send the partially written packet
before waiting on more input from fetch-pack. Meanwhile, fetch-pack will
read the partial packet and continue reading, expecting more input. This
results in a deadlock between the two processes.

For a stateless connection, inspect packets before sending them and
error out if a pkt-line packet is incomplete.
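
As a refresher, a pkt-line is four hex digits giving the total length
(header included) followed by the payload, so an incomplete packet looks
like this:

  printf '0009done\n'   # complete: 0x9 = 4 header bytes + 5 payload bytes
  printf '0032want '    # incomplete: header promises 0x32 bytes, payload cut short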

Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-05-24 16:26:00 -07:00
Johannes Schindelin
eafff6e41e t/lib-httpd: avoid using macOS' sed
Among other differences relative to GNU sed, macOS' sed always ends its
output with a trailing newline, even if the input did not have such a
trailing newline.

Surprisingly, this makes three httpd-based tests fail on macOS: t5616,
t5702 and t5703. ("Surprisingly" because those tests have been around
for some time, but apparently nobody runs them on macOS with a working
Apache2 setup.)

The reason is that we use `sed` in those tests to filter the response of
the web server. Apart from the fact that we use GNU constructs (such as
using a space after the `c` command instead of a backslash and a
newline), we have another problem: macOS' sed emits LF-only newlines
while webservers are supposed to use CR/LF ones.

Even worse, t5616 uses `sed` to replace a binary part of the response
with a new binary part (kind of hoping that the replaced binary part
does not contain a 0x0a byte which would be interpreted as a newline).

To that end, it calls on Perl to read the binary pack file and
hex-encode it, then calls on `sed` to prefix every hex digit pair with a
`\x` in order to construct the text that the `c` statement of the `sed`
invocation is supposed to insert. So we call Perl and sed to construct a
sed statement. The final nail in the coffin is that macOS' sed does not
even interpret those `\x<hex>` constructs.

Let's just replace all of that by Perl snippets. With Perl, at least, we
do not have to deal with GNU vs macOS semantics, we do not have to worry
about unwanted trailing newlines, and we do not have to spawn commands
to construct arguments for other commands to be spawned (i.e. we can
avoid a whole lot of shell scripting complexity).

The upshot is that this fixes t5616, t5702 and t5703 on macOS with
Apache2.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-02-27 09:58:41 -08:00
Denton Liu
77a946be98 apply-one-time-sed.sh: modernize style
Convert `[ ... ]` to use `test` and test for the existence of a regular
file (`-f`) instead of any file (`-e`).

Move the `then`s onto their own lines so that it conforms with the
general test style.

Instead of redirecting input into sed, allow it to open its own input.

Use `cmp -s` instead of `diff` since we only care about whether the two
files are equal and `diff` is overkill for this.
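
An illustrative before/after of some of the changes described (file
names and surrounding logic are simplified):

  # before
  if [ -e one-time-sed ]; then
          diff expect actual >/dev/null
  fi
  # after
  if test -f one-time-sed
  then
          cmp -s expect actual
  fi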

Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-11-29 13:20:14 -08:00
Jeff King
d9ef573837 t/lib-httpd: pass LSAN_OPTIONS through apache
Just as we instruct Apache to pass through ASAN_OPTIONS (so that
server-side Git programs it spawns will respect our options while
running the tests), we should do the same with LSAN_OPTIONS. Otherwise
trying to collect a list of leaks like:

    export LSAN_OPTIONS=exitcode=0:log_path=/tmp/lsan
    make SANITIZE=leak test

won't work for http tests (the server-side programs won't log their
leaks to the right place, and they'll prematurely die, producing a
spurious test failure).

Signed-off-by: Jeff King <peff@peff.net>
Acked-by: Josh Steadmon <steadmon@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-05-08 13:21:23 +09:00
Josh Steadmon
30dea56536 t5551: test server-side ERR packet
When a smart HTTP server sends an error message via pkt-line, we detect
the error due to using PACKET_READ_DIE_ON_ERR_PACKET. This case was
added by 2d103c31c2 (pack-protocol.txt: accept error packets in any
context, 2018-12-29), but not covered by tests.
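
An ERR packet is an ordinary pkt-line whose payload begins with "ERR ";
for example (message text illustrative):

  msg='ERR upload-pack: denied'
  printf '%04x%s' $((4 + ${#msg})) "$msg"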

Signed-off-by: Josh Steadmon <steadmon@google.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-02-06 12:20:23 -08:00
Junio C Hamano
5f8b86db94 Merge branch 'jt/fetch-v2-sideband'
"git fetch" and "git upload-pack" learned to send all exchange over
the sideband channel while talking the v2 protocol.

* jt/fetch-v2-sideband:
  tests: define GIT_TEST_SIDEBAND_ALL
  {fetch,upload}-pack: sideband v2 fetch response
  sideband: reverse its dependency on pkt-line
  pkt-line: introduce struct packet_writer
  pack-protocol.txt: accept error packets in any context
  Use packet_reader instead of packet_read_line
2019-02-05 14:26:11 -08:00
Jonathan Tan
07c3c2aa16 tests: define GIT_TEST_SIDEBAND_ALL
Define a GIT_TEST_SIDEBAND_ALL environment variable meant to be used
from tests. When set to true, this overrides uploadpack.allowsidebandall
to true, allowing the entire test suite to be run as if this
configuration is in place for all repositories.

As of this patch, all tests pass whether GIT_TEST_SIDEBAND_ALL is unset
or set to 1.
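
For example, the whole suite can be run with the override in effect:

  GIT_TEST_SIDEBAND_ALL=1 make test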

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-01-17 11:25:07 -08:00
Masaya Suzuki
e4871cd50c test: test GIT_CURL_VERBOSE=1 shows an error
This tests that GIT_CURL_VERBOSE shows an error when a URL returns 500.
This exercises the code in remote-curl.

Signed-off-by: Masaya Suzuki <masayasuzuki@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-01-10 15:00:56 -08:00
Brandon Williams
3374292e55 upload-pack: test negotiation with changing repository
Add tests to check the behavior of fetching from a repository which
changes between rounds of negotiation (for example, when different
servers in a load-balancing agreement participate in the same stateless
RPC negotiation). This forms a baseline of comparison to the ref-in-want
functionality (which will be introduced to the client in subsequent
commits), and ensures that subsequent commits do not change existing
behavior.

As part of this effort, a mechanism to substitute strings in a single
HTTP response is added.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-06-28 09:33:29 -07:00
Todd Zullinger
a812952aab http: fix v1 protocol tests with apache httpd < 2.4
The apache config used by tests was updated to use the SetEnvIf
directive to relay the Git-Protocol header in 19113a26b6 ("http: tell
server that the client understands v1", 2017-10-16).

Setting the Git-Protocol header is restricted to httpd >= 2.4, but
mod_setenvif and the SetEnvIf directive work with lower versions, at
least as far back as 2.0, according to the httpd documentation:

    https://httpd.apache.org/docs/2.0/mod/mod_setenvif.html

Drop the restriction.  Tested with httpd 2.2 and 2.4.

Signed-off-by: Todd Zullinger <tmz@pobox.com>
Acked-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-01-04 10:45:57 -08:00
Brandon Williams
19113a26b6 http: tell server that the client understands v1
Tell a server that protocol v1 can be used by sending the http header
'Git-Protocol' with 'version=1' indicating this.

Also teach the apache http server to pass through the 'Git-Protocol'
header as an environment variable 'GIT_PROTOCOL'.
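
A hedged sketch of the httpd side of this (the exact line in the test
config may differ; the config path is illustrative):

  echo 'SetEnvIf Git-Protocol ".*" GIT_PROTOCOL=$0' >>apache.conf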

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-17 10:51:29 +09:00
Jonathan Tan
8e27391a5f http: attempt updating base URL only if no error
http.c supports HTTP redirects of the form

  http://foo/info/refs?service=git-upload-pack
  -> http://anything
  -> http://bar/info/refs?service=git-upload-pack

(that is to say, as long as the Git part of the path and the query
string is preserved in the final redirect destination, the intermediate
steps can have any URL). However, if one of the intermediate steps
results in an HTTP exception, a confusing "unable to update url base
from redirection" message is printed instead of a Curl error message
with the HTTP exception code.

This was introduced by 2 commits. Commit c93c92f ("http: update base
URLs when we see redirects", 2013-09-28) introduced a best-effort
optimization that required checking if only the "base" part of the URL
differed between the initial request and the final redirect destination,
but it performed the check before any HTTP status checking was done. If
something went wrong, the normal code path was still followed, so this
did not cause any confusing error messages until commit 6628eb4 ("http:
always update the base URL for redirects", 2016-12-06), which taught
http to die if the non-"base" part of the URL differed.

Therefore, teach http to check the HTTP status before attempting to
check if only the "base" part of the URL differed. This commit teaches
http_request_reauth to return early without updating options->base_url
upon an error; the only invoker of this function that passes a non-NULL
"options" is remote-curl.c (through "http_get_strbuf"), which only uses
options->base_url for an informational message in the situations that
this commit cares about (that is, when the return value is not HTTP_OK).

The included test checks that the redirect scheme at the beginning of
this commit message works, and that returning a 502 in the middle of the
redirect scheme produces the correct result. Note that this is different
from the test in commit 6628eb4 ("http: always update the base URL for
redirects", 2016-12-06) in that this commit tests that a Git-shaped URL
(http://.../info/refs?service=git-upload-pack) works, whereas commit
6628eb4 tests that a non-Git-shaped URL
(http://.../info/refs/foo?service=git-upload-pack) does not work (even
though Git is processing that URL) and is an error that is fatal, not
silently swallowed.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-02-28 11:35:53 -08:00
Junio C Hamano
8a2882f23e Merge branch 'jk/http-walker-limit-redirect-2.9'
Transport with dumb http can be fooled into following foreign URLs
that the end user does not intend to, especially with the server
side redirects and http-alternates mechanism, which can lead to
security issues.  Tighten the redirection and make it more obvious
to the end user when it happens.

* jk/http-walker-limit-redirect-2.9:
  http: treat http-alternates like redirects
  http: make redirects more obvious
  remote-curl: rename shadowed options variable
  http: always update the base URL for redirects
  http: simplify update_url_from_redirect
2016-12-19 14:45:32 -08:00
Jeff King
50d3413740 http: make redirects more obvious
We instruct curl to always follow HTTP redirects. This is
convenient, but it creates opportunities for malicious
servers to create confusing situations. For instance,
imagine Alice is a git user with access to a private
repository on Bob's server. Mallory runs her own server and
wants to access objects from Bob's repository.

Mallory may try a few tricks that involve asking Alice to
clone from her, build on top, and then push the result:

  1. Mallory may simply redirect all fetch requests to Bob's
     server. Git will transparently follow those redirects
     and fetch Bob's history, which Alice may believe she
     got from Mallory. The subsequent push seems like it is
     just feeding Mallory back her own objects, but is
     actually leaking Bob's objects. There is nothing in
     git's output to indicate that Bob's repository was
     involved at all.

     The downside (for Mallory) of this attack is that Alice
     will have received Bob's entire repository, and is
     likely to notice that when building on top of it.

  2. If Mallory happens to know the sha1 of some object X in
     Bob's repository, she can instead build her own history
     that references that object. She then runs a dumb http
     server, and Alice's client will fetch each object
     individually. When it asks for X, Mallory redirects her
     to Bob's server. The end result is that Alice obtains
     objects from Bob, but they may be buried deep in
     history. Alice is less likely to notice.

Both of these attacks are fairly hard to pull off. There's a
social component in getting Mallory to convince Alice to
work with her. Alice may be prompted for credentials in
accessing Bob's repository (but not always, if she is using
a credential helper that caches). Attack (1) requires a
certain amount of obliviousness on Alice's part while making
a new commit. Attack (2) requires that Mallory knows a sha1
in Bob's repository, that Bob's server supports dumb http,
and that the object in question is loose on Bob's server.

But we can probably make things a bit more obvious without
any loss of functionality. This patch does two things to
that end.

First, when we encounter a whole-repo redirect during the
initial ref discovery, we now inform the user on stderr,
making attack (1) much more obvious.

Second, the decision to follow redirects is now
configurable. The truly paranoid can set the new
http.followRedirects to false to avoid any redirection
entirely. But for a more practical default, we will disallow
redirects only after the initial ref discovery. This is
enough to thwart attacks similar to (2), while still
allowing the common use of redirects at the repository
level. Since c93c92f30 (http: update base URLs when we see
redirects, 2013-09-28) we re-root all further requests from
the redirect destination, which should generally mean that
no further redirection is necessary.

As an escape hatch, in case there really is a server that
needs to redirect individual requests, the user can set
http.followRedirects to "true" (and this can be done on a
per-server basis via http.*.followRedirects config).
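
The configuration described above looks like this in practice (the URL
is illustrative):

  git config --global http.followRedirects false
  git config --global http.https://trusted.example.com.followRedirects true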

Reported-by: Jann Horn <jannh@google.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-12-06 12:32:48 -08:00
Jeff King
6628eb41db http: always update the base URL for redirects
If a malicious server redirects the initial ref
advertisement, it may be able to leak sha1s from other,
unrelated servers that the client has access to. For
example, imagine that Alice is a git user, she has access to
a private repository on a server hosted by Bob, and Mallory
runs a malicious server and wants to find out about Bob's
private repository.

Mallory asks Alice to clone an unrelated repository from her
over HTTP. When Alice's client contacts Mallory's server for
the initial ref advertisement, the server issues an HTTP
redirect for Bob's server. Alice contacts Bob's server and
gets the ref advertisement for the private repository. If
there is anything to fetch, she then follows up by asking
the server for one or more sha1 objects. But who is the
server?

If it is still Mallory's server, then Alice will leak the
existence of those sha1s to her.

Since commit c93c92f30 (http: update base URLs when we see
redirects, 2013-09-28), the client usually rewrites the base
URL such that all further requests will go to Bob's server.
But this is done by textually matching the URL. If we were
originally looking for "http://mallory/repo.git/info/refs",
and we got pointed at "http://bob/other.git/info/refs", then
we know that the right root is "http://bob/other.git".

If the redirect appears to change more than just the root,
we punt and continue to use the original server. E.g.,
imagine the redirect adds a URL component that Bob's server
will ignore, like "http://bob/other.git/info/refs?dummy=1".

We can solve this by aborting in this case rather than
silently continuing to use Mallory's server. In addition to
protecting from sha1 leakage, it's arguably safer and more
sane to refuse a confusing redirect like that in general.
For example, part of the motivation in c93c92f30 is
avoiding accidentally sending credentials over clear http,
just to get a response that says "try again over https". So
even in a non-malicious case, we'd prefer to err on the side
of caution.

The downside is that it's possible this will break a
legitimate but complicated server-side redirection scheme.
The setup given in the newly added test does work, but it's
convoluted enough that we don't need to care about it. A
more plausible case would be a server which redirects a
request for "info/refs?service=git-upload-pack" to just
"info/refs" (because it does not do smart HTTP, and for some
reason really dislikes query parameters).  Right now we
would transparently downgrade to dumb-http, but with this
patch, we'd complain (and the user would have to set
GIT_SMART_HTTP=0 to fetch).

Reported-by: Jann Horn <jannh@google.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-12-06 12:32:48 -08:00
Junio C Hamano
612c3dfb06 Merge branch 'ew/git-svn-http-tests'
Tests for "git svn" have been taught to reuse the lib-httpd test
infrastructure when testing the subversion integration that
interacts with subversion repositories served over the http://
protocol.

* ew/git-svn-http-tests:
  git svn: migrate tests to use lib-httpd
  t/t91*: do not say how to avoid the tests
2016-08-08 14:48:34 -07:00
Eric Wong
a8a5d25118 git svn: migrate tests to use lib-httpd
This allows us to use common test infrastructure and parallelize
the tests.  For now, GIT_SVN_TEST_HTTPD=true needs to be set to
enable the SVN HTTP tests because we reuse the same test cases
for both file:// and http:// SVN repositories.  SVN_HTTPD_PORT
is no longer honored.

Tested under Apache 2.2 and 2.4 on Debian 7.x (wheezy) and
8.x (jessie), respectively.

Cc: Clemens Buchacher <drizzd@aon.at>
Cc: Michael J Gruber <git@drmicha.warpmail.net>
Signed-off-by: Eric Wong <e@80x24.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-07-25 10:42:34 -07:00
Junio C Hamano
848b99b14e Merge branch 'js/http-custom-headers'
Update tests for "http.extraHeaders=<header>" to be portable back
to Apache 2.2 (the original depended on <RequireAll>, which is a
more recent feature).

* js/http-custom-headers:
  submodule: ensure that -c http.extraheader is heeded
  t5551: make the test for extra HTTP headers more robust
  tests: adjust the configuration for Apache 2.2
2016-05-17 14:38:35 -07:00
Johannes Schindelin
f1f2b45be0 tests: adjust the configuration for Apache 2.2
Lars Schneider noticed that the configuration introduced to test the
extra HTTP headers cannot be used with Apache 2.2 (which is still
actively maintained, as pointed out by Junio Hamano).

To let the tests pass with Apache 2.2 again, let's replace the
offending <RequireAll> and `expr` with old-school RewriteCond
statements.

As RewriteCond does not allow testing for *non*-matches, we simply match
the desired case first and let it pass by marking the RewriteRule as
'[L]' ("last rule, do not process any other matching RewriteRules after
this"), and then have another RewriteRule that matches all other cases
and lets them fail via '[F]' ("fail").
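
An illustrative shape of the rewrite rules described (header name,
value, URL pattern, and config path are made up, not the actual test
config):

  printf '%s\n' \
          'RewriteCond %{HTTP:x-magic-header} =cadabra' \
          'RewriteRule ^/smart_headers/ - [L]' \
          'RewriteRule ^/smart_headers/ - [F]' >>apache.conf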

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Tested-by: Lars Schneider <larsxschneider@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-05-10 10:27:42 -07:00
Junio C Hamano
e250f495b2 Merge branch 'js/http-custom-headers'
HTTP transport clients learned to throw extra HTTP headers at the
server, specified via http.extraHeader configuration variable.

* js/http-custom-headers:
  http: support sending custom HTTP headers
2016-05-06 14:45:43 -07:00
Johannes Schindelin
8cb01e2fd3 http: support sending custom HTTP headers
We introduce a way to send custom HTTP headers with all requests.

This allows us, for example, to send an extra token from build agents
for temporary access to private repositories. (This is the use case that
triggered this patch.)

This feature can be used like this:

	git -c http.extraheader='Secret: sssh!' fetch $URL $REF

Note that `curl_easy_setopt(..., CURLOPT_HTTPHEADER, ...)` takes only
a single list, overriding any previous call. This means we have to
collect _all_ of the headers we want to use into a single list, and
feed it to cURL in one shot. Since we already unconditionally set a
"pragma" header when initializing the curl handles, we can add our new
headers to that list.

For callers which override the default header list (like probe_rpc),
we provide `http_copy_default_headers()` so they can do the same
trick.

Big thanks to Jeff King and Junio Hamano for their outstanding help and
patient reviews.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-04-27 14:02:33 -07:00
Junio C Hamano
b5d7308a80 Merge branch 'jk/test-httpd-config-nosystem' into maint
The tests that involve running httpd leaked the system-wide
configuration in /etc/gitconfig to the tested environment.

* jk/test-httpd-config-nosystem:
  t/lib-httpd: pass through GIT_CONFIG_NOSYSTEM env
2016-04-14 18:37:17 -07:00
Junio C Hamano
3e95e47c5d Merge branch 'jk/test-httpd-config-nosystem'
The tests that involve running httpd leaked the system-wide
configuration in /etc/gitconfig to the tested environment.

* jk/test-httpd-config-nosystem:
  t/lib-httpd: pass through GIT_CONFIG_NOSYSTEM env
2016-04-06 11:39:08 -07:00
Jeff King
1fad5033ad t/lib-httpd: pass through GIT_CONFIG_NOSYSTEM env
We set GIT_CONFIG_NOSYSTEM in our test scripts so that we do
not accidentally read /etc/gitconfig and have it influence
the outcome of the tests. But when running smart-http tests,
Apache will clean the environment, including this variable,
and the "server" side of our http operations will read it.

You can see this breakage by doing something like:

  make
  ./git config --system http.getanyfile false
  make test

which will cause t5561 to fail when it tests the
fallback-to-dumb operation.

We can fix this by instructing Apache to pass through the
variable. Unlike with other variables (e.g., 89c57ab3's
GIT_TRACE), we don't need to set a dummy value to prevent
warnings from Apache. test-lib.sh already makes sure that
GIT_CONFIG_NOSYSTEM is set and exported.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-03-18 15:37:58 -07:00
Michael J Gruber
59223223f4 t/lib-httpd: load mod_unixd
In contrast to apache 2.2, apache 2.4 does not load mod_unixd in its
default configuration (because there are choices). Thus, with the
current config, apache 2.4.10 will not be started and the httpd tests
will not run on distros with default apache config (RedHat type).

Enable mod_unixd to make the httpd tests run. This does not affect
distros negatively which have that config already in their default
(Debian type). httpd tests will run on these before and after this patch.
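
The directive in question looks like this (the module path varies by
distribution, and the real test config may guard it with conditionals):

  echo 'LoadModule unixd_module modules/mod_unixd.so' >>apache.conf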

Signed-off-by: Michael J Gruber <git@drmicha.warpmail.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-02-25 15:25:16 -08:00
Blake Burkhart
b258116462 http: limit redirection depth
By default, libcurl will follow circular http redirects
forever. Let's put a cap on this so that somebody who can
trigger an automated fetch of an arbitrary repository (e.g.,
for CI) cannot convince git to loop infinitely.

The value chosen is 20, which is the same default that
Firefox uses.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-09-25 15:32:28 -07:00
Blake Burkhart
f4113cac0c http: limit redirection to protocol-whitelist
Previously, libcurl would follow redirection to any protocol
it was compiled to support. This is desirable to allow
redirection from HTTP to HTTPS. However, it would even
successfully allow redirection from HTTP to SFTP, a protocol
that git does not otherwise support at all. Furthermore
git's new protocol-whitelisting could be bypassed by
following a redirect within the remote helper, as it was
only enforced at transport selection time.

This patch limits redirects within libcurl to HTTP, HTTPS,
FTP and FTPS. If there is a protocol-whitelist present, this
list is limited to those also allowed by the whitelist. As
redirection happens from within libcurl, it is impossible
for an HTTP redirect to a protocol implemented within
another remote helper.
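
For example, a whitelist can be requested from the environment like
this (the clone URL is illustrative):

  GIT_ALLOW_PROTOCOL=http:https git clone http://example.com/repo.git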

When the curl version git was compiled with is too old to
support restrictions on protocol redirection, we warn the
user if GIT_ALLOW_PROTOCOL restrictions were requested. This
is a little inaccurate, as even without that variable in the
environment, we would still restrict SFTP, etc, and we do
not warn in that case. But anything else means we would
literally warn every time git accesses an http remote.

This commit includes a test, but it is not as robust as we
would hope. It redirects an http request to ftp, and checks
that curl complained about the protocol, which means that we
are relying on curl's specific error message to know what
happened. Ideally we would redirect to a working ftp server
and confirm that we can clone without protocol restrictions,
and not with them. But we do not have a portable way of
providing an ftp server, nor any other protocol that curl
supports (https is the closest, but we would have to deal
with certificates).

[jk: added test and version warning]

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-09-25 15:30:39 -07:00
Jeff King
89c57ab3f0 t: pass GIT_TRACE through Apache
Apache removes GIT_TRACE from the environment before running
git-http-backend. This can make it hard to debug the server
side of an http session. Let's let it through.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-03-12 23:25:06 -07:00
Jeff King
b0f4c9087e t: support clang/gcc AddressSanitizer
When git is compiled with "-fsanitize=address" (using clang
or gcc >= 4.8), all invocations of git will check for buffer
overflows. This is similar to running with valgrind, except
that it is more thorough (because of the compiler support,
function-local buffers can be checked, too) and runs much
faster (making it much less painful to run the whole test
suite with the checks turned on).

Unlike valgrind, the magic happens at compile-time, so we
don't need the same infrastructure in the test suite that we
did to support --valgrind. But there are two things we can
help with:

  1. On some platforms, the leak-detector is on by default,
     and causes every invocation of "git init" (and thus
     every test script) to fail. Since running git with
     the leak detector is pointless, let's shut it off
     automatically in the tests, unless the user has already
     configured it.

  2. When apache runs a CGI, it clears the environment of
     unknown variables. This means that the $ASAN_OPTIONS
     config doesn't make it to git-http-backend, and it
     dies due to the leak detector. Let's mark the variable
     as OK for apache to pass.

With these two changes, running

    make CC=clang CFLAGS=-fsanitize=address test

works out of the box.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-12-11 14:13:17 -08:00
Junio C Hamano
0ea47f9d33 signed push: teach smart-HTTP to pass "git push --signed" around
The "--signed" option received by "git push" is first passed to the
transport layer, which the native transport directly uses to notice
that a push certificate needs to be sent.  When the transport-helper
is involved, however, the option needs to be told to the helper with
set_helper_option(), and the helper needs to take necessary action.
For the smart-HTTP helper, the "necessary action" involves spawning
the "git send-pack" subprocess with the "--signed" option.

Once the above all gets wired in, the smart-HTTP transport now can
use the push certificate mechanism to authenticate its pushes.

Add a test that is modeled after tests for the native transport in
t5534-push-signed.sh to t5541-http-push-smart.sh.  Update the test
Apache configuration to pass GNUPGHOME environment variable through.
As PassEnv would trigger warnings for an environment variable that
is not set, export it from test-lib.sh set to a harmless value when
GnuPG is not being used in the tests.

Note that the added test is deliberately loose and does not check
the nonce in this step.  This is because the stateless RPC mode is
inevitably flaky and a nonce that comes back in the actual push
processing is one issued by a different process; if the two
interactions with the server crossed a second boundary, the nonces
will not match and such a check will fail.  A later patch in the
series will work around this shortcoming.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-09-17 14:58:04 -07:00
Yi EungJun
f34a655d4d http: fix charset detection of extract_content_type()
extract_content_type() could not extract a charset parameter if the
parameter is not the first one and there is whitespace followed by a
semicolon just before the parameter. For example:

    text/plain; format=fixed ;charset=utf-8

And it also could not handle correctly some other cases, such as:

    text/plain; charset=utf-8; format=fixed
    text/plain; some-param="a long value with ;semicolons;"; charset=utf-8

Thanks-to: Jeff King <peff@peff.net>
Signed-off-by: Yi EungJun <eungjun.yi@navercorp.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-06-17 15:25:00 -07:00
Jeff King
fc1b774c72 remote-curl: reencode http error messages
We currently recognize an error message with a content-type
"text/plain; charset=utf-16" as text, but we ignore the
charset parameter entirely. Let's encode it to
log_output_encoding, which is presumably something the
user's terminal can handle.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-05-27 09:59:22 -07:00
Jeff King
bf197fd7ee http: extract type/subtype portion of content-type
When we get a content-type from curl, we get the whole
header line, including any parameters, and without any
normalization (like downcasing or whitespace) applied.
If we later try to match it with strcmp() or even
strcasecmp(), we may get false negatives.

This could cause two visible behaviors:

  1. We might fail to recognize a smart-http server by its
     content-type.

  2. We might fail to relay text/plain error messages to
     users (especially if they contain a charset parameter).

This patch teaches the http code to extract and normalize
just the type/subtype portion of the string. This is
technically passing out less information to the callers, who
can no longer see the parameters. But none of the current
callers cares, and a future patch will add back an
easier-to-use method for accessing those parameters.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-05-27 09:57:00 -07:00