git-commit-vandalism/t/t5551-http-fetch-smart.sh

708 lines
22 KiB
Bash
Raw Normal View History

test smart http fetch and push The top level directory "/smart/" of the test Apache server is mapped through our git-http-backend CGI, but uses the same underlying repository space as the server's document root. This is the most simple installation possible. Server logs are checked to verify the client has accessed only the smart URLs during the test. During fetch testing the headers are also logged from libcurl to ensure we are making a reasonably sane HTTP request, and getting back reasonably sane response headers from the CGI. When validating the request headers used during smart fetch we munge away the actual Content-Length and replace it with the placeholder "xxx". This avoids unnecessary varability in the test caused by an unrelated change in the requested capabilities in the first want line of the request. However, we still want to look for and verify that Content-Length was used, because smaller payloads should be using Content-Length and not "Transfer-Encoding: chunked". When validating the server response headers we must discard both Content-Length and Transfer-Encoding, as Apache2 can use either format to return our response. During development of this test I observed Apache returning both forms, depending on when the processes got CPU time. If our CGI returned the pack data quickly, Apache just buffered the whole thing and returned a Content-Length. If our CGI took just a bit too long to complete, Apache flushed its buffer and instead used "Transfer-Encoding: chunked". Signed-off-by: Shawn O. Pearce <spearce@spearce.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-31 01:47:47 +01:00
#!/bin/sh
t5551: handle HTTP/2 when checking curl trace We check that the curl trace of a clone has the lines we expect, but this won't work when we run the test under t5559, because a few details are different under HTTP/2 (but nobody noticed because it only happens when you manually set GIT_TEST_PROTOCOL_VERSION to "0"). We can handle both HTTP protocols with a few tweaks: - we'll drop the HTTP "101 Switching Protocols" response, as well as various protocol upgrade headers. These details aren't interesting to us. We just want to make sure the correct protocol was used (and we do in the main request/response lines). - successful HTTP/2 responses just say "200" and not "200 OK"; we can normalize these - replace HTTP/1.1 with a variable in the request/response lines. We can use the existing $HTTP_PROTO for this, as it's already set to "HTTP/2" when appropriate. We do need to tweak the fallback value to "HTTP/1.1" to match what curl will write (prior to this patch, the fallback value didn't matter at all; we only checked if it was the literal string "HTTP/2"). Note that several lines still expect HTTP/1.1 unconditionally. The first request does so because the client requests an upgrade during the request. The POST request and response do so because you can't do an upgrade if there is a request body. (This will all be different if we trigger HTTP/2 via ALPN, but the tests aren't yet capable of that). This is enough to let: GIT_TEST_PROTOCOL_VERSION=0 ./t5559-http-fetch-smart-http2.sh pass the "clone http repository" test (but there are some other failures later on). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-02-23 11:56:17 +01:00
: ${HTTP_PROTO:=HTTP/1.1}
t: run t5551 tests with both HTTP and HTTP/2 We have occasionally seen bugs that affect Git running only against an HTTP/2 web server, not an HTTP one. For instance, b66c77a64e (http: match headers case-insensitively when redacting, 2021-09-22). But since we have no test coverage using HTTP/2, we only uncover these bugs in the wild. That commit gives a recipe for converting our Apache setup to support HTTP/2, but: - it's not necessarily portable - we don't want to just test HTTP/2; we really want to do a variety of basic tests for _both_ protocols This patch handles both problems by running a duplicate of t5551 (labeled as t5559 here) with an alternate-universe setup that enables HTTP/2. So we'll continue to run t5551 as before, but run the same battery of tests again with HTTP/2. If HTTP/2 isn't supported on a given platform, then t5559 should bail during the webserver setup, and gracefully skip all tests (unless GIT_TEST_HTTPD has been changed from "auto" to "yes", where the point is to complain when webserver setup fails). In theory other http-related test scripts could benefit from the same duplication, but doing t5551 should give us a reasonable check of basic functionality, and would have caught both bugs we've seen in the wild with HTTP/2. A few notes on the implementation: - a script enables the server side config by calling enable_http2 before starting the webserver. This avoids even trying to load any HTTP/2 config for t5551 (which is what lets it keep working with regular HTTP even on systems that don't support it). This also sets a prereq which can be used by individual tests. - As discussed in b66c77a64e, the http2 module isn't compatible with the "prefork" mpm, so we need to pick something else. I chose "event" here, which works on my Debian system, but it's possible there are platforms which would prefer something else. We can adjust that later if somebody finds such a platform. - The test "large fetch-pack requests can be sent using chunked encoding" makes sure we use a chunked transfer-encoding by looking for that header in the trace. But since HTTP/2 has its own streaming mechanisms, we won't find such a header. We could skip the test entirely by marking it with !HTTP2. But there's some value in making sure that the fetch itself succeeded. So instead, we'll confirm that either we're using HTTP2 _or_ we saw the expected chunked header. - the redaction tests fail under HTTP/2 with recent versions of curl. This is a bug! I've marked them with !HTTP2 here to skip them under t5559 for the moment. Using test_expect_failure would be more appropriate, but would require a bunch of boilerplate. Since we'll be fixing them momentarily, let's just skip them for now to keep the test suite bisectable, and we can re-enable them in the commit that fixes the bug. - one alternative layout would be to push most of t5551 into a lib-t5551.sh script, then source it from both t5551 and t5559. Keeping t5551 intact seemed a little simpler, as its one less level of indirection for people fixing bugs/regressions in the non-HTTP/2 tests. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-11 23:35:05 +01:00
test_description="test smart fetching over http via http-backend ($HTTP_PROTO)"
GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME=main
tests: mark tests relying on the current default for `init.defaultBranch` In addition to the manual adjustment to let the `linux-gcc` CI job run the test suite with `master` and then with `main`, this patch makes sure that GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME is set in all test scripts that currently rely on the initial branch name being `master by default. To determine which test scripts to mark up, the first step was to force-set the default branch name to `master` in - all test scripts that contain the keyword `master`, - t4211, which expects `t/t4211/history.export` with a hard-coded ref to initialize the default branch, - t5560 because it sources `t/t556x_common` which uses `master`, - t8002 and t8012 because both source `t/annotate-tests.sh` which also uses `master`) This trick was performed by this command: $ sed -i '/^ *\. \.\/\(test-lib\|lib-\(bash\|cvs\|git-svn\)\|gitweb-lib\)\.sh$/i\ GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME=master\ export GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME\ ' $(git grep -l master t/t[0-9]*.sh) \ t/t4211*.sh t/t5560*.sh t/t8002*.sh t/t8012*.sh After that, careful, manual inspection revealed that some of the test scripts containing the needle `master` do not actually rely on a specific default branch name: either they mention `master` only in a comment, or they initialize that branch specificially, or they do not actually refer to the current default branch. Therefore, the aforementioned modification was undone in those test scripts thusly: $ git checkout HEAD -- \ t/t0027-auto-crlf.sh t/t0060-path-utils.sh \ t/t1011-read-tree-sparse-checkout.sh \ t/t1305-config-include.sh t/t1309-early-config.sh \ t/t1402-check-ref-format.sh t/t1450-fsck.sh \ t/t2024-checkout-dwim.sh \ t/t2106-update-index-assume-unchanged.sh \ t/t3040-subprojects-basic.sh t/t3301-notes.sh \ t/t3308-notes-merge.sh t/t3423-rebase-reword.sh \ t/t3436-rebase-more-options.sh \ t/t4015-diff-whitespace.sh t/t4257-am-interactive.sh \ t/t5323-pack-redundant.sh t/t5401-update-hooks.sh \ t/t5511-refspec.sh t/t5526-fetch-submodules.sh \ t/t5529-push-errors.sh t/t5530-upload-pack-error.sh \ t/t5548-push-porcelain.sh \ t/t5552-skipping-fetch-negotiator.sh \ t/t5572-pull-submodule.sh t/t5608-clone-2gb.sh \ t/t5614-clone-submodules-shallow.sh \ t/t7508-status.sh t/t7606-merge-custom.sh \ t/t9302-fast-import-unpack-limit.sh We excluded one set of test scripts in these commands, though: the range of `git p4` tests. The reason? `git p4` stores the (foreign) remote branch in the branch called `p4/master`, which is obviously not the default branch. Manual analysis revealed that only five of these tests actually require a specific default branch name to pass; They were modified thusly: $ sed -i '/^ *\. \.\/lib-git-p4\.sh$/i\ GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME=master\ export GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME\ ' t/t980[0167]*.sh t/t9811*.sh Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-11-19 00:44:19 +01:00
export GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME
test smart http fetch and push The top level directory "/smart/" of the test Apache server is mapped through our git-http-backend CGI, but uses the same underlying repository space as the server's document root. This is the most simple installation possible. Server logs are checked to verify the client has accessed only the smart URLs during the test. During fetch testing the headers are also logged from libcurl to ensure we are making a reasonably sane HTTP request, and getting back reasonably sane response headers from the CGI. When validating the request headers used during smart fetch we munge away the actual Content-Length and replace it with the placeholder "xxx". This avoids unnecessary varability in the test caused by an unrelated change in the requested capabilities in the first want line of the request. However, we still want to look for and verify that Content-Length was used, because smaller payloads should be using Content-Length and not "Transfer-Encoding: chunked". When validating the server response headers we must discard both Content-Length and Transfer-Encoding, as Apache2 can use either format to return our response. During development of this test I observed Apache returning both forms, depending on when the processes got CPU time. If our CGI returned the pack data quickly, Apache just buffered the whole thing and returned a Content-Length. If our CGI took just a bit too long to complete, Apache flushed its buffer and instead used "Transfer-Encoding: chunked". Signed-off-by: Shawn O. Pearce <spearce@spearce.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-31 01:47:47 +01:00
. ./test-lib.sh
. "$TEST_DIRECTORY"/lib-httpd.sh
t: run t5551 tests with both HTTP and HTTP/2 We have occasionally seen bugs that affect Git running only against an HTTP/2 web server, not an HTTP one. For instance, b66c77a64e (http: match headers case-insensitively when redacting, 2021-09-22). But since we have no test coverage using HTTP/2, we only uncover these bugs in the wild. That commit gives a recipe for converting our Apache setup to support HTTP/2, but: - it's not necessarily portable - we don't want to just test HTTP/2; we really want to do a variety of basic tests for _both_ protocols This patch handles both problems by running a duplicate of t5551 (labeled as t5559 here) with an alternate-universe setup that enables HTTP/2. So we'll continue to run t5551 as before, but run the same battery of tests again with HTTP/2. If HTTP/2 isn't supported on a given platform, then t5559 should bail during the webserver setup, and gracefully skip all tests (unless GIT_TEST_HTTPD has been changed from "auto" to "yes", where the point is to complain when webserver setup fails). In theory other http-related test scripts could benefit from the same duplication, but doing t5551 should give us a reasonable check of basic functionality, and would have caught both bugs we've seen in the wild with HTTP/2. A few notes on the implementation: - a script enables the server side config by calling enable_http2 before starting the webserver. This avoids even trying to load any HTTP/2 config for t5551 (which is what lets it keep working with regular HTTP even on systems that don't support it). This also sets a prereq which can be used by individual tests. - As discussed in b66c77a64e, the http2 module isn't compatible with the "prefork" mpm, so we need to pick something else. I chose "event" here, which works on my Debian system, but it's possible there are platforms which would prefer something else. We can adjust that later if somebody finds such a platform. - The test "large fetch-pack requests can be sent using chunked encoding" makes sure we use a chunked transfer-encoding by looking for that header in the trace. But since HTTP/2 has its own streaming mechanisms, we won't find such a header. We could skip the test entirely by marking it with !HTTP2. But there's some value in making sure that the fetch itself succeeded. So instead, we'll confirm that either we're using HTTP2 _or_ we saw the expected chunked header. - the redaction tests fail under HTTP/2 with recent versions of curl. This is a bug! I've marked them with !HTTP2 here to skip them under t5559 for the moment. Using test_expect_failure would be more appropriate, but would require a bunch of boilerplate. Since we'll be fixing them momentarily, let's just skip them for now to keep the test suite bisectable, and we can re-enable them in the commit that fixes the bug. - one alternative layout would be to push most of t5551 into a lib-t5551.sh script, then source it from both t5551 and t5559. Keeping t5551 intact seemed a little simpler, as its one less level of indirection for people fixing bugs/regressions in the non-HTTP/2 tests. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-11 23:35:05 +01:00
test "$HTTP_PROTO" = "HTTP/2" && enable_http2
test smart http fetch and push The top level directory "/smart/" of the test Apache server is mapped through our git-http-backend CGI, but uses the same underlying repository space as the server's document root. This is the most simple installation possible. Server logs are checked to verify the client has accessed only the smart URLs during the test. During fetch testing the headers are also logged from libcurl to ensure we are making a reasonably sane HTTP request, and getting back reasonably sane response headers from the CGI. When validating the request headers used during smart fetch we munge away the actual Content-Length and replace it with the placeholder "xxx". This avoids unnecessary varability in the test caused by an unrelated change in the requested capabilities in the first want line of the request. However, we still want to look for and verify that Content-Length was used, because smaller payloads should be using Content-Length and not "Transfer-Encoding: chunked". When validating the server response headers we must discard both Content-Length and Transfer-Encoding, as Apache2 can use either format to return our response. During development of this test I observed Apache returning both forms, depending on when the processes got CPU time. If our CGI returned the pack data quickly, Apache just buffered the whole thing and returned a Content-Length. If our CGI took just a bit too long to complete, Apache flushed its buffer and instead used "Transfer-Encoding: chunked". Signed-off-by: Shawn O. Pearce <spearce@spearce.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-31 01:47:47 +01:00
start_httpd
t: run t5551 tests with both HTTP and HTTP/2 We have occasionally seen bugs that affect Git running only against an HTTP/2 web server, not an HTTP one. For instance, b66c77a64e (http: match headers case-insensitively when redacting, 2021-09-22). But since we have no test coverage using HTTP/2, we only uncover these bugs in the wild. That commit gives a recipe for converting our Apache setup to support HTTP/2, but: - it's not necessarily portable - we don't want to just test HTTP/2; we really want to do a variety of basic tests for _both_ protocols This patch handles both problems by running a duplicate of t5551 (labeled as t5559 here) with an alternate-universe setup that enables HTTP/2. So we'll continue to run t5551 as before, but run the same battery of tests again with HTTP/2. If HTTP/2 isn't supported on a given platform, then t5559 should bail during the webserver setup, and gracefully skip all tests (unless GIT_TEST_HTTPD has been changed from "auto" to "yes", where the point is to complain when webserver setup fails). In theory other http-related test scripts could benefit from the same duplication, but doing t5551 should give us a reasonable check of basic functionality, and would have caught both bugs we've seen in the wild with HTTP/2. A few notes on the implementation: - a script enables the server side config by calling enable_http2 before starting the webserver. This avoids even trying to load any HTTP/2 config for t5551 (which is what lets it keep working with regular HTTP even on systems that don't support it). This also sets a prereq which can be used by individual tests. - As discussed in b66c77a64e, the http2 module isn't compatible with the "prefork" mpm, so we need to pick something else. I chose "event" here, which works on my Debian system, but it's possible there are platforms which would prefer something else. We can adjust that later if somebody finds such a platform. - The test "large fetch-pack requests can be sent using chunked encoding" makes sure we use a chunked transfer-encoding by looking for that header in the trace. But since HTTP/2 has its own streaming mechanisms, we won't find such a header. We could skip the test entirely by marking it with !HTTP2. But there's some value in making sure that the fetch itself succeeded. So instead, we'll confirm that either we're using HTTP2 _or_ we saw the expected chunked header. - the redaction tests fail under HTTP/2 with recent versions of curl. This is a bug! I've marked them with !HTTP2 here to skip them under t5559 for the moment. Using test_expect_failure would be more appropriate, but would require a bunch of boilerplate. Since we'll be fixing them momentarily, let's just skip them for now to keep the test suite bisectable, and we can re-enable them in the commit that fixes the bug. - one alternative layout would be to push most of t5551 into a lib-t5551.sh script, then source it from both t5551 and t5559. Keeping t5551 intact seemed a little simpler, as its one less level of indirection for people fixing bugs/regressions in the non-HTTP/2 tests. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-11 23:35:05 +01:00
test_expect_success HTTP2 'enable client-side http/2' '
git config --global http.version HTTP/2
'
test smart http fetch and push The top level directory "/smart/" of the test Apache server is mapped through our git-http-backend CGI, but uses the same underlying repository space as the server's document root. This is the most simple installation possible. Server logs are checked to verify the client has accessed only the smart URLs during the test. During fetch testing the headers are also logged from libcurl to ensure we are making a reasonably sane HTTP request, and getting back reasonably sane response headers from the CGI. When validating the request headers used during smart fetch we munge away the actual Content-Length and replace it with the placeholder "xxx". This avoids unnecessary varability in the test caused by an unrelated change in the requested capabilities in the first want line of the request. However, we still want to look for and verify that Content-Length was used, because smaller payloads should be using Content-Length and not "Transfer-Encoding: chunked". When validating the server response headers we must discard both Content-Length and Transfer-Encoding, as Apache2 can use either format to return our response. During development of this test I observed Apache returning both forms, depending on when the processes got CPU time. If our CGI returned the pack data quickly, Apache just buffered the whole thing and returned a Content-Length. If our CGI took just a bit too long to complete, Apache flushed its buffer and instead used "Transfer-Encoding: chunked". Signed-off-by: Shawn O. Pearce <spearce@spearce.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-31 01:47:47 +01:00
test_expect_success 'setup repository' '
git config push.default matching &&
test smart http fetch and push The top level directory "/smart/" of the test Apache server is mapped through our git-http-backend CGI, but uses the same underlying repository space as the server's document root. This is the most simple installation possible. Server logs are checked to verify the client has accessed only the smart URLs during the test. During fetch testing the headers are also logged from libcurl to ensure we are making a reasonably sane HTTP request, and getting back reasonably sane response headers from the CGI. When validating the request headers used during smart fetch we munge away the actual Content-Length and replace it with the placeholder "xxx". This avoids unnecessary varability in the test caused by an unrelated change in the requested capabilities in the first want line of the request. However, we still want to look for and verify that Content-Length was used, because smaller payloads should be using Content-Length and not "Transfer-Encoding: chunked". When validating the server response headers we must discard both Content-Length and Transfer-Encoding, as Apache2 can use either format to return our response. During development of this test I observed Apache returning both forms, depending on when the processes got CPU time. If our CGI returned the pack data quickly, Apache just buffered the whole thing and returned a Content-Length. If our CGI took just a bit too long to complete, Apache flushed its buffer and instead used "Transfer-Encoding: chunked". Signed-off-by: Shawn O. Pearce <spearce@spearce.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-31 01:47:47 +01:00
echo content >file &&
git add file &&
git commit -m one
'
test_expect_success 'create http-accessible bare repository' '
mkdir "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
(cd "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
git --bare init
) &&
git remote add public "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
git push public main:main
test smart http fetch and push The top level directory "/smart/" of the test Apache server is mapped through our git-http-backend CGI, but uses the same underlying repository space as the server's document root. This is the most simple installation possible. Server logs are checked to verify the client has accessed only the smart URLs during the test. During fetch testing the headers are also logged from libcurl to ensure we are making a reasonably sane HTTP request, and getting back reasonably sane response headers from the CGI. When validating the request headers used during smart fetch we munge away the actual Content-Length and replace it with the placeholder "xxx". This avoids unnecessary varability in the test caused by an unrelated change in the requested capabilities in the first want line of the request. However, we still want to look for and verify that Content-Length was used, because smaller payloads should be using Content-Length and not "Transfer-Encoding: chunked". When validating the server response headers we must discard both Content-Length and Transfer-Encoding, as Apache2 can use either format to return our response. During development of this test I observed Apache returning both forms, depending on when the processes got CPU time. If our CGI returned the pack data quickly, Apache just buffered the whole thing and returned a Content-Length. If our CGI took just a bit too long to complete, Apache flushed its buffer and instead used "Transfer-Encoding: chunked". Signed-off-by: Shawn O. Pearce <spearce@spearce.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-31 01:47:47 +01:00
'
setup_askpass_helper
test smart http fetch and push The top level directory "/smart/" of the test Apache server is mapped through our git-http-backend CGI, but uses the same underlying repository space as the server's document root. This is the most simple installation possible. Server logs are checked to verify the client has accessed only the smart URLs during the test. During fetch testing the headers are also logged from libcurl to ensure we are making a reasonably sane HTTP request, and getting back reasonably sane response headers from the CGI. When validating the request headers used during smart fetch we munge away the actual Content-Length and replace it with the placeholder "xxx". This avoids unnecessary varability in the test caused by an unrelated change in the requested capabilities in the first want line of the request. However, we still want to look for and verify that Content-Length was used, because smaller payloads should be using Content-Length and not "Transfer-Encoding: chunked". When validating the server response headers we must discard both Content-Length and Transfer-Encoding, as Apache2 can use either format to return our response. During development of this test I observed Apache returning both forms, depending on when the processes got CPU time. If our CGI returned the pack data quickly, Apache just buffered the whole thing and returned a Content-Length. If our CGI took just a bit too long to complete, Apache flushed its buffer and instead used "Transfer-Encoding: chunked". Signed-off-by: Shawn O. Pearce <spearce@spearce.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-31 01:47:47 +01:00
test_expect_success 'clone http repository' '
if test_have_prereq HTTP2 && test "$HTTPD_PROTO" = "https"
then
# ALPN lets us immediately use HTTP/2; likewise, POSTs with
# bodies can use it because they do not need to upgrade
INITIAL_PROTO=HTTP/2
else
# either we are not using HTTP/2, or the initial
# request is sent via HTTP/1.1 and asks for upgrade
INITIAL_PROTO=HTTP/1.1
fi &&
cat >exp.raw <<-EOF &&
> GET /smart/repo.git/info/refs?service=git-upload-pack $INITIAL_PROTO
> accept: */*
> accept-encoding: ENCODINGS
> accept-language: ko-KR, *;q=0.9
> pragma: no-cache
{V2} > git-protocol: version=2
t5551: handle HTTP/2 when checking curl trace We check that the curl trace of a clone has the lines we expect, but this won't work when we run the test under t5559, because a few details are different under HTTP/2 (but nobody noticed because it only happens when you manually set GIT_TEST_PROTOCOL_VERSION to "0"). We can handle both HTTP protocols with a few tweaks: - we'll drop the HTTP "101 Switching Protocols" response, as well as various protocol upgrade headers. These details aren't interesting to us. We just want to make sure the correct protocol was used (and we do in the main request/response lines). - successful HTTP/2 responses just say "200" and not "200 OK"; we can normalize these - replace HTTP/1.1 with a variable in the request/response lines. We can use the existing $HTTP_PROTO for this, as it's already set to "HTTP/2" when appropriate. We do need to tweak the fallback value to "HTTP/1.1" to match what curl will write (prior to this patch, the fallback value didn't matter at all; we only checked if it was the literal string "HTTP/2"). Note that several lines still expect HTTP/1.1 unconditionally. The first request does so because the client requests an upgrade during the request. The POST request and response do so because you can't do an upgrade if there is a request body. (This will all be different if we trigger HTTP/2 via ALPN, but the tests aren't yet capable of that). This is enough to let: GIT_TEST_PROTOCOL_VERSION=0 ./t5559-http-fetch-smart-http2.sh pass the "clone http repository" test (but there are some other failures later on). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-02-23 11:56:17 +01:00
< $HTTP_PROTO 200 OK
< pragma: no-cache
< cache-control: no-cache, max-age=0, must-revalidate
< content-type: application/x-git-upload-pack-advertisement
> POST /smart/repo.git/git-upload-pack $INITIAL_PROTO
> accept-encoding: ENCODINGS
> content-type: application/x-git-upload-pack-request
> accept: application/x-git-upload-pack-result
> accept-language: ko-KR, *;q=0.9
{V2} > git-protocol: version=2
> content-length: xxx
< $INITIAL_PROTO 200 OK
< pragma: no-cache
< cache-control: no-cache, max-age=0, must-revalidate
< content-type: application/x-git-upload-pack-result
{V2} > POST /smart/repo.git/git-upload-pack $INITIAL_PROTO
{V2} > accept-encoding: ENCODINGS
{V2} > content-type: application/x-git-upload-pack-request
{V2} > accept: application/x-git-upload-pack-result
{V2} > accept-language: ko-KR, *;q=0.9
{V2} > git-protocol: version=2
{V2} > content-length: xxx
{V2} < $INITIAL_PROTO 200 OK
{V2} < pragma: no-cache
{V2} < cache-control: no-cache, max-age=0, must-revalidate
{V2} < content-type: application/x-git-upload-pack-result
EOF
if test "$GIT_TEST_PROTOCOL_VERSION" = 0
then
sed "/^{V2}/d" <exp.raw >exp
else
sed "s/^{V2} //" <exp.raw >exp
fi &&
GIT_TRACE_CURL=true LANGUAGE="ko_KR.UTF-8" \
git clone --quiet $HTTPD_URL/smart/repo.git clone 2>err &&
test smart http fetch and push The top level directory "/smart/" of the test Apache server is mapped through our git-http-backend CGI, but uses the same underlying repository space as the server's document root. This is the most simple installation possible. Server logs are checked to verify the client has accessed only the smart URLs during the test. During fetch testing the headers are also logged from libcurl to ensure we are making a reasonably sane HTTP request, and getting back reasonably sane response headers from the CGI. When validating the request headers used during smart fetch we munge away the actual Content-Length and replace it with the placeholder "xxx". This avoids unnecessary varability in the test caused by an unrelated change in the requested capabilities in the first want line of the request. However, we still want to look for and verify that Content-Length was used, because smaller payloads should be using Content-Length and not "Transfer-Encoding: chunked". When validating the server response headers we must discard both Content-Length and Transfer-Encoding, as Apache2 can use either format to return our response. During development of this test I observed Apache returning both forms, depending on when the processes got CPU time. If our CGI returned the pack data quickly, Apache just buffered the whole thing and returned a Content-Length. If our CGI took just a bit too long to complete, Apache flushed its buffer and instead used "Transfer-Encoding: chunked". Signed-off-by: Shawn O. Pearce <spearce@spearce.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-31 01:47:47 +01:00
test_cmp file clone/file &&
tr '\''\015'\'' Q <err |
perl -pe '\''
s/(Send|Recv) header: ([A-Za-z0-9-]+):/
"$1 header: " . lc($2) . ":"
/e;
'\'' |
test smart http fetch and push The top level directory "/smart/" of the test Apache server is mapped through our git-http-backend CGI, but uses the same underlying repository space as the server's document root. This is the most simple installation possible. Server logs are checked to verify the client has accessed only the smart URLs during the test. During fetch testing the headers are also logged from libcurl to ensure we are making a reasonably sane HTTP request, and getting back reasonably sane response headers from the CGI. When validating the request headers used during smart fetch we munge away the actual Content-Length and replace it with the placeholder "xxx". This avoids unnecessary varability in the test caused by an unrelated change in the requested capabilities in the first want line of the request. However, we still want to look for and verify that Content-Length was used, because smaller payloads should be using Content-Length and not "Transfer-Encoding: chunked". When validating the server response headers we must discard both Content-Length and Transfer-Encoding, as Apache2 can use either format to return our response. During development of this test I observed Apache returning both forms, depending on when the processes got CPU time. If our CGI returned the pack data quickly, Apache just buffered the whole thing and returned a Content-Length. If our CGI took just a bit too long to complete, Apache flushed its buffer and instead used "Transfer-Encoding: chunked". Signed-off-by: Shawn O. Pearce <spearce@spearce.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-31 01:47:47 +01:00
sed -e "
s/Q\$//
/^[^<=]/d
/^== Info:/d
/^=> Send header, /d
/^=> Send header:$/d
/^<= Recv header, /d
/^<= Recv header:$/d
s/=> Send header: //
s/= Recv header://
/^<= Recv data/d
/^=> Send data/d
/^<= Recv SSL data/d
/^=> Send SSL data/d
/^$/d
/^< $/d
test smart http fetch and push The top level directory "/smart/" of the test Apache server is mapped through our git-http-backend CGI, but uses the same underlying repository space as the server's document root. This is the most simple installation possible. Server logs are checked to verify the client has accessed only the smart URLs during the test. During fetch testing the headers are also logged from libcurl to ensure we are making a reasonably sane HTTP request, and getting back reasonably sane response headers from the CGI. When validating the request headers used during smart fetch we munge away the actual Content-Length and replace it with the placeholder "xxx". This avoids unnecessary varability in the test caused by an unrelated change in the requested capabilities in the first want line of the request. However, we still want to look for and verify that Content-Length was used, because smaller payloads should be using Content-Length and not "Transfer-Encoding: chunked". When validating the server response headers we must discard both Content-Length and Transfer-Encoding, as Apache2 can use either format to return our response. During development of this test I observed Apache returning both forms, depending on when the processes got CPU time. If our CGI returned the pack data quickly, Apache just buffered the whole thing and returned a Content-Length. If our CGI took just a bit too long to complete, Apache flushed its buffer and instead used "Transfer-Encoding: chunked". Signed-off-by: Shawn O. Pearce <spearce@spearce.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-31 01:47:47 +01:00
/^[^><]/{
s/^/> /
}
t5551: handle HTTP/2 when checking curl trace We check that the curl trace of a clone has the lines we expect, but this won't work when we run the test under t5559, because a few details are different under HTTP/2 (but nobody noticed because it only happens when you manually set GIT_TEST_PROTOCOL_VERSION to "0"). We can handle both HTTP protocols with a few tweaks: - we'll drop the HTTP "101 Switching Protocols" response, as well as various protocol upgrade headers. These details aren't interesting to us. We just want to make sure the correct protocol was used (and we do in the main request/response lines). - successful HTTP/2 responses just say "200" and not "200 OK"; we can normalize these - replace HTTP/1.1 with a variable in the request/response lines. We can use the existing $HTTP_PROTO for this, as it's already set to "HTTP/2" when appropriate. We do need to tweak the fallback value to "HTTP/1.1" to match what curl will write (prior to this patch, the fallback value didn't matter at all; we only checked if it was the literal string "HTTP/2"). Note that several lines still expect HTTP/1.1 unconditionally. The first request does so because the client requests an upgrade during the request. The POST request and response do so because you can't do an upgrade if there is a request body. (This will all be different if we trigger HTTP/2 via ALPN, but the tests aren't yet capable of that). This is enough to let: GIT_TEST_PROTOCOL_VERSION=0 ./t5559-http-fetch-smart-http2.sh pass the "clone http repository" test (but there are some other failures later on). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-02-23 11:56:17 +01:00
/^< HTTP/ {
s/200$/200 OK/
}
/^< HTTP\\/1.1 101/d
/^[><] connection: /d
/^[><] upgrade: /d
/^> http2-settings: /d
/^> user-agent: /d
/^> host: /d
/^> POST /,$ {
/^> Accept: [*]\\/[*]/d
}
s/^> content-length: .*/> content-length: xxx/
/^> 00..want /d
/^> 00.*done/d
test smart http fetch and push The top level directory "/smart/" of the test Apache server is mapped through our git-http-backend CGI, but uses the same underlying repository space as the server's document root. This is the most simple installation possible. Server logs are checked to verify the client has accessed only the smart URLs during the test. During fetch testing the headers are also logged from libcurl to ensure we are making a reasonably sane HTTP request, and getting back reasonably sane response headers from the CGI. When validating the request headers used during smart fetch we munge away the actual Content-Length and replace it with the placeholder "xxx". This avoids unnecessary varability in the test caused by an unrelated change in the requested capabilities in the first want line of the request. However, we still want to look for and verify that Content-Length was used, because smaller payloads should be using Content-Length and not "Transfer-Encoding: chunked". When validating the server response headers we must discard both Content-Length and Transfer-Encoding, as Apache2 can use either format to return our response. During development of this test I observed Apache returning both forms, depending on when the processes got CPU time. If our CGI returned the pack data quickly, Apache just buffered the whole thing and returned a Content-Length. If our CGI took just a bit too long to complete, Apache flushed its buffer and instead used "Transfer-Encoding: chunked". Signed-off-by: Shawn O. Pearce <spearce@spearce.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-31 01:47:47 +01:00
/^< server: /d
/^< expires: /d
/^< date: /d
/^< content-length: /d
/^< transfer-encoding: /d
" >actual &&
sed -e "s/^> accept-encoding: .*/> accept-encoding: ENCODINGS/" \
actual >actual.smudged &&
test_cmp exp actual.smudged &&
grep "accept-encoding:.*gzip" actual >actual.gzip
test smart http fetch and push The top level directory "/smart/" of the test Apache server is mapped through our git-http-backend CGI, but uses the same underlying repository space as the server's document root. This is the most simple installation possible. Server logs are checked to verify the client has accessed only the smart URLs during the test. During fetch testing the headers are also logged from libcurl to ensure we are making a reasonably sane HTTP request, and getting back reasonably sane response headers from the CGI. When validating the request headers used during smart fetch we munge away the actual Content-Length and replace it with the placeholder "xxx". This avoids unnecessary varability in the test caused by an unrelated change in the requested capabilities in the first want line of the request. However, we still want to look for and verify that Content-Length was used, because smaller payloads should be using Content-Length and not "Transfer-Encoding: chunked". When validating the server response headers we must discard both Content-Length and Transfer-Encoding, as Apache2 can use either format to return our response. During development of this test I observed Apache returning both forms, depending on when the processes got CPU time. If our CGI returned the pack data quickly, Apache just buffered the whole thing and returned a Content-Length. If our CGI took just a bit too long to complete, Apache flushed its buffer and instead used "Transfer-Encoding: chunked". Signed-off-by: Shawn O. Pearce <spearce@spearce.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-31 01:47:47 +01:00
'
test_expect_success 'fetch changes via http' '
echo content >>file &&
git commit -a -m two &&
git push public &&
test smart http fetch and push The top level directory "/smart/" of the test Apache server is mapped through our git-http-backend CGI, but uses the same underlying repository space as the server's document root. This is the most simple installation possible. Server logs are checked to verify the client has accessed only the smart URLs during the test. During fetch testing the headers are also logged from libcurl to ensure we are making a reasonably sane HTTP request, and getting back reasonably sane response headers from the CGI. When validating the request headers used during smart fetch we munge away the actual Content-Length and replace it with the placeholder "xxx". This avoids unnecessary varability in the test caused by an unrelated change in the requested capabilities in the first want line of the request. However, we still want to look for and verify that Content-Length was used, because smaller payloads should be using Content-Length and not "Transfer-Encoding: chunked". When validating the server response headers we must discard both Content-Length and Transfer-Encoding, as Apache2 can use either format to return our response. During development of this test I observed Apache returning both forms, depending on when the processes got CPU time. If our CGI returned the pack data quickly, Apache just buffered the whole thing and returned a Content-Length. If our CGI took just a bit too long to complete, Apache flushed its buffer and instead used "Transfer-Encoding: chunked". Signed-off-by: Shawn O. Pearce <spearce@spearce.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-31 01:47:47 +01:00
(cd clone && git pull) &&
test_cmp file clone/file
'
test_expect_success 'used upload-pack service' '
strip_access_log >log &&
grep "GET /smart/repo.git/info/refs?service=git-upload-pack HTTP/[0-9.]* 200" log &&
grep "POST /smart/repo.git/git-upload-pack HTTP/[0-9.]* 200" log
test smart http fetch and push The top level directory "/smart/" of the test Apache server is mapped through our git-http-backend CGI, but uses the same underlying repository space as the server's document root. This is the most simple installation possible. Server logs are checked to verify the client has accessed only the smart URLs during the test. During fetch testing the headers are also logged from libcurl to ensure we are making a reasonably sane HTTP request, and getting back reasonably sane response headers from the CGI. When validating the request headers used during smart fetch we munge away the actual Content-Length and replace it with the placeholder "xxx". This avoids unnecessary varability in the test caused by an unrelated change in the requested capabilities in the first want line of the request. However, we still want to look for and verify that Content-Length was used, because smaller payloads should be using Content-Length and not "Transfer-Encoding: chunked". When validating the server response headers we must discard both Content-Length and Transfer-Encoding, as Apache2 can use either format to return our response. During development of this test I observed Apache returning both forms, depending on when the processes got CPU time. If our CGI returned the pack data quickly, Apache just buffered the whole thing and returned a Content-Length. If our CGI took just a bit too long to complete, Apache flushed its buffer and instead used "Transfer-Encoding: chunked". Signed-off-by: Shawn O. Pearce <spearce@spearce.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-31 01:47:47 +01:00
'
test_expect_success 'follow redirects (301)' '
git clone $HTTPD_URL/smart-redir-perm/repo.git --quiet repo-p
'
test_expect_success 'follow redirects (302)' '
git clone $HTTPD_URL/smart-redir-temp/repo.git --quiet repo-t
'
remote-curl: rewrite base url from info/refs redirects For efficiency and security reasons, an earlier commit in this series taught http_get_* to re-write the base url based on redirections we saw while making a specific request. This commit wires that option into the info/refs request, meaning that a redirect from http://example.com/foo.git/info/refs to https://example.com/bar.git/info/refs will behave as if "https://example.com/bar.git" had been provided to git in the first place. The tests bear some explanation. We introduce two new hierearchies into the httpd test config: 1. Requests to /smart-redir-limited will work only for the initial info/refs request, but not any subsequent requests. As a result, we can confirm whether the client is re-rooting its requests after the initial contact, since otherwise it will fail (it will ask for "repo.git/git-upload-pack", which is not redirected). 2. Requests to smart-redir-auth will redirect, and require auth after the redirection. Since we are using the redirected base for further requests, we also update the credential struct, in order not to mislead the user (or credential helpers) about which credential is needed. We can therefore check the GIT_ASKPASS prompts to make sure we are prompting for the new location. Because we have neither multiple servers nor https support in our test setup, we can only redirect between paths, meaning we need to turn on credential.useHttpPath to see the difference. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2013-09-28 10:35:35 +02:00
test_expect_success 'redirects re-root further requests' '
git clone $HTTPD_URL/smart-redir-limited/repo.git repo-redir-limited
'
http: always update the base URL for redirects If a malicious server redirects the initial ref advertisement, it may be able to leak sha1s from other, unrelated servers that the client has access to. For example, imagine that Alice is a git user, she has access to a private repository on a server hosted by Bob, and Mallory runs a malicious server and wants to find out about Bob's private repository. Mallory asks Alice to clone an unrelated repository from her over HTTP. When Alice's client contacts Mallory's server for the initial ref advertisement, the server issues an HTTP redirect for Bob's server. Alice contacts Bob's server and gets the ref advertisement for the private repository. If there is anything to fetch, she then follows up by asking the server for one or more sha1 objects. But who is the server? If it is still Mallory's server, then Alice will leak the existence of those sha1s to her. Since commit c93c92f30 (http: update base URLs when we see redirects, 2013-09-28), the client usually rewrites the base URL such that all further requests will go to Bob's server. But this is done by textually matching the URL. If we were originally looking for "http://mallory/repo.git/info/refs", and we got pointed at "http://bob/other.git/info/refs", then we know that the right root is "http://bob/other.git". If the redirect appears to change more than just the root, we punt and continue to use the original server. E.g., imagine the redirect adds a URL component that Bob's server will ignore, like "http://bob/other.git/info/refs?dummy=1". We can solve this by aborting in this case rather than silently continuing to use Mallory's server. In addition to protecting from sha1 leakage, it's arguably safer and more sane to refuse a confusing redirect like that in general. For example, part of the motivation in c93c92f30 is avoiding accidentally sending credentials over clear http, just to get a response that says "try again over https". So even in a non-malicious case, we'd prefer to err on the side of caution. The downside is that it's possible this will break a legitimate but complicated server-side redirection scheme. The setup given in the newly added test does work, but it's convoluted enough that we don't need to care about it. A more plausible case would be a server which redirects a request for "info/refs?service=git-upload-pack" to just "info/refs" (because it does not do smart HTTP, and for some reason really dislikes query parameters). Right now we would transparently downgrade to dumb-http, but with this patch, we'd complain (and the user would have to set GIT_SMART_HTTP=0 to fetch). Reported-by: Jann Horn <jannh@google.com> Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-12-06 19:24:35 +01:00
test_expect_success 're-rooting dies on insane schemes' '
test_must_fail git clone $HTTPD_URL/insane-redir/repo.git insane
'
test_expect_success 'clone from password-protected repository' '
echo two >expect &&
set_askpass user@host pass@host &&
git clone --bare "$HTTPD_URL/auth/smart/repo.git" smart-auth &&
expect_askpass both user@host &&
git --git-dir=smart-auth log -1 --format=%s >actual &&
test_cmp expect actual
'
test_expect_success 'clone from auth-only-for-push repository' '
echo two >expect &&
set_askpass wrong &&
git clone --bare "$HTTPD_URL/auth-push/smart/repo.git" smart-noauth &&
expect_askpass none &&
git --git-dir=smart-noauth log -1 --format=%s >actual &&
test_cmp expect actual
'
test_expect_success 'clone from auth-only-for-objects repository' '
echo two >expect &&
set_askpass user@host pass@host &&
git clone --bare "$HTTPD_URL/auth-fetch/smart/repo.git" half-auth &&
expect_askpass both user@host &&
git --git-dir=half-auth log -1 --format=%s >actual &&
test_cmp expect actual
'
test_expect_success 'no-op half-auth fetch does not require a password' '
set_askpass wrong &&
# NEEDSWORK: When using HTTP(S), protocol v0 supports a "half-auth"
# configuration with authentication required only when downloading
# objects and not refs, by having the HTTP server only require
# authentication for the "git-upload-pack" path and not "info/refs".
# This is not possible with protocol v2, since both objects and refs
# are obtained from the "git-upload-pack" path. A solution to this is
# to teach the server and client to be able to inline ls-refs requests
# as an Extra Parameter (see "git help gitformat-pack-protocol"), so that
# "info/refs" can serve refs, just like it does in protocol v0.
GIT_TEST_PROTOCOL_VERSION=0 git --git-dir=half-auth fetch &&
expect_askpass none
'
remote-curl: rewrite base url from info/refs redirects For efficiency and security reasons, an earlier commit in this series taught http_get_* to re-write the base url based on redirections we saw while making a specific request. This commit wires that option into the info/refs request, meaning that a redirect from http://example.com/foo.git/info/refs to https://example.com/bar.git/info/refs will behave as if "https://example.com/bar.git" had been provided to git in the first place. The tests bear some explanation. We introduce two new hierearchies into the httpd test config: 1. Requests to /smart-redir-limited will work only for the initial info/refs request, but not any subsequent requests. As a result, we can confirm whether the client is re-rooting its requests after the initial contact, since otherwise it will fail (it will ask for "repo.git/git-upload-pack", which is not redirected). 2. Requests to smart-redir-auth will redirect, and require auth after the redirection. Since we are using the redirected base for further requests, we also update the credential struct, in order not to mislead the user (or credential helpers) about which credential is needed. We can therefore check the GIT_ASKPASS prompts to make sure we are prompting for the new location. Because we have neither multiple servers nor https support in our test setup, we can only redirect between paths, meaning we need to turn on credential.useHttpPath to see the difference. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2013-09-28 10:35:35 +02:00
test_expect_success 'redirects send auth to new location' '
set_askpass user@host pass@host &&
remote-curl: rewrite base url from info/refs redirects For efficiency and security reasons, an earlier commit in this series taught http_get_* to re-write the base url based on redirections we saw while making a specific request. This commit wires that option into the info/refs request, meaning that a redirect from http://example.com/foo.git/info/refs to https://example.com/bar.git/info/refs will behave as if "https://example.com/bar.git" had been provided to git in the first place. The tests bear some explanation. We introduce two new hierearchies into the httpd test config: 1. Requests to /smart-redir-limited will work only for the initial info/refs request, but not any subsequent requests. As a result, we can confirm whether the client is re-rooting its requests after the initial contact, since otherwise it will fail (it will ask for "repo.git/git-upload-pack", which is not redirected). 2. Requests to smart-redir-auth will redirect, and require auth after the redirection. Since we are using the redirected base for further requests, we also update the credential struct, in order not to mislead the user (or credential helpers) about which credential is needed. We can therefore check the GIT_ASKPASS prompts to make sure we are prompting for the new location. Because we have neither multiple servers nor https support in our test setup, we can only redirect between paths, meaning we need to turn on credential.useHttpPath to see the difference. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2013-09-28 10:35:35 +02:00
git -c credential.useHttpPath=true \
clone $HTTPD_URL/smart-redir-auth/repo.git repo-redir-auth &&
expect_askpass both user@host auth/smart/repo.git
'
test_expect_success 'GIT_TRACE_CURL redacts auth details' '
rm -rf redact-auth trace &&
set_askpass user@host pass@host &&
GIT_TRACE_CURL="$(pwd)/trace" git clone --bare "$HTTPD_URL/auth/smart/repo.git" redact-auth &&
expect_askpass both user@host &&
# Ensure that there is no "Basic" followed by a base64 string, but that
# the auth details are redacted
http: match headers case-insensitively when redacting When HTTP/2 is in use, we fail to correctly redact "Authorization" (and other) headers in our GIT_TRACE_CURL output. We get the headers in our CURLOPT_DEBUGFUNCTION callback, curl_trace(). It passes them along to curl_dump_header(), which in turn checks redact_sensitive_header(). We see the headers as a text buffer like: Host: ... Authorization: Basic ... After breaking it into lines, we match each header using skip_prefix(). This is case-sensitive, even though HTTP headers are case-insensitive. This has worked reliably in the past because these headers are generated by curl itself, which is predictable in what it sends. But when HTTP/2 is in use, instead we get a lower-case "authorization:" header, and we fail to match it. The fix is simple: we should match with skip_iprefix(). Testing is more complicated, though. We do have a test for the redacting feature, but we don't hit the problem case because our test Apache setup does not understand HTTP/2. You can reproduce the issue by applying this on top of the test change in this patch: diff --git a/t/lib-httpd/apache.conf b/t/lib-httpd/apache.conf index afa91e38b0..19267c7107 100644 --- a/t/lib-httpd/apache.conf +++ b/t/lib-httpd/apache.conf @@ -29,6 +29,9 @@ ErrorLog error.log LoadModule setenvif_module modules/mod_setenvif.so </IfModule> +LoadModule http2_module modules/mod_http2.so +Protocols h2c + <IfVersion < 2.4> LockFile accept.lock </IfVersion> @@ -64,8 +67,8 @@ LockFile accept.lock <IfModule !mod_access_compat.c> LoadModule access_compat_module modules/mod_access_compat.so </IfModule> -<IfModule !mod_mpm_prefork.c> - LoadModule mpm_prefork_module modules/mod_mpm_prefork.so +<IfModule !mod_mpm_event.c> + LoadModule mpm_event_module modules/mod_mpm_event.so </IfModule> <IfModule !mod_unixd.c> LoadModule unixd_module modules/mod_unixd.so diff --git a/t/t5551-http-fetch-smart.sh b/t/t5551-http-fetch-smart.sh index 1c2a444ae7..ff74f0ae8a 100755 --- a/t/t5551-http-fetch-smart.sh +++ b/t/t5551-http-fetch-smart.sh @@ -24,6 +24,10 @@ test_expect_success 'create http-accessible bare repository' ' git push public main:main ' +test_expect_success 'prefer http/2' ' + git config --global http.version HTTP/2 +' + setup_askpass_helper test_expect_success 'clone http repository' ' but this has a few issues: - it's not necessarily portable. The http2 apache module might not be available on all systems. Further, the http2 module isn't compatible with the prefork mpm, so we have to switch to something else. But we don't necessarily know what's available. It would be nice if we could have conditional config, but IfModule only tells us if a module is already loaded, not whether it is available at all. This might be a non-issue. The http tests are already optional, and modern-enough systems may just have both of these. But... - if we do this, then we'd no longer be testing HTTP/1.1 at all. I'm not sure how much that matters since it's all handled by curl under the hood, but I'd worry that some detail leaks through. We'd probably want two scripts running similar tests, one with HTTP/2 and one with HTTP/1.1. - speaking of which, a later test fails with the patch above! The problem is that it is making sure we used a chunked transfer-encoding by looking for that header in the trace. But HTTP/2 doesn't support that, as it has its own streaming mechanisms (the overall operation works fine; we just don't see the header in the trace). Furthermore, even with the changes above, this test still does not detect the current failure, because we see _both_ HTTP/1.1 and HTTP/2 requests, which confuse it. Quoting only the interesting bits from the resulting trace file, we first see: => Send header: GET /auth/smart/repo.git/info/refs?service=git-upload-pack HTTP/1.1 => Send header: Connection: Upgrade, HTTP2-Settings => Send header: Upgrade: h2c => Send header: HTTP2-Settings: AAMAAABkAAQCAAAAAAIAAAAA <= Recv header: HTTP/1.1 401 Unauthorized <= Recv header: Date: Wed, 22 Sep 2021 20:03:32 GMT <= Recv header: Server: Apache/2.4.49 (Debian) <= Recv header: WWW-Authenticate: Basic realm="git-auth" So the client asks for HTTP/2, but Apache does not do the upgrade for the 401 response. Then the client repeats with credentials: => Send header: GET /auth/smart/repo.git/info/refs?service=git-upload-pack HTTP/1.1 => Send header: Authorization: Basic <redacted> => Send header: Connection: Upgrade, HTTP2-Settings => Send header: Upgrade: h2c => Send header: HTTP2-Settings: AAMAAABkAAQCAAAAAAIAAAAA <= Recv header: HTTP/1.1 101 Switching Protocols <= Recv header: Upgrade: h2c <= Recv header: Connection: Upgrade <= Recv header: HTTP/2 200 <= Recv header: content-type: application/x-git-upload-pack-advertisement So the client does properly redact there, because we're speaking HTTP/1.1, and the server indicates it can do the upgrade. And then the client will make further requests using HTTP/2: => Send header: POST /auth/smart/repo.git/git-upload-pack HTTP/2 => Send header: authorization: Basic dXNlckBob3N0OnBhc3NAaG9zdA== => Send header: content-type: application/x-git-upload-pack-request And there we can see that the credential is _not_ redacted. This part of the test is what gets confused: # Ensure that there is no "Basic" followed by a base64 string, but that # the auth details are redacted ! grep "Authorization: Basic [0-9a-zA-Z+/]" trace && grep "Authorization: Basic <redacted>" trace The first grep does not match the un-redacted HTTP/2 header, because it insists on an uppercase "A". And the second one does find the HTTP/1.1 header. So as far as the test is concerned, everything is OK, but it failed to notice the un-redacted lines. We can make this test (and the other related ones) more robust by adding "-i" to grep case-insensitively. This isn't really doing anything for now, since we're not actually speaking HTTP/2, but it future-proofs the tests for a day when we do (either we add explicit HTTP/2 test support, or it's eventually enabled by default by our Apache+curl test setup). And it doesn't hurt in the meantime for the tests to be more careful. The change to use "grep -i", coupled with the changes to use HTTP/2 shown above, causes the test to fail with the current code, and pass after this patch is applied. And finally, there's one other way to demonstrate the issue (and how I actually found it originally). Looking at GIT_TRACE_CURL output against github.com, you'll see the unredacted output, even if you didn't set http.version. That's because setting it is only necessary for curl to send the extra headers in its HTTP/1.1 request that say "Hey, I speak HTTP/2; upgrade if you do, too". But for a production site speaking https, the server advertises via ALPN, a TLS extension, that it supports HTTP/2, and the client can immediately start using it. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-09-23 00:11:53 +02:00
! grep -i "Authorization: Basic [0-9a-zA-Z+/]" trace &&
grep -i "Authorization: Basic <redacted>" trace
'
test_expect_success 'GIT_CURL_VERBOSE redacts auth details' '
rm -rf redact-auth trace &&
set_askpass user@host pass@host &&
GIT_CURL_VERBOSE=1 git clone --bare "$HTTPD_URL/auth/smart/repo.git" redact-auth 2>trace &&
expect_askpass both user@host &&
# Ensure that there is no "Basic" followed by a base64 string, but that
# the auth details are redacted
http: match headers case-insensitively when redacting When HTTP/2 is in use, we fail to correctly redact "Authorization" (and other) headers in our GIT_TRACE_CURL output. We get the headers in our CURLOPT_DEBUGFUNCTION callback, curl_trace(). It passes them along to curl_dump_header(), which in turn checks redact_sensitive_header(). We see the headers as a text buffer like: Host: ... Authorization: Basic ... After breaking it into lines, we match each header using skip_prefix(). This is case-sensitive, even though HTTP headers are case-insensitive. This has worked reliably in the past because these headers are generated by curl itself, which is predictable in what it sends. But when HTTP/2 is in use, instead we get a lower-case "authorization:" header, and we fail to match it. The fix is simple: we should match with skip_iprefix(). Testing is more complicated, though. We do have a test for the redacting feature, but we don't hit the problem case because our test Apache setup does not understand HTTP/2. You can reproduce the issue by applying this on top of the test change in this patch: diff --git a/t/lib-httpd/apache.conf b/t/lib-httpd/apache.conf index afa91e38b0..19267c7107 100644 --- a/t/lib-httpd/apache.conf +++ b/t/lib-httpd/apache.conf @@ -29,6 +29,9 @@ ErrorLog error.log LoadModule setenvif_module modules/mod_setenvif.so </IfModule> +LoadModule http2_module modules/mod_http2.so +Protocols h2c + <IfVersion < 2.4> LockFile accept.lock </IfVersion> @@ -64,8 +67,8 @@ LockFile accept.lock <IfModule !mod_access_compat.c> LoadModule access_compat_module modules/mod_access_compat.so </IfModule> -<IfModule !mod_mpm_prefork.c> - LoadModule mpm_prefork_module modules/mod_mpm_prefork.so +<IfModule !mod_mpm_event.c> + LoadModule mpm_event_module modules/mod_mpm_event.so </IfModule> <IfModule !mod_unixd.c> LoadModule unixd_module modules/mod_unixd.so diff --git a/t/t5551-http-fetch-smart.sh b/t/t5551-http-fetch-smart.sh index 1c2a444ae7..ff74f0ae8a 100755 --- a/t/t5551-http-fetch-smart.sh +++ b/t/t5551-http-fetch-smart.sh @@ -24,6 +24,10 @@ test_expect_success 'create http-accessible bare repository' ' git push public main:main ' +test_expect_success 'prefer http/2' ' + git config --global http.version HTTP/2 +' + setup_askpass_helper test_expect_success 'clone http repository' ' but this has a few issues: - it's not necessarily portable. The http2 apache module might not be available on all systems. Further, the http2 module isn't compatible with the prefork mpm, so we have to switch to something else. But we don't necessarily know what's available. It would be nice if we could have conditional config, but IfModule only tells us if a module is already loaded, not whether it is available at all. This might be a non-issue. The http tests are already optional, and modern-enough systems may just have both of these. But... - if we do this, then we'd no longer be testing HTTP/1.1 at all. I'm not sure how much that matters since it's all handled by curl under the hood, but I'd worry that some detail leaks through. We'd probably want two scripts running similar tests, one with HTTP/2 and one with HTTP/1.1. - speaking of which, a later test fails with the patch above! The problem is that it is making sure we used a chunked transfer-encoding by looking for that header in the trace. But HTTP/2 doesn't support that, as it has its own streaming mechanisms (the overall operation works fine; we just don't see the header in the trace). Furthermore, even with the changes above, this test still does not detect the current failure, because we see _both_ HTTP/1.1 and HTTP/2 requests, which confuse it. Quoting only the interesting bits from the resulting trace file, we first see: => Send header: GET /auth/smart/repo.git/info/refs?service=git-upload-pack HTTP/1.1 => Send header: Connection: Upgrade, HTTP2-Settings => Send header: Upgrade: h2c => Send header: HTTP2-Settings: AAMAAABkAAQCAAAAAAIAAAAA <= Recv header: HTTP/1.1 401 Unauthorized <= Recv header: Date: Wed, 22 Sep 2021 20:03:32 GMT <= Recv header: Server: Apache/2.4.49 (Debian) <= Recv header: WWW-Authenticate: Basic realm="git-auth" So the client asks for HTTP/2, but Apache does not do the upgrade for the 401 response. Then the client repeats with credentials: => Send header: GET /auth/smart/repo.git/info/refs?service=git-upload-pack HTTP/1.1 => Send header: Authorization: Basic <redacted> => Send header: Connection: Upgrade, HTTP2-Settings => Send header: Upgrade: h2c => Send header: HTTP2-Settings: AAMAAABkAAQCAAAAAAIAAAAA <= Recv header: HTTP/1.1 101 Switching Protocols <= Recv header: Upgrade: h2c <= Recv header: Connection: Upgrade <= Recv header: HTTP/2 200 <= Recv header: content-type: application/x-git-upload-pack-advertisement So the client does properly redact there, because we're speaking HTTP/1.1, and the server indicates it can do the upgrade. And then the client will make further requests using HTTP/2: => Send header: POST /auth/smart/repo.git/git-upload-pack HTTP/2 => Send header: authorization: Basic dXNlckBob3N0OnBhc3NAaG9zdA== => Send header: content-type: application/x-git-upload-pack-request And there we can see that the credential is _not_ redacted. This part of the test is what gets confused: # Ensure that there is no "Basic" followed by a base64 string, but that # the auth details are redacted ! grep "Authorization: Basic [0-9a-zA-Z+/]" trace && grep "Authorization: Basic <redacted>" trace The first grep does not match the un-redacted HTTP/2 header, because it insists on an uppercase "A". And the second one does find the HTTP/1.1 header. So as far as the test is concerned, everything is OK, but it failed to notice the un-redacted lines. We can make this test (and the other related ones) more robust by adding "-i" to grep case-insensitively. This isn't really doing anything for now, since we're not actually speaking HTTP/2, but it future-proofs the tests for a day when we do (either we add explicit HTTP/2 test support, or it's eventually enabled by default by our Apache+curl test setup). And it doesn't hurt in the meantime for the tests to be more careful. The change to use "grep -i", coupled with the changes to use HTTP/2 shown above, causes the test to fail with the current code, and pass after this patch is applied. And finally, there's one other way to demonstrate the issue (and how I actually found it originally). Looking at GIT_TRACE_CURL output against github.com, you'll see the unredacted output, even if you didn't set http.version. That's because setting it is only necessary for curl to send the extra headers in its HTTP/1.1 request that say "Hey, I speak HTTP/2; upgrade if you do, too". But for a production site speaking https, the server advertises via ALPN, a TLS extension, that it supports HTTP/2, and the client can immediately start using it. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-09-23 00:11:53 +02:00
! grep -i "Authorization: Basic [0-9a-zA-Z+/]" trace &&
grep -i "Authorization: Basic <redacted>" trace
'
test_expect_success 'GIT_TRACE_CURL does not redact auth details if GIT_TRACE_REDACT=0' '
rm -rf redact-auth trace &&
set_askpass user@host pass@host &&
GIT_TRACE_REDACT=0 GIT_TRACE_CURL="$(pwd)/trace" \
git clone --bare "$HTTPD_URL/auth/smart/repo.git" redact-auth &&
expect_askpass both user@host &&
http: match headers case-insensitively when redacting When HTTP/2 is in use, we fail to correctly redact "Authorization" (and other) headers in our GIT_TRACE_CURL output. We get the headers in our CURLOPT_DEBUGFUNCTION callback, curl_trace(). It passes them along to curl_dump_header(), which in turn checks redact_sensitive_header(). We see the headers as a text buffer like: Host: ... Authorization: Basic ... After breaking it into lines, we match each header using skip_prefix(). This is case-sensitive, even though HTTP headers are case-insensitive. This has worked reliably in the past because these headers are generated by curl itself, which is predictable in what it sends. But when HTTP/2 is in use, instead we get a lower-case "authorization:" header, and we fail to match it. The fix is simple: we should match with skip_iprefix(). Testing is more complicated, though. We do have a test for the redacting feature, but we don't hit the problem case because our test Apache setup does not understand HTTP/2. You can reproduce the issue by applying this on top of the test change in this patch: diff --git a/t/lib-httpd/apache.conf b/t/lib-httpd/apache.conf index afa91e38b0..19267c7107 100644 --- a/t/lib-httpd/apache.conf +++ b/t/lib-httpd/apache.conf @@ -29,6 +29,9 @@ ErrorLog error.log LoadModule setenvif_module modules/mod_setenvif.so </IfModule> +LoadModule http2_module modules/mod_http2.so +Protocols h2c + <IfVersion < 2.4> LockFile accept.lock </IfVersion> @@ -64,8 +67,8 @@ LockFile accept.lock <IfModule !mod_access_compat.c> LoadModule access_compat_module modules/mod_access_compat.so </IfModule> -<IfModule !mod_mpm_prefork.c> - LoadModule mpm_prefork_module modules/mod_mpm_prefork.so +<IfModule !mod_mpm_event.c> + LoadModule mpm_event_module modules/mod_mpm_event.so </IfModule> <IfModule !mod_unixd.c> LoadModule unixd_module modules/mod_unixd.so diff --git a/t/t5551-http-fetch-smart.sh b/t/t5551-http-fetch-smart.sh index 1c2a444ae7..ff74f0ae8a 100755 --- a/t/t5551-http-fetch-smart.sh +++ b/t/t5551-http-fetch-smart.sh @@ -24,6 +24,10 @@ test_expect_success 'create http-accessible bare repository' ' git push public main:main ' +test_expect_success 'prefer http/2' ' + git config --global http.version HTTP/2 +' + setup_askpass_helper test_expect_success 'clone http repository' ' but this has a few issues: - it's not necessarily portable. The http2 apache module might not be available on all systems. Further, the http2 module isn't compatible with the prefork mpm, so we have to switch to something else. But we don't necessarily know what's available. It would be nice if we could have conditional config, but IfModule only tells us if a module is already loaded, not whether it is available at all. This might be a non-issue. The http tests are already optional, and modern-enough systems may just have both of these. But... - if we do this, then we'd no longer be testing HTTP/1.1 at all. I'm not sure how much that matters since it's all handled by curl under the hood, but I'd worry that some detail leaks through. We'd probably want two scripts running similar tests, one with HTTP/2 and one with HTTP/1.1. - speaking of which, a later test fails with the patch above! The problem is that it is making sure we used a chunked transfer-encoding by looking for that header in the trace. But HTTP/2 doesn't support that, as it has its own streaming mechanisms (the overall operation works fine; we just don't see the header in the trace). Furthermore, even with the changes above, this test still does not detect the current failure, because we see _both_ HTTP/1.1 and HTTP/2 requests, which confuse it. Quoting only the interesting bits from the resulting trace file, we first see: => Send header: GET /auth/smart/repo.git/info/refs?service=git-upload-pack HTTP/1.1 => Send header: Connection: Upgrade, HTTP2-Settings => Send header: Upgrade: h2c => Send header: HTTP2-Settings: AAMAAABkAAQCAAAAAAIAAAAA <= Recv header: HTTP/1.1 401 Unauthorized <= Recv header: Date: Wed, 22 Sep 2021 20:03:32 GMT <= Recv header: Server: Apache/2.4.49 (Debian) <= Recv header: WWW-Authenticate: Basic realm="git-auth" So the client asks for HTTP/2, but Apache does not do the upgrade for the 401 response. Then the client repeats with credentials: => Send header: GET /auth/smart/repo.git/info/refs?service=git-upload-pack HTTP/1.1 => Send header: Authorization: Basic <redacted> => Send header: Connection: Upgrade, HTTP2-Settings => Send header: Upgrade: h2c => Send header: HTTP2-Settings: AAMAAABkAAQCAAAAAAIAAAAA <= Recv header: HTTP/1.1 101 Switching Protocols <= Recv header: Upgrade: h2c <= Recv header: Connection: Upgrade <= Recv header: HTTP/2 200 <= Recv header: content-type: application/x-git-upload-pack-advertisement So the client does properly redact there, because we're speaking HTTP/1.1, and the server indicates it can do the upgrade. And then the client will make further requests using HTTP/2: => Send header: POST /auth/smart/repo.git/git-upload-pack HTTP/2 => Send header: authorization: Basic dXNlckBob3N0OnBhc3NAaG9zdA== => Send header: content-type: application/x-git-upload-pack-request And there we can see that the credential is _not_ redacted. This part of the test is what gets confused: # Ensure that there is no "Basic" followed by a base64 string, but that # the auth details are redacted ! grep "Authorization: Basic [0-9a-zA-Z+/]" trace && grep "Authorization: Basic <redacted>" trace The first grep does not match the un-redacted HTTP/2 header, because it insists on an uppercase "A". And the second one does find the HTTP/1.1 header. So as far as the test is concerned, everything is OK, but it failed to notice the un-redacted lines. We can make this test (and the other related ones) more robust by adding "-i" to grep case-insensitively. This isn't really doing anything for now, since we're not actually speaking HTTP/2, but it future-proofs the tests for a day when we do (either we add explicit HTTP/2 test support, or it's eventually enabled by default by our Apache+curl test setup). And it doesn't hurt in the meantime for the tests to be more careful. The change to use "grep -i", coupled with the changes to use HTTP/2 shown above, causes the test to fail with the current code, and pass after this patch is applied. And finally, there's one other way to demonstrate the issue (and how I actually found it originally). Looking at GIT_TRACE_CURL output against github.com, you'll see the unredacted output, even if you didn't set http.version. That's because setting it is only necessary for curl to send the extra headers in its HTTP/1.1 request that say "Hey, I speak HTTP/2; upgrade if you do, too". But for a production site speaking https, the server advertises via ALPN, a TLS extension, that it supports HTTP/2, and the client can immediately start using it. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-09-23 00:11:53 +02:00
grep -i "Authorization: Basic [0-9a-zA-Z+/]" trace
'
test_expect_success 'disable dumb http on server' '
git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH/repo.git" \
config http.getanyfile false
'
test_expect_success 'GIT_SMART_HTTP can disable smart http' '
(GIT_SMART_HTTP=0 &&
export GIT_SMART_HTTP &&
cd clone &&
test_must_fail git fetch)
'
test_expect_success 'invalid Content-Type rejected' '
test_must_fail git clone $HTTPD_URL/broken_smart/repo.git 2>actual &&
test_i18ngrep "not valid:" actual
'
test_expect_success 'create namespaced refs' '
test_commit namespaced &&
git push public HEAD:refs/namespaces/ns/refs/heads/main &&
git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH/repo.git" \
symbolic-ref refs/namespaces/ns/HEAD refs/namespaces/ns/refs/heads/main
'
test_expect_success 'smart clone respects namespace' '
git clone "$HTTPD_URL/smart_namespace/repo.git" ns-smart &&
echo namespaced >expect &&
git --git-dir=ns-smart/.git log -1 --format=%s >actual &&
test_cmp expect actual
'
test_expect_success 'dumb clone via http-backend respects namespace' '
git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH/repo.git" \
config http.getanyfile true &&
GIT_SMART_HTTP=0 git clone \
"$HTTPD_URL/smart_namespace/repo.git" ns-dumb &&
echo namespaced >expect &&
git --git-dir=ns-dumb/.git log -1 --format=%s >actual &&
test_cmp expect actual
'
test_expect_success 'cookies stored in http.cookiefile when http.savecookies set' '
cat >cookies.txt <<-\EOF &&
127.0.0.1 FALSE /smart_cookies/ FALSE 0 othername othervalue
EOF
sort >expect_cookies.txt <<-\EOF &&
127.0.0.1 FALSE /smart_cookies/ FALSE 0 othername othervalue
127.0.0.1 FALSE /smart_cookies/repo.git/ FALSE 0 name value
127.0.0.1 FALSE /smart_cookies/repo.git/info/ FALSE 0 name value
EOF
git config http.cookiefile cookies.txt &&
git config http.savecookies true &&
test_when_finished "
git --git-dir=\"\$HTTPD_DOCUMENT_ROOT_PATH/repo.git\" \
tag -d cookie-tag
" &&
git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH/repo.git" \
tag -m "foo" cookie-tag &&
git fetch $HTTPD_URL/smart_cookies/repo.git cookie-tag &&
grep "^[^#]" cookies.txt | sort >cookies_stripped.txt &&
test_cmp expect_cookies.txt cookies_stripped.txt
'
test_expect_success 'transfer.hiderefs works over smart-http' '
test_commit hidden &&
test_commit visible &&
git push public HEAD^:refs/heads/a HEAD:refs/heads/b &&
git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH/repo.git" \
config transfer.hiderefs refs/heads/a &&
git clone --bare "$HTTPD_URL/smart/repo.git" hidden.git &&
test_must_fail git -C hidden.git rev-parse --verify a &&
git -C hidden.git rev-parse --verify b
'
# create an arbitrary number of tags, numbered from tag-$1 to tag-$2
create_tags () {
rm -f marks &&
for i in $(test_seq "$1" "$2")
do
# don't use here-doc, because it requires a process
# per loop iteration
echo "commit refs/heads/too-many-refs-$1" &&
echo "mark :$i" &&
echo "committer git <git@example.com> $i +0000" &&
echo "data 0" &&
echo "M 644 inline bla.txt" &&
echo "data 4" &&
echo "bla" &&
# make every commit dangling by always
# rewinding the branch after each commit
echo "reset refs/heads/too-many-refs-$1" &&
echo "from :$1"
done | git fast-import --export-marks=marks &&
# now assign tags to all the dangling commits we created above
tag=$(perl -e "print \"bla\" x 30") &&
sed -e "s|^:\([^ ]*\) \(.*\)$|\2 refs/tags/$tag-\1|" <marks >>packed-refs
}
test_expect_success 'create 2,000 tags in the repo' '
(
cd "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
create_tags 1 2000
)
'
test_expect_success CMDLINE_LIMIT \
'clone the 2,000 tag repo to check OS command line overflow' '
run_with_limited_cmdline git clone $HTTPD_URL/smart/repo.git too-many-refs &&
(
cd too-many-refs &&
git for-each-ref refs/tags >actual &&
test_line_count = 2000 actual
)
'
test_expect_success 'large fetch-pack requests can be sent using chunked encoding' '
GIT_TRACE_CURL=true git -c http.postbuffer=65536 \
clone --bare "$HTTPD_URL/smart/repo.git" split.git 2>err &&
t: run t5551 tests with both HTTP and HTTP/2 We have occasionally seen bugs that affect Git running only against an HTTP/2 web server, not an HTTP one. For instance, b66c77a64e (http: match headers case-insensitively when redacting, 2021-09-22). But since we have no test coverage using HTTP/2, we only uncover these bugs in the wild. That commit gives a recipe for converting our Apache setup to support HTTP/2, but: - it's not necessarily portable - we don't want to just test HTTP/2; we really want to do a variety of basic tests for _both_ protocols This patch handles both problems by running a duplicate of t5551 (labeled as t5559 here) with an alternate-universe setup that enables HTTP/2. So we'll continue to run t5551 as before, but run the same battery of tests again with HTTP/2. If HTTP/2 isn't supported on a given platform, then t5559 should bail during the webserver setup, and gracefully skip all tests (unless GIT_TEST_HTTPD has been changed from "auto" to "yes", where the point is to complain when webserver setup fails). In theory other http-related test scripts could benefit from the same duplication, but doing t5551 should give us a reasonable check of basic functionality, and would have caught both bugs we've seen in the wild with HTTP/2. A few notes on the implementation: - a script enables the server side config by calling enable_http2 before starting the webserver. This avoids even trying to load any HTTP/2 config for t5551 (which is what lets it keep working with regular HTTP even on systems that don't support it). This also sets a prereq which can be used by individual tests. - As discussed in b66c77a64e, the http2 module isn't compatible with the "prefork" mpm, so we need to pick something else. I chose "event" here, which works on my Debian system, but it's possible there are platforms which would prefer something else. We can adjust that later if somebody finds such a platform. - The test "large fetch-pack requests can be sent using chunked encoding" makes sure we use a chunked transfer-encoding by looking for that header in the trace. But since HTTP/2 has its own streaming mechanisms, we won't find such a header. We could skip the test entirely by marking it with !HTTP2. But there's some value in making sure that the fetch itself succeeded. So instead, we'll confirm that either we're using HTTP2 _or_ we saw the expected chunked header. - the redaction tests fail under HTTP/2 with recent versions of curl. This is a bug! I've marked them with !HTTP2 here to skip them under t5559 for the moment. Using test_expect_failure would be more appropriate, but would require a bunch of boilerplate. Since we'll be fixing them momentarily, let's just skip them for now to keep the test suite bisectable, and we can re-enable them in the commit that fixes the bug. - one alternative layout would be to push most of t5551 into a lib-t5551.sh script, then source it from both t5551 and t5559. Keeping t5551 intact seemed a little simpler, as its one less level of indirection for people fixing bugs/regressions in the non-HTTP/2 tests. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-11 23:35:05 +01:00
{
test_have_prereq HTTP2 ||
grep "^=> Send header: Transfer-Encoding: chunked" err
}
'
remote-curl: don't hang when a server dies before any output In the event that a HTTP server closes the connection after giving a 200 but before giving any packets, we don't want to hang forever waiting for a response that will never come. Instead, we should die immediately. One case where this happens is when attempting to fetch a dangling object by its object name. In this case, the server dies before sending any data. Prior to this patch, fetch-pack would wait for data from the server, and remote-curl would wait for fetch-pack, causing a deadlock. Despite this patch, there is other possible malformed input that could cause the same deadlock (e.g. a half-finished pktline, or a pktline but no trailing flush). There are a few possible solutions to this: 1. Allowing remote-curl to tell fetch-pack about the EOF (so that fetch-pack could know that no more data is coming until it says something else). This is tricky because an out-of-band signal would be required, or the http response would have to be re-framed inside another layer of pkt-line or something. 2. Make remote-curl understand some of the protocol. It turns out that in addition to understanding pkt-line, it would need to watch for ack/nak. This is somewhat fragile, as information about the protocol would end up in two places. Also, pkt-lines which are already at the length limit would need special handling. Both of these solutions would require a fair amount of work, whereas this hack is easy and solves at least some of the problem. Still to do: it would be good to give a better error message than "fatal: The remote end hung up unexpectedly". Signed-off-by: David Turner <dturner@twosigma.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-11-18 21:30:49 +01:00
test_expect_success 'test allowreachablesha1inwant' '
test_when_finished "rm -rf test_reachable.git" &&
server="$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
main_sha=$(git -C "$server" rev-parse refs/heads/main) &&
remote-curl: don't hang when a server dies before any output In the event that a HTTP server closes the connection after giving a 200 but before giving any packets, we don't want to hang forever waiting for a response that will never come. Instead, we should die immediately. One case where this happens is when attempting to fetch a dangling object by its object name. In this case, the server dies before sending any data. Prior to this patch, fetch-pack would wait for data from the server, and remote-curl would wait for fetch-pack, causing a deadlock. Despite this patch, there is other possible malformed input that could cause the same deadlock (e.g. a half-finished pktline, or a pktline but no trailing flush). There are a few possible solutions to this: 1. Allowing remote-curl to tell fetch-pack about the EOF (so that fetch-pack could know that no more data is coming until it says something else). This is tricky because an out-of-band signal would be required, or the http response would have to be re-framed inside another layer of pkt-line or something. 2. Make remote-curl understand some of the protocol. It turns out that in addition to understanding pkt-line, it would need to watch for ack/nak. This is somewhat fragile, as information about the protocol would end up in two places. Also, pkt-lines which are already at the length limit would need special handling. Both of these solutions would require a fair amount of work, whereas this hack is easy and solves at least some of the problem. Still to do: it would be good to give a better error message than "fatal: The remote end hung up unexpectedly". Signed-off-by: David Turner <dturner@twosigma.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-11-18 21:30:49 +01:00
git -C "$server" config uploadpack.allowreachablesha1inwant 1 &&
git init --bare test_reachable.git &&
git -C test_reachable.git remote add origin "$HTTPD_URL/smart/repo.git" &&
git -C test_reachable.git fetch origin "$main_sha"
remote-curl: don't hang when a server dies before any output In the event that a HTTP server closes the connection after giving a 200 but before giving any packets, we don't want to hang forever waiting for a response that will never come. Instead, we should die immediately. One case where this happens is when attempting to fetch a dangling object by its object name. In this case, the server dies before sending any data. Prior to this patch, fetch-pack would wait for data from the server, and remote-curl would wait for fetch-pack, causing a deadlock. Despite this patch, there is other possible malformed input that could cause the same deadlock (e.g. a half-finished pktline, or a pktline but no trailing flush). There are a few possible solutions to this: 1. Allowing remote-curl to tell fetch-pack about the EOF (so that fetch-pack could know that no more data is coming until it says something else). This is tricky because an out-of-band signal would be required, or the http response would have to be re-framed inside another layer of pkt-line or something. 2. Make remote-curl understand some of the protocol. It turns out that in addition to understanding pkt-line, it would need to watch for ack/nak. This is somewhat fragile, as information about the protocol would end up in two places. Also, pkt-lines which are already at the length limit would need special handling. Both of these solutions would require a fair amount of work, whereas this hack is easy and solves at least some of the problem. Still to do: it would be good to give a better error message than "fatal: The remote end hung up unexpectedly". Signed-off-by: David Turner <dturner@twosigma.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-11-18 21:30:49 +01:00
'
test_expect_success 'test allowreachablesha1inwant with unreachable' '
test_when_finished "rm -rf test_reachable.git; git reset --hard $(git rev-parse HEAD)" &&
#create unreachable sha
echo content >file2 &&
git add file2 &&
git commit -m two &&
git push public HEAD:refs/heads/doomed &&
git push public :refs/heads/doomed &&
server="$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
main_sha=$(git -C "$server" rev-parse refs/heads/main) &&
remote-curl: don't hang when a server dies before any output In the event that a HTTP server closes the connection after giving a 200 but before giving any packets, we don't want to hang forever waiting for a response that will never come. Instead, we should die immediately. One case where this happens is when attempting to fetch a dangling object by its object name. In this case, the server dies before sending any data. Prior to this patch, fetch-pack would wait for data from the server, and remote-curl would wait for fetch-pack, causing a deadlock. Despite this patch, there is other possible malformed input that could cause the same deadlock (e.g. a half-finished pktline, or a pktline but no trailing flush). There are a few possible solutions to this: 1. Allowing remote-curl to tell fetch-pack about the EOF (so that fetch-pack could know that no more data is coming until it says something else). This is tricky because an out-of-band signal would be required, or the http response would have to be re-framed inside another layer of pkt-line or something. 2. Make remote-curl understand some of the protocol. It turns out that in addition to understanding pkt-line, it would need to watch for ack/nak. This is somewhat fragile, as information about the protocol would end up in two places. Also, pkt-lines which are already at the length limit would need special handling. Both of these solutions would require a fair amount of work, whereas this hack is easy and solves at least some of the problem. Still to do: it would be good to give a better error message than "fatal: The remote end hung up unexpectedly". Signed-off-by: David Turner <dturner@twosigma.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-11-18 21:30:49 +01:00
git -C "$server" config uploadpack.allowreachablesha1inwant 1 &&
git init --bare test_reachable.git &&
git -C test_reachable.git remote add origin "$HTTPD_URL/smart/repo.git" &&
# Some protocol versions (e.g. 2) support fetching
# unadvertised objects, so restrict this test to v0.
test_must_fail env GIT_TEST_PROTOCOL_VERSION=0 \
git -C test_reachable.git fetch origin "$(git rev-parse HEAD)"
remote-curl: don't hang when a server dies before any output In the event that a HTTP server closes the connection after giving a 200 but before giving any packets, we don't want to hang forever waiting for a response that will never come. Instead, we should die immediately. One case where this happens is when attempting to fetch a dangling object by its object name. In this case, the server dies before sending any data. Prior to this patch, fetch-pack would wait for data from the server, and remote-curl would wait for fetch-pack, causing a deadlock. Despite this patch, there is other possible malformed input that could cause the same deadlock (e.g. a half-finished pktline, or a pktline but no trailing flush). There are a few possible solutions to this: 1. Allowing remote-curl to tell fetch-pack about the EOF (so that fetch-pack could know that no more data is coming until it says something else). This is tricky because an out-of-band signal would be required, or the http response would have to be re-framed inside another layer of pkt-line or something. 2. Make remote-curl understand some of the protocol. It turns out that in addition to understanding pkt-line, it would need to watch for ack/nak. This is somewhat fragile, as information about the protocol would end up in two places. Also, pkt-lines which are already at the length limit would need special handling. Both of these solutions would require a fair amount of work, whereas this hack is easy and solves at least some of the problem. Still to do: it would be good to give a better error message than "fatal: The remote end hung up unexpectedly". Signed-off-by: David Turner <dturner@twosigma.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-11-18 21:30:49 +01:00
'
test_expect_success 'test allowanysha1inwant with unreachable' '
test_when_finished "rm -rf test_reachable.git; git reset --hard $(git rev-parse HEAD)" &&
#create unreachable sha
echo content >file2 &&
git add file2 &&
git commit -m two &&
git push public HEAD:refs/heads/doomed &&
git push public :refs/heads/doomed &&
server="$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
main_sha=$(git -C "$server" rev-parse refs/heads/main) &&
git -C "$server" config uploadpack.allowreachablesha1inwant 1 &&
git init --bare test_reachable.git &&
git -C test_reachable.git remote add origin "$HTTPD_URL/smart/repo.git" &&
# Some protocol versions (e.g. 2) support fetching
# unadvertised objects, so restrict this test to v0.
test_must_fail env GIT_TEST_PROTOCOL_VERSION=0 \
git -C test_reachable.git fetch origin "$(git rev-parse HEAD)" &&
git -C "$server" config uploadpack.allowanysha1inwant 1 &&
git -C test_reachable.git fetch origin "$(git rev-parse HEAD)"
'
http-backend: spool ref negotiation requests to buffer When http-backend spawns "upload-pack" to do ref negotiation, it streams the http request body to upload-pack, who then streams the http response back to the client as it reads. In theory, git can go full-duplex; the client can consume our response while it is still sending the request. In practice, however, HTTP is a half-duplex protocol. Even if our client is ready to read and write simultaneously, we may have other HTTP infrastructure in the way, including the webserver that spawns our CGI, or any intermediate proxies. In at least one documented case[1], this leads to deadlock when trying a fetch over http. What happens is basically: 1. Apache proxies the request to the CGI, http-backend. 2. http-backend gzip-inflates the data and sends the result to upload-pack. 3. upload-pack acts on the data and generates output over the pipe back to Apache. Apache isn't reading because it's busy writing (step 1). This works fine most of the time, because the upload-pack output ends up in a system pipe buffer, and Apache reads it as soon as it finishes writing. But if both the request and the response exceed the system pipe buffer size, then we deadlock (Apache blocks writing to http-backend, http-backend blocks writing to upload-pack, and upload-pack blocks writing to Apache). We need to break the deadlock by spooling either the input or the output. In this case, it's ideal to spool the input, because Apache does not start reading either stdout _or_ stderr until we have consumed all of the input. So until we do so, we cannot even get an error message out to the client. The solution is fairly straight-forward: we read the request body into an in-memory buffer in http-backend, freeing up Apache, and then feed the data ourselves to upload-pack. But there are a few important things to note: 1. We limit the in-memory buffer to prevent an obvious denial-of-service attack. This is a new hard limit on requests, but it's unlikely to come into play. The default value is 10MB, which covers even the ridiculous 100,000-ref negotation in the included test (that actually caps out just over 5MB). But it's configurable on the off chance that you don't mind spending some extra memory to make even ridiculous requests work. 2. We must take care only to buffer when we have to. For pushes, the incoming packfile may be of arbitrary size, and we should connect the input directly to receive-pack. There's no deadlock problem here, though, because we do not produce any output until the whole packfile has been read. For upload-pack's initial ref advertisement, we similarly do not need to buffer. Even though we may generate a lot of output, there is no request body at all (i.e., it is a GET, not a POST). [1] http://article.gmane.org/gmane.comp.version-control.git/269020 Test-adapted-from: Dennis Kaarsemaker <dennis@kaarsemaker.net> Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-05-20 09:37:09 +02:00
test_expect_success EXPENSIVE 'http can handle enormous ref negotiation' '
(
cd "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
create_tags 2001 50000
) &&
http-backend: spool ref negotiation requests to buffer When http-backend spawns "upload-pack" to do ref negotiation, it streams the http request body to upload-pack, who then streams the http response back to the client as it reads. In theory, git can go full-duplex; the client can consume our response while it is still sending the request. In practice, however, HTTP is a half-duplex protocol. Even if our client is ready to read and write simultaneously, we may have other HTTP infrastructure in the way, including the webserver that spawns our CGI, or any intermediate proxies. In at least one documented case[1], this leads to deadlock when trying a fetch over http. What happens is basically: 1. Apache proxies the request to the CGI, http-backend. 2. http-backend gzip-inflates the data and sends the result to upload-pack. 3. upload-pack acts on the data and generates output over the pipe back to Apache. Apache isn't reading because it's busy writing (step 1). This works fine most of the time, because the upload-pack output ends up in a system pipe buffer, and Apache reads it as soon as it finishes writing. But if both the request and the response exceed the system pipe buffer size, then we deadlock (Apache blocks writing to http-backend, http-backend blocks writing to upload-pack, and upload-pack blocks writing to Apache). We need to break the deadlock by spooling either the input or the output. In this case, it's ideal to spool the input, because Apache does not start reading either stdout _or_ stderr until we have consumed all of the input. So until we do so, we cannot even get an error message out to the client. The solution is fairly straight-forward: we read the request body into an in-memory buffer in http-backend, freeing up Apache, and then feed the data ourselves to upload-pack. But there are a few important things to note: 1. We limit the in-memory buffer to prevent an obvious denial-of-service attack. This is a new hard limit on requests, but it's unlikely to come into play. The default value is 10MB, which covers even the ridiculous 100,000-ref negotation in the included test (that actually caps out just over 5MB). But it's configurable on the off chance that you don't mind spending some extra memory to make even ridiculous requests work. 2. We must take care only to buffer when we have to. For pushes, the incoming packfile may be of arbitrary size, and we should connect the input directly to receive-pack. There's no deadlock problem here, though, because we do not produce any output until the whole packfile has been read. For upload-pack's initial ref advertisement, we similarly do not need to buffer. Even though we may generate a lot of output, there is no request body at all (i.e., it is a GET, not a POST). [1] http://article.gmane.org/gmane.comp.version-control.git/269020 Test-adapted-from: Dennis Kaarsemaker <dennis@kaarsemaker.net> Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-05-20 09:37:09 +02:00
git -C too-many-refs fetch -q --tags &&
(
cd "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
create_tags 50001 100000
) &&
git -C too-many-refs fetch -q --tags &&
git -C too-many-refs for-each-ref refs/tags >tags &&
test_line_count = 100000 tags
'
test_expect_success 'custom http headers' '
test_must_fail git -c http.extraheader="x-magic-two: cadabra" \
fetch "$HTTPD_URL/smart_headers/repo.git" &&
git -c http.extraheader="x-magic-one: abra" \
-c http.extraheader="x-magic-two: cadabra" \
fetch "$HTTPD_URL/smart_headers/repo.git" &&
git update-index --add --cacheinfo 160000,$(git rev-parse HEAD),sub &&
git config -f .gitmodules submodule.sub.path sub &&
git config -f .gitmodules submodule.sub.url \
"$HTTPD_URL/smart_headers/repo.git" &&
git submodule init sub &&
test_must_fail git submodule update sub &&
git -c http.extraheader="x-magic-one: abra" \
-c http.extraheader="x-magic-two: cadabra" \
submodule update sub
'
fetch-pack: unify ref in and out param When a user fetches: - at least one up-to-date ref and at least one non-up-to-date ref, - using HTTP with protocol v0 (or something else that uses the fetch command of a remote helper) some refs might not be updated after the fetch. This bug was introduced in commit 989b8c4452 ("fetch-pack: put shallow info in output parameter", 2018-06-28) which allowed transports to report the refs that they have fetched in a new out-parameter "fetched_refs". If they do so, transport_fetch_refs() makes this information available to its caller. Users of "fetched_refs" rely on the following 3 properties: (1) it is the complete list of refs that was passed to transport_fetch_refs(), (2) it has shallow information (REF_STATUS_REJECT_SHALLOW set if relevant), and (3) it has updated OIDs if ref-in-want was used (introduced after 989b8c4452). In an effort to satisfy (1), whenever transport_fetch_refs() filters the refs sent to the transport, it re-adds the filtered refs to whatever the transport supplies before returning it to the user. However, the implementation in 989b8c4452 unconditionally re-adds the filtered refs without checking if the transport refrained from reporting anything in "fetched_refs" (which it is allowed to do), resulting in an incomplete list, no longer satisfying (1). An earlier effort to resolve this [1] solved the issue by readding the filtered refs only if the transport did not refrain from reporting in "fetched_refs", but after further discussion, it seems that the better solution is to revert the API change that introduced "fetched_refs". This API change was first suggested as part of a ref-in-want implementation that allowed for ref patterns and, thus, there could be drastic differences between the input refs and the refs actually fetched [2]; we eventually decided to only allow exact ref names, but this API change remained even though its necessity was decreased. Therefore, revert this API change by reverting commit 989b8c4452, and make receive_wanted_refs() update the OIDs in the sought array (like how update_shallow() updates shallow information in the sought array) instead. A test is also included to show that the user-visible bug discussed at the beginning of this commit message no longer exists. [1] https://public-inbox.org/git/20180801171806.GA122458@google.com/ [2] https://public-inbox.org/git/86a128c5fb710a41791e7183207c4d64889f9307.1485381677.git.jonathantanmy@google.com/ Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-08-01 22:13:20 +02:00
test_expect_success 'using fetch command in remote-curl updates refs' '
SERVER="$HTTPD_DOCUMENT_ROOT_PATH/twobranch" &&
rm -rf "$SERVER" client &&
git init "$SERVER" &&
test_commit -C "$SERVER" foo &&
git -C "$SERVER" update-ref refs/heads/anotherbranch foo &&
git clone $HTTPD_URL/smart/twobranch client &&
test_commit -C "$SERVER" bar &&
git -C client -c protocol.version=0 fetch &&
git -C "$SERVER" rev-parse main >expect &&
git -C client rev-parse origin/main >actual &&
fetch-pack: unify ref in and out param When a user fetches: - at least one up-to-date ref and at least one non-up-to-date ref, - using HTTP with protocol v0 (or something else that uses the fetch command of a remote helper) some refs might not be updated after the fetch. This bug was introduced in commit 989b8c4452 ("fetch-pack: put shallow info in output parameter", 2018-06-28) which allowed transports to report the refs that they have fetched in a new out-parameter "fetched_refs". If they do so, transport_fetch_refs() makes this information available to its caller. Users of "fetched_refs" rely on the following 3 properties: (1) it is the complete list of refs that was passed to transport_fetch_refs(), (2) it has shallow information (REF_STATUS_REJECT_SHALLOW set if relevant), and (3) it has updated OIDs if ref-in-want was used (introduced after 989b8c4452). In an effort to satisfy (1), whenever transport_fetch_refs() filters the refs sent to the transport, it re-adds the filtered refs to whatever the transport supplies before returning it to the user. However, the implementation in 989b8c4452 unconditionally re-adds the filtered refs without checking if the transport refrained from reporting anything in "fetched_refs" (which it is allowed to do), resulting in an incomplete list, no longer satisfying (1). An earlier effort to resolve this [1] solved the issue by readding the filtered refs only if the transport did not refrain from reporting in "fetched_refs", but after further discussion, it seems that the better solution is to revert the API change that introduced "fetched_refs". This API change was first suggested as part of a ref-in-want implementation that allowed for ref patterns and, thus, there could be drastic differences between the input refs and the refs actually fetched [2]; we eventually decided to only allow exact ref names, but this API change remained even though its necessity was decreased. Therefore, revert this API change by reverting commit 989b8c4452, and make receive_wanted_refs() update the OIDs in the sought array (like how update_shallow() updates shallow information in the sought array) instead. A test is also included to show that the user-visible bug discussed at the beginning of this commit message no longer exists. [1] https://public-inbox.org/git/20180801171806.GA122458@google.com/ [2] https://public-inbox.org/git/86a128c5fb710a41791e7183207c4d64889f9307.1485381677.git.jonathantanmy@google.com/ Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-08-01 22:13:20 +02:00
test_cmp expect actual
'
test_expect_success 'fetch by SHA-1 without tag following' '
SERVER="$HTTPD_DOCUMENT_ROOT_PATH/server" &&
rm -rf "$SERVER" client &&
git init "$SERVER" &&
test_commit -C "$SERVER" foo &&
git clone $HTTPD_URL/smart/server client &&
test_commit -C "$SERVER" bar &&
git -C "$SERVER" rev-parse bar >bar_hash &&
git -C client -c protocol.version=0 fetch \
--no-tags origin $(cat bar_hash)
'
test_expect_success 'cookies are redacted by default' '
rm -rf clone &&
echo "Set-Cookie: Foo=1" >cookies &&
echo "Set-Cookie: Bar=2" >>cookies &&
GIT_TRACE_CURL=true \
git -c "http.cookieFile=$(pwd)/cookies" clone \
$HTTPD_URL/smart/repo.git clone 2>err &&
http: match headers case-insensitively when redacting When HTTP/2 is in use, we fail to correctly redact "Authorization" (and other) headers in our GIT_TRACE_CURL output. We get the headers in our CURLOPT_DEBUGFUNCTION callback, curl_trace(). It passes them along to curl_dump_header(), which in turn checks redact_sensitive_header(). We see the headers as a text buffer like: Host: ... Authorization: Basic ... After breaking it into lines, we match each header using skip_prefix(). This is case-sensitive, even though HTTP headers are case-insensitive. This has worked reliably in the past because these headers are generated by curl itself, which is predictable in what it sends. But when HTTP/2 is in use, instead we get a lower-case "authorization:" header, and we fail to match it. The fix is simple: we should match with skip_iprefix(). Testing is more complicated, though. We do have a test for the redacting feature, but we don't hit the problem case because our test Apache setup does not understand HTTP/2. You can reproduce the issue by applying this on top of the test change in this patch: diff --git a/t/lib-httpd/apache.conf b/t/lib-httpd/apache.conf index afa91e38b0..19267c7107 100644 --- a/t/lib-httpd/apache.conf +++ b/t/lib-httpd/apache.conf @@ -29,6 +29,9 @@ ErrorLog error.log LoadModule setenvif_module modules/mod_setenvif.so </IfModule> +LoadModule http2_module modules/mod_http2.so +Protocols h2c + <IfVersion < 2.4> LockFile accept.lock </IfVersion> @@ -64,8 +67,8 @@ LockFile accept.lock <IfModule !mod_access_compat.c> LoadModule access_compat_module modules/mod_access_compat.so </IfModule> -<IfModule !mod_mpm_prefork.c> - LoadModule mpm_prefork_module modules/mod_mpm_prefork.so +<IfModule !mod_mpm_event.c> + LoadModule mpm_event_module modules/mod_mpm_event.so </IfModule> <IfModule !mod_unixd.c> LoadModule unixd_module modules/mod_unixd.so diff --git a/t/t5551-http-fetch-smart.sh b/t/t5551-http-fetch-smart.sh index 1c2a444ae7..ff74f0ae8a 100755 --- a/t/t5551-http-fetch-smart.sh +++ b/t/t5551-http-fetch-smart.sh @@ -24,6 +24,10 @@ test_expect_success 'create http-accessible bare repository' ' git push public main:main ' +test_expect_success 'prefer http/2' ' + git config --global http.version HTTP/2 +' + setup_askpass_helper test_expect_success 'clone http repository' ' but this has a few issues: - it's not necessarily portable. The http2 apache module might not be available on all systems. Further, the http2 module isn't compatible with the prefork mpm, so we have to switch to something else. But we don't necessarily know what's available. It would be nice if we could have conditional config, but IfModule only tells us if a module is already loaded, not whether it is available at all. This might be a non-issue. The http tests are already optional, and modern-enough systems may just have both of these. But... - if we do this, then we'd no longer be testing HTTP/1.1 at all. I'm not sure how much that matters since it's all handled by curl under the hood, but I'd worry that some detail leaks through. We'd probably want two scripts running similar tests, one with HTTP/2 and one with HTTP/1.1. - speaking of which, a later test fails with the patch above! The problem is that it is making sure we used a chunked transfer-encoding by looking for that header in the trace. But HTTP/2 doesn't support that, as it has its own streaming mechanisms (the overall operation works fine; we just don't see the header in the trace). Furthermore, even with the changes above, this test still does not detect the current failure, because we see _both_ HTTP/1.1 and HTTP/2 requests, which confuse it. Quoting only the interesting bits from the resulting trace file, we first see: => Send header: GET /auth/smart/repo.git/info/refs?service=git-upload-pack HTTP/1.1 => Send header: Connection: Upgrade, HTTP2-Settings => Send header: Upgrade: h2c => Send header: HTTP2-Settings: AAMAAABkAAQCAAAAAAIAAAAA <= Recv header: HTTP/1.1 401 Unauthorized <= Recv header: Date: Wed, 22 Sep 2021 20:03:32 GMT <= Recv header: Server: Apache/2.4.49 (Debian) <= Recv header: WWW-Authenticate: Basic realm="git-auth" So the client asks for HTTP/2, but Apache does not do the upgrade for the 401 response. Then the client repeats with credentials: => Send header: GET /auth/smart/repo.git/info/refs?service=git-upload-pack HTTP/1.1 => Send header: Authorization: Basic <redacted> => Send header: Connection: Upgrade, HTTP2-Settings => Send header: Upgrade: h2c => Send header: HTTP2-Settings: AAMAAABkAAQCAAAAAAIAAAAA <= Recv header: HTTP/1.1 101 Switching Protocols <= Recv header: Upgrade: h2c <= Recv header: Connection: Upgrade <= Recv header: HTTP/2 200 <= Recv header: content-type: application/x-git-upload-pack-advertisement So the client does properly redact there, because we're speaking HTTP/1.1, and the server indicates it can do the upgrade. And then the client will make further requests using HTTP/2: => Send header: POST /auth/smart/repo.git/git-upload-pack HTTP/2 => Send header: authorization: Basic dXNlckBob3N0OnBhc3NAaG9zdA== => Send header: content-type: application/x-git-upload-pack-request And there we can see that the credential is _not_ redacted. This part of the test is what gets confused: # Ensure that there is no "Basic" followed by a base64 string, but that # the auth details are redacted ! grep "Authorization: Basic [0-9a-zA-Z+/]" trace && grep "Authorization: Basic <redacted>" trace The first grep does not match the un-redacted HTTP/2 header, because it insists on an uppercase "A". And the second one does find the HTTP/1.1 header. So as far as the test is concerned, everything is OK, but it failed to notice the un-redacted lines. We can make this test (and the other related ones) more robust by adding "-i" to grep case-insensitively. This isn't really doing anything for now, since we're not actually speaking HTTP/2, but it future-proofs the tests for a day when we do (either we add explicit HTTP/2 test support, or it's eventually enabled by default by our Apache+curl test setup). And it doesn't hurt in the meantime for the tests to be more careful. The change to use "grep -i", coupled with the changes to use HTTP/2 shown above, causes the test to fail with the current code, and pass after this patch is applied. And finally, there's one other way to demonstrate the issue (and how I actually found it originally). Looking at GIT_TRACE_CURL output against github.com, you'll see the unredacted output, even if you didn't set http.version. That's because setting it is only necessary for curl to send the extra headers in its HTTP/1.1 request that say "Hey, I speak HTTP/2; upgrade if you do, too". But for a production site speaking https, the server advertises via ALPN, a TLS extension, that it supports HTTP/2, and the client can immediately start using it. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-09-23 00:11:53 +02:00
grep -i "Cookie:.*Foo=<redacted>" err &&
grep -i "Cookie:.*Bar=<redacted>" err &&
! grep -i "Cookie:.*Foo=1" err &&
! grep -i "Cookie:.*Bar=2" err
'
test_expect_success 'empty values of cookies are also redacted' '
rm -rf clone &&
echo "Set-Cookie: Foo=" >cookies &&
GIT_TRACE_CURL=true \
git -c "http.cookieFile=$(pwd)/cookies" clone \
$HTTPD_URL/smart/repo.git clone 2>err &&
http: match headers case-insensitively when redacting When HTTP/2 is in use, we fail to correctly redact "Authorization" (and other) headers in our GIT_TRACE_CURL output. We get the headers in our CURLOPT_DEBUGFUNCTION callback, curl_trace(). It passes them along to curl_dump_header(), which in turn checks redact_sensitive_header(). We see the headers as a text buffer like: Host: ... Authorization: Basic ... After breaking it into lines, we match each header using skip_prefix(). This is case-sensitive, even though HTTP headers are case-insensitive. This has worked reliably in the past because these headers are generated by curl itself, which is predictable in what it sends. But when HTTP/2 is in use, instead we get a lower-case "authorization:" header, and we fail to match it. The fix is simple: we should match with skip_iprefix(). Testing is more complicated, though. We do have a test for the redacting feature, but we don't hit the problem case because our test Apache setup does not understand HTTP/2. You can reproduce the issue by applying this on top of the test change in this patch: diff --git a/t/lib-httpd/apache.conf b/t/lib-httpd/apache.conf index afa91e38b0..19267c7107 100644 --- a/t/lib-httpd/apache.conf +++ b/t/lib-httpd/apache.conf @@ -29,6 +29,9 @@ ErrorLog error.log LoadModule setenvif_module modules/mod_setenvif.so </IfModule> +LoadModule http2_module modules/mod_http2.so +Protocols h2c + <IfVersion < 2.4> LockFile accept.lock </IfVersion> @@ -64,8 +67,8 @@ LockFile accept.lock <IfModule !mod_access_compat.c> LoadModule access_compat_module modules/mod_access_compat.so </IfModule> -<IfModule !mod_mpm_prefork.c> - LoadModule mpm_prefork_module modules/mod_mpm_prefork.so +<IfModule !mod_mpm_event.c> + LoadModule mpm_event_module modules/mod_mpm_event.so </IfModule> <IfModule !mod_unixd.c> LoadModule unixd_module modules/mod_unixd.so diff --git a/t/t5551-http-fetch-smart.sh b/t/t5551-http-fetch-smart.sh index 1c2a444ae7..ff74f0ae8a 100755 --- a/t/t5551-http-fetch-smart.sh +++ b/t/t5551-http-fetch-smart.sh @@ -24,6 +24,10 @@ test_expect_success 'create http-accessible bare repository' ' git push public main:main ' +test_expect_success 'prefer http/2' ' + git config --global http.version HTTP/2 +' + setup_askpass_helper test_expect_success 'clone http repository' ' but this has a few issues: - it's not necessarily portable. The http2 apache module might not be available on all systems. Further, the http2 module isn't compatible with the prefork mpm, so we have to switch to something else. But we don't necessarily know what's available. It would be nice if we could have conditional config, but IfModule only tells us if a module is already loaded, not whether it is available at all. This might be a non-issue. The http tests are already optional, and modern-enough systems may just have both of these. But... - if we do this, then we'd no longer be testing HTTP/1.1 at all. I'm not sure how much that matters since it's all handled by curl under the hood, but I'd worry that some detail leaks through. We'd probably want two scripts running similar tests, one with HTTP/2 and one with HTTP/1.1. - speaking of which, a later test fails with the patch above! The problem is that it is making sure we used a chunked transfer-encoding by looking for that header in the trace. But HTTP/2 doesn't support that, as it has its own streaming mechanisms (the overall operation works fine; we just don't see the header in the trace). Furthermore, even with the changes above, this test still does not detect the current failure, because we see _both_ HTTP/1.1 and HTTP/2 requests, which confuse it. Quoting only the interesting bits from the resulting trace file, we first see: => Send header: GET /auth/smart/repo.git/info/refs?service=git-upload-pack HTTP/1.1 => Send header: Connection: Upgrade, HTTP2-Settings => Send header: Upgrade: h2c => Send header: HTTP2-Settings: AAMAAABkAAQCAAAAAAIAAAAA <= Recv header: HTTP/1.1 401 Unauthorized <= Recv header: Date: Wed, 22 Sep 2021 20:03:32 GMT <= Recv header: Server: Apache/2.4.49 (Debian) <= Recv header: WWW-Authenticate: Basic realm="git-auth" So the client asks for HTTP/2, but Apache does not do the upgrade for the 401 response. Then the client repeats with credentials: => Send header: GET /auth/smart/repo.git/info/refs?service=git-upload-pack HTTP/1.1 => Send header: Authorization: Basic <redacted> => Send header: Connection: Upgrade, HTTP2-Settings => Send header: Upgrade: h2c => Send header: HTTP2-Settings: AAMAAABkAAQCAAAAAAIAAAAA <= Recv header: HTTP/1.1 101 Switching Protocols <= Recv header: Upgrade: h2c <= Recv header: Connection: Upgrade <= Recv header: HTTP/2 200 <= Recv header: content-type: application/x-git-upload-pack-advertisement So the client does properly redact there, because we're speaking HTTP/1.1, and the server indicates it can do the upgrade. And then the client will make further requests using HTTP/2: => Send header: POST /auth/smart/repo.git/git-upload-pack HTTP/2 => Send header: authorization: Basic dXNlckBob3N0OnBhc3NAaG9zdA== => Send header: content-type: application/x-git-upload-pack-request And there we can see that the credential is _not_ redacted. This part of the test is what gets confused: # Ensure that there is no "Basic" followed by a base64 string, but that # the auth details are redacted ! grep "Authorization: Basic [0-9a-zA-Z+/]" trace && grep "Authorization: Basic <redacted>" trace The first grep does not match the un-redacted HTTP/2 header, because it insists on an uppercase "A". And the second one does find the HTTP/1.1 header. So as far as the test is concerned, everything is OK, but it failed to notice the un-redacted lines. We can make this test (and the other related ones) more robust by adding "-i" to grep case-insensitively. This isn't really doing anything for now, since we're not actually speaking HTTP/2, but it future-proofs the tests for a day when we do (either we add explicit HTTP/2 test support, or it's eventually enabled by default by our Apache+curl test setup). And it doesn't hurt in the meantime for the tests to be more careful. The change to use "grep -i", coupled with the changes to use HTTP/2 shown above, causes the test to fail with the current code, and pass after this patch is applied. And finally, there's one other way to demonstrate the issue (and how I actually found it originally). Looking at GIT_TRACE_CURL output against github.com, you'll see the unredacted output, even if you didn't set http.version. That's because setting it is only necessary for curl to send the extra headers in its HTTP/1.1 request that say "Hey, I speak HTTP/2; upgrade if you do, too". But for a production site speaking https, the server advertises via ALPN, a TLS extension, that it supports HTTP/2, and the client can immediately start using it. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-09-23 00:11:53 +02:00
grep -i "Cookie:.*Foo=<redacted>" err
'
test_expect_success 'GIT_TRACE_REDACT=0 disables cookie redaction' '
rm -rf clone &&
echo "Set-Cookie: Foo=1" >cookies &&
echo "Set-Cookie: Bar=2" >>cookies &&
GIT_TRACE_REDACT=0 GIT_TRACE_CURL=true \
git -c "http.cookieFile=$(pwd)/cookies" clone \
$HTTPD_URL/smart/repo.git clone 2>err &&
http: match headers case-insensitively when redacting When HTTP/2 is in use, we fail to correctly redact "Authorization" (and other) headers in our GIT_TRACE_CURL output. We get the headers in our CURLOPT_DEBUGFUNCTION callback, curl_trace(). It passes them along to curl_dump_header(), which in turn checks redact_sensitive_header(). We see the headers as a text buffer like: Host: ... Authorization: Basic ... After breaking it into lines, we match each header using skip_prefix(). This is case-sensitive, even though HTTP headers are case-insensitive. This has worked reliably in the past because these headers are generated by curl itself, which is predictable in what it sends. But when HTTP/2 is in use, instead we get a lower-case "authorization:" header, and we fail to match it. The fix is simple: we should match with skip_iprefix(). Testing is more complicated, though. We do have a test for the redacting feature, but we don't hit the problem case because our test Apache setup does not understand HTTP/2. You can reproduce the issue by applying this on top of the test change in this patch: diff --git a/t/lib-httpd/apache.conf b/t/lib-httpd/apache.conf index afa91e38b0..19267c7107 100644 --- a/t/lib-httpd/apache.conf +++ b/t/lib-httpd/apache.conf @@ -29,6 +29,9 @@ ErrorLog error.log LoadModule setenvif_module modules/mod_setenvif.so </IfModule> +LoadModule http2_module modules/mod_http2.so +Protocols h2c + <IfVersion < 2.4> LockFile accept.lock </IfVersion> @@ -64,8 +67,8 @@ LockFile accept.lock <IfModule !mod_access_compat.c> LoadModule access_compat_module modules/mod_access_compat.so </IfModule> -<IfModule !mod_mpm_prefork.c> - LoadModule mpm_prefork_module modules/mod_mpm_prefork.so +<IfModule !mod_mpm_event.c> + LoadModule mpm_event_module modules/mod_mpm_event.so </IfModule> <IfModule !mod_unixd.c> LoadModule unixd_module modules/mod_unixd.so diff --git a/t/t5551-http-fetch-smart.sh b/t/t5551-http-fetch-smart.sh index 1c2a444ae7..ff74f0ae8a 100755 --- a/t/t5551-http-fetch-smart.sh +++ b/t/t5551-http-fetch-smart.sh @@ -24,6 +24,10 @@ test_expect_success 'create http-accessible bare repository' ' git push public main:main ' +test_expect_success 'prefer http/2' ' + git config --global http.version HTTP/2 +' + setup_askpass_helper test_expect_success 'clone http repository' ' but this has a few issues: - it's not necessarily portable. The http2 apache module might not be available on all systems. Further, the http2 module isn't compatible with the prefork mpm, so we have to switch to something else. But we don't necessarily know what's available. It would be nice if we could have conditional config, but IfModule only tells us if a module is already loaded, not whether it is available at all. This might be a non-issue. The http tests are already optional, and modern-enough systems may just have both of these. But... - if we do this, then we'd no longer be testing HTTP/1.1 at all. I'm not sure how much that matters since it's all handled by curl under the hood, but I'd worry that some detail leaks through. We'd probably want two scripts running similar tests, one with HTTP/2 and one with HTTP/1.1. - speaking of which, a later test fails with the patch above! The problem is that it is making sure we used a chunked transfer-encoding by looking for that header in the trace. But HTTP/2 doesn't support that, as it has its own streaming mechanisms (the overall operation works fine; we just don't see the header in the trace). Furthermore, even with the changes above, this test still does not detect the current failure, because we see _both_ HTTP/1.1 and HTTP/2 requests, which confuse it. Quoting only the interesting bits from the resulting trace file, we first see: => Send header: GET /auth/smart/repo.git/info/refs?service=git-upload-pack HTTP/1.1 => Send header: Connection: Upgrade, HTTP2-Settings => Send header: Upgrade: h2c => Send header: HTTP2-Settings: AAMAAABkAAQCAAAAAAIAAAAA <= Recv header: HTTP/1.1 401 Unauthorized <= Recv header: Date: Wed, 22 Sep 2021 20:03:32 GMT <= Recv header: Server: Apache/2.4.49 (Debian) <= Recv header: WWW-Authenticate: Basic realm="git-auth" So the client asks for HTTP/2, but Apache does not do the upgrade for the 401 response. Then the client repeats with credentials: => Send header: GET /auth/smart/repo.git/info/refs?service=git-upload-pack HTTP/1.1 => Send header: Authorization: Basic <redacted> => Send header: Connection: Upgrade, HTTP2-Settings => Send header: Upgrade: h2c => Send header: HTTP2-Settings: AAMAAABkAAQCAAAAAAIAAAAA <= Recv header: HTTP/1.1 101 Switching Protocols <= Recv header: Upgrade: h2c <= Recv header: Connection: Upgrade <= Recv header: HTTP/2 200 <= Recv header: content-type: application/x-git-upload-pack-advertisement So the client does properly redact there, because we're speaking HTTP/1.1, and the server indicates it can do the upgrade. And then the client will make further requests using HTTP/2: => Send header: POST /auth/smart/repo.git/git-upload-pack HTTP/2 => Send header: authorization: Basic dXNlckBob3N0OnBhc3NAaG9zdA== => Send header: content-type: application/x-git-upload-pack-request And there we can see that the credential is _not_ redacted. This part of the test is what gets confused: # Ensure that there is no "Basic" followed by a base64 string, but that # the auth details are redacted ! grep "Authorization: Basic [0-9a-zA-Z+/]" trace && grep "Authorization: Basic <redacted>" trace The first grep does not match the un-redacted HTTP/2 header, because it insists on an uppercase "A". And the second one does find the HTTP/1.1 header. So as far as the test is concerned, everything is OK, but it failed to notice the un-redacted lines. We can make this test (and the other related ones) more robust by adding "-i" to grep case-insensitively. This isn't really doing anything for now, since we're not actually speaking HTTP/2, but it future-proofs the tests for a day when we do (either we add explicit HTTP/2 test support, or it's eventually enabled by default by our Apache+curl test setup). And it doesn't hurt in the meantime for the tests to be more careful. The change to use "grep -i", coupled with the changes to use HTTP/2 shown above, causes the test to fail with the current code, and pass after this patch is applied. And finally, there's one other way to demonstrate the issue (and how I actually found it originally). Looking at GIT_TRACE_CURL output against github.com, you'll see the unredacted output, even if you didn't set http.version. That's because setting it is only necessary for curl to send the extra headers in its HTTP/1.1 request that say "Hey, I speak HTTP/2; upgrade if you do, too". But for a production site speaking https, the server advertises via ALPN, a TLS extension, that it supports HTTP/2, and the client can immediately start using it. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-09-23 00:11:53 +02:00
grep -i "Cookie:.*Foo=1" err &&
grep -i "Cookie:.*Bar=2" err
'
test_expect_success 'GIT_TRACE_CURL_NO_DATA prevents data from being traced' '
rm -rf clone &&
GIT_TRACE_CURL=true \
git clone $HTTPD_URL/smart/repo.git clone 2>err &&
grep "=> Send data" err &&
rm -rf clone &&
GIT_TRACE_CURL=true GIT_TRACE_CURL_NO_DATA=1 \
git clone $HTTPD_URL/smart/repo.git clone 2>err &&
! grep "=> Send data" err
'
test_expect_success 'server-side error detected' '
test_must_fail git clone $HTTPD_URL/error_smart/repo.git 2>actual &&
test_i18ngrep "server-side error" actual
'
test_expect_success 'http auth remembers successful credentials' '
rm -f .git-credentials &&
test_config credential.helper store &&
# the first request prompts the user...
set_askpass user@host pass@host &&
git ls-remote "$HTTPD_URL/auth/smart/repo.git" >/dev/null &&
expect_askpass both user@host &&
# ...and the second one uses the stored value rather than
# prompting the user.
set_askpass bogus-user bogus-pass &&
git ls-remote "$HTTPD_URL/auth/smart/repo.git" >/dev/null &&
expect_askpass none
'
test_expect_success 'http auth forgets bogus credentials' '
# seed credential store with bogus values. In real life,
# this would probably come from a password which worked
# for a previous request.
rm -f .git-credentials &&
test_config credential.helper store &&
{
echo "url=$HTTPD_URL" &&
echo "username=bogus" &&
echo "password=bogus"
} | git credential approve &&
# we expect this to use the bogus values and fail, never even
# prompting the user...
set_askpass user@host pass@host &&
test_must_fail git ls-remote "$HTTPD_URL/auth/smart/repo.git" >/dev/null &&
expect_askpass none &&
# ...but now we should have forgotten the bad value, causing
# us to prompt the user again.
set_askpass user@host pass@host &&
git ls-remote "$HTTPD_URL/auth/smart/repo.git" >/dev/null &&
expect_askpass both user@host
'
test_expect_success 'client falls back from v2 to v0 to match server' '
GIT_TRACE_PACKET=$PWD/trace \
GIT_TEST_PROTOCOL_VERSION=2 \
git clone $HTTPD_URL/smart_v0/repo.git repo-v0 &&
# check for v0; there the HEAD symref is communicated in the capability
# line; v2 uses a different syntax on each ref advertisement line
grep symref=HEAD:refs/heads/ trace
'
test_expect_success 'passing hostname resolution information works' '
BOGUS_HOST=gitbogusexamplehost.invalid &&
BOGUS_HTTPD_URL=$HTTPD_PROTO://$BOGUS_HOST:$LIB_HTTPD_PORT &&
test_must_fail git ls-remote "$BOGUS_HTTPD_URL/smart/repo.git" >/dev/null &&
git -c "http.curloptResolve=$BOGUS_HOST:$LIB_HTTPD_PORT:127.0.0.1" ls-remote "$BOGUS_HTTPD_URL/smart/repo.git" >/dev/null
'
t5516: move plaintext-password tests from t5601 and t5516 Commit 6dcbdc0d66 (remote: create fetch.credentialsInUrl config, 2022-06-06) added tests for our handling of passwords in URLs. Since the obvious URL to be affected is git-over-http, the tests use http. However they don't set up a test server; they just try to access https://localhost, assuming it will fail (because the nothing is listening there). This causes some possible problems: - There might be a web server running on localhost, and we do not actually want to connect to that. - The DNS resolver, or the local firewall, might take a substantial amount of time (or forever, whichever comes first) to fail to connect, slowing down the tests cases unnecessarily. - Since there's no server, our tests for "allow" and "warn" still expect the clone/fetch/push operations to fail, even though in the real world we'd expect these to succeed. We scrape stderr to see what happened, but it's not as robust as a more realistic test. Let's instead move these to t5551, which is all about testing http and where we have a real server. That eliminates any issues with contacting a strange URL, and lets the "allow" and "warn" tests confirm that the operation actually succeeds. It's not quite a verbatim move for a few reasons: - we can drop the LIBCURL dependency; it's already part of lib-httpd.sh - we'll use HTTPD_URL_USER_PASS, etc, instead of our fake URL. To avoid repetition, we'll add a few extra variables. - the "https://username:@localhost" test uses a funny URL that lib-httpd.sh doesn't provide. We'll similarly construct it in a variable. Note that we're hard-coding the lib-httpd username here, but t5551 already does that everywhere. - for the "domain:port" test, the URL provided by lib-httpd is fine, since our test server will always be on an exotic port. But we'll confirm in the test that this is so. - since our message-matching is done via grep, I simplified it to use a regex, rather than trying to massage lib-httpd's variables. Arguably this makes it more readable, too, while retaining the bits we care about: the fatal/warning distinction, the "uses plaintext" message, and the fact that the password was redacted. - we'll use the /auth/ path for the repo, which shows that we are indeed making use of the auth information when needed. - we'll also use /smart/; most of these tests could be done via /dumb/ in t5550, but setting up pushes there requires extra effort and dependencies. The smart protocol is what most everyone is using these days anyway. This patch is my own, but I stole the analysis and a few bits of the commit message from a patch by Johannes Schindelin. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-01 03:26:42 +01:00
# here user%40host is the URL-encoded version of user@host,
# which is our intentionally-odd username to catch parsing errors
url_user=$HTTPD_URL_USER/auth/smart/repo.git
url_userpass=$HTTPD_URL_USER_PASS/auth/smart/repo.git
url_userblank=$HTTPD_PROTO://user%40host:@$HTTPD_DEST/auth/smart/repo.git
message="URL .*:<redacted>@.* uses plaintext credentials"
test_expect_success 'clone warns or fails when using username:password' '
test_when_finished "rm -rf attempt*" &&
git -c transfer.credentialsInUrl=allow \
clone $url_userpass attempt1 2>err &&
! grep "$message" err &&
git -c transfer.credentialsInUrl=warn \
clone $url_userpass attempt2 2>err &&
grep "warning: $message" err >warnings &&
test_line_count -ge 1 warnings &&
t5516: move plaintext-password tests from t5601 and t5516 Commit 6dcbdc0d66 (remote: create fetch.credentialsInUrl config, 2022-06-06) added tests for our handling of passwords in URLs. Since the obvious URL to be affected is git-over-http, the tests use http. However they don't set up a test server; they just try to access https://localhost, assuming it will fail (because the nothing is listening there). This causes some possible problems: - There might be a web server running on localhost, and we do not actually want to connect to that. - The DNS resolver, or the local firewall, might take a substantial amount of time (or forever, whichever comes first) to fail to connect, slowing down the tests cases unnecessarily. - Since there's no server, our tests for "allow" and "warn" still expect the clone/fetch/push operations to fail, even though in the real world we'd expect these to succeed. We scrape stderr to see what happened, but it's not as robust as a more realistic test. Let's instead move these to t5551, which is all about testing http and where we have a real server. That eliminates any issues with contacting a strange URL, and lets the "allow" and "warn" tests confirm that the operation actually succeeds. It's not quite a verbatim move for a few reasons: - we can drop the LIBCURL dependency; it's already part of lib-httpd.sh - we'll use HTTPD_URL_USER_PASS, etc, instead of our fake URL. To avoid repetition, we'll add a few extra variables. - the "https://username:@localhost" test uses a funny URL that lib-httpd.sh doesn't provide. We'll similarly construct it in a variable. Note that we're hard-coding the lib-httpd username here, but t5551 already does that everywhere. - for the "domain:port" test, the URL provided by lib-httpd is fine, since our test server will always be on an exotic port. But we'll confirm in the test that this is so. - since our message-matching is done via grep, I simplified it to use a regex, rather than trying to massage lib-httpd's variables. Arguably this makes it more readable, too, while retaining the bits we care about: the fatal/warning distinction, the "uses plaintext" message, and the fact that the password was redacted. - we'll use the /auth/ path for the repo, which shows that we are indeed making use of the auth information when needed. - we'll also use /smart/; most of these tests could be done via /dumb/ in t5550, but setting up pushes there requires extra effort and dependencies. The smart protocol is what most everyone is using these days anyway. This patch is my own, but I stole the analysis and a few bits of the commit message from a patch by Johannes Schindelin. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-01 03:26:42 +01:00
test_must_fail git -c transfer.credentialsInUrl=die \
clone $url_userpass attempt3 2>err &&
grep "fatal: $message" err >warnings &&
test_line_count -ge 1 warnings &&
t5516: move plaintext-password tests from t5601 and t5516 Commit 6dcbdc0d66 (remote: create fetch.credentialsInUrl config, 2022-06-06) added tests for our handling of passwords in URLs. Since the obvious URL to be affected is git-over-http, the tests use http. However they don't set up a test server; they just try to access https://localhost, assuming it will fail (because the nothing is listening there). This causes some possible problems: - There might be a web server running on localhost, and we do not actually want to connect to that. - The DNS resolver, or the local firewall, might take a substantial amount of time (or forever, whichever comes first) to fail to connect, slowing down the tests cases unnecessarily. - Since there's no server, our tests for "allow" and "warn" still expect the clone/fetch/push operations to fail, even though in the real world we'd expect these to succeed. We scrape stderr to see what happened, but it's not as robust as a more realistic test. Let's instead move these to t5551, which is all about testing http and where we have a real server. That eliminates any issues with contacting a strange URL, and lets the "allow" and "warn" tests confirm that the operation actually succeeds. It's not quite a verbatim move for a few reasons: - we can drop the LIBCURL dependency; it's already part of lib-httpd.sh - we'll use HTTPD_URL_USER_PASS, etc, instead of our fake URL. To avoid repetition, we'll add a few extra variables. - the "https://username:@localhost" test uses a funny URL that lib-httpd.sh doesn't provide. We'll similarly construct it in a variable. Note that we're hard-coding the lib-httpd username here, but t5551 already does that everywhere. - for the "domain:port" test, the URL provided by lib-httpd is fine, since our test server will always be on an exotic port. But we'll confirm in the test that this is so. - since our message-matching is done via grep, I simplified it to use a regex, rather than trying to massage lib-httpd's variables. Arguably this makes it more readable, too, while retaining the bits we care about: the fatal/warning distinction, the "uses plaintext" message, and the fact that the password was redacted. - we'll use the /auth/ path for the repo, which shows that we are indeed making use of the auth information when needed. - we'll also use /smart/; most of these tests could be done via /dumb/ in t5550, but setting up pushes there requires extra effort and dependencies. The smart protocol is what most everyone is using these days anyway. This patch is my own, but I stole the analysis and a few bits of the commit message from a patch by Johannes Schindelin. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-01 03:26:42 +01:00
test_must_fail git -c transfer.credentialsInUrl=die \
clone $url_userblank attempt4 2>err &&
grep "fatal: $message" err >warnings &&
test_line_count -ge 1 warnings
t5516: move plaintext-password tests from t5601 and t5516 Commit 6dcbdc0d66 (remote: create fetch.credentialsInUrl config, 2022-06-06) added tests for our handling of passwords in URLs. Since the obvious URL to be affected is git-over-http, the tests use http. However they don't set up a test server; they just try to access https://localhost, assuming it will fail (because the nothing is listening there). This causes some possible problems: - There might be a web server running on localhost, and we do not actually want to connect to that. - The DNS resolver, or the local firewall, might take a substantial amount of time (or forever, whichever comes first) to fail to connect, slowing down the tests cases unnecessarily. - Since there's no server, our tests for "allow" and "warn" still expect the clone/fetch/push operations to fail, even though in the real world we'd expect these to succeed. We scrape stderr to see what happened, but it's not as robust as a more realistic test. Let's instead move these to t5551, which is all about testing http and where we have a real server. That eliminates any issues with contacting a strange URL, and lets the "allow" and "warn" tests confirm that the operation actually succeeds. It's not quite a verbatim move for a few reasons: - we can drop the LIBCURL dependency; it's already part of lib-httpd.sh - we'll use HTTPD_URL_USER_PASS, etc, instead of our fake URL. To avoid repetition, we'll add a few extra variables. - the "https://username:@localhost" test uses a funny URL that lib-httpd.sh doesn't provide. We'll similarly construct it in a variable. Note that we're hard-coding the lib-httpd username here, but t5551 already does that everywhere. - for the "domain:port" test, the URL provided by lib-httpd is fine, since our test server will always be on an exotic port. But we'll confirm in the test that this is so. - since our message-matching is done via grep, I simplified it to use a regex, rather than trying to massage lib-httpd's variables. Arguably this makes it more readable, too, while retaining the bits we care about: the fatal/warning distinction, the "uses plaintext" message, and the fact that the password was redacted. - we'll use the /auth/ path for the repo, which shows that we are indeed making use of the auth information when needed. - we'll also use /smart/; most of these tests could be done via /dumb/ in t5550, but setting up pushes there requires extra effort and dependencies. The smart protocol is what most everyone is using these days anyway. This patch is my own, but I stole the analysis and a few bits of the commit message from a patch by Johannes Schindelin. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-01 03:26:42 +01:00
'
test_expect_success 'clone does not detect username:password when it is https://username@domain:port/' '
test_when_finished "rm -rf attempt1" &&
# we are relying on lib-httpd for url construction, so document our
# assumptions
case "$HTTPD_URL_USER" in
*:[0-9]*) : ok ;;
*) BUG "httpd url does not have port: $HTTPD_URL_USER"
esac &&
git -c transfer.credentialsInUrl=warn clone $url_user attempt1 2>err &&
! grep "uses plaintext credentials" err
'
test_expect_success 'fetch warns or fails when using username:password' '
git -c transfer.credentialsInUrl=allow fetch $url_userpass 2>err &&
! grep "$message" err &&
git -c transfer.credentialsInUrl=warn fetch $url_userpass 2>err &&
grep "warning: $message" err >warnings &&
test_line_count -ge 1 warnings &&
t5516: move plaintext-password tests from t5601 and t5516 Commit 6dcbdc0d66 (remote: create fetch.credentialsInUrl config, 2022-06-06) added tests for our handling of passwords in URLs. Since the obvious URL to be affected is git-over-http, the tests use http. However they don't set up a test server; they just try to access https://localhost, assuming it will fail (because the nothing is listening there). This causes some possible problems: - There might be a web server running on localhost, and we do not actually want to connect to that. - The DNS resolver, or the local firewall, might take a substantial amount of time (or forever, whichever comes first) to fail to connect, slowing down the tests cases unnecessarily. - Since there's no server, our tests for "allow" and "warn" still expect the clone/fetch/push operations to fail, even though in the real world we'd expect these to succeed. We scrape stderr to see what happened, but it's not as robust as a more realistic test. Let's instead move these to t5551, which is all about testing http and where we have a real server. That eliminates any issues with contacting a strange URL, and lets the "allow" and "warn" tests confirm that the operation actually succeeds. It's not quite a verbatim move for a few reasons: - we can drop the LIBCURL dependency; it's already part of lib-httpd.sh - we'll use HTTPD_URL_USER_PASS, etc, instead of our fake URL. To avoid repetition, we'll add a few extra variables. - the "https://username:@localhost" test uses a funny URL that lib-httpd.sh doesn't provide. We'll similarly construct it in a variable. Note that we're hard-coding the lib-httpd username here, but t5551 already does that everywhere. - for the "domain:port" test, the URL provided by lib-httpd is fine, since our test server will always be on an exotic port. But we'll confirm in the test that this is so. - since our message-matching is done via grep, I simplified it to use a regex, rather than trying to massage lib-httpd's variables. Arguably this makes it more readable, too, while retaining the bits we care about: the fatal/warning distinction, the "uses plaintext" message, and the fact that the password was redacted. - we'll use the /auth/ path for the repo, which shows that we are indeed making use of the auth information when needed. - we'll also use /smart/; most of these tests could be done via /dumb/ in t5550, but setting up pushes there requires extra effort and dependencies. The smart protocol is what most everyone is using these days anyway. This patch is my own, but I stole the analysis and a few bits of the commit message from a patch by Johannes Schindelin. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-01 03:26:42 +01:00
test_must_fail git -c transfer.credentialsInUrl=die \
fetch $url_userpass 2>err &&
grep "fatal: $message" err >warnings &&
test_line_count -ge 1 warnings &&
t5516: move plaintext-password tests from t5601 and t5516 Commit 6dcbdc0d66 (remote: create fetch.credentialsInUrl config, 2022-06-06) added tests for our handling of passwords in URLs. Since the obvious URL to be affected is git-over-http, the tests use http. However they don't set up a test server; they just try to access https://localhost, assuming it will fail (because the nothing is listening there). This causes some possible problems: - There might be a web server running on localhost, and we do not actually want to connect to that. - The DNS resolver, or the local firewall, might take a substantial amount of time (or forever, whichever comes first) to fail to connect, slowing down the tests cases unnecessarily. - Since there's no server, our tests for "allow" and "warn" still expect the clone/fetch/push operations to fail, even though in the real world we'd expect these to succeed. We scrape stderr to see what happened, but it's not as robust as a more realistic test. Let's instead move these to t5551, which is all about testing http and where we have a real server. That eliminates any issues with contacting a strange URL, and lets the "allow" and "warn" tests confirm that the operation actually succeeds. It's not quite a verbatim move for a few reasons: - we can drop the LIBCURL dependency; it's already part of lib-httpd.sh - we'll use HTTPD_URL_USER_PASS, etc, instead of our fake URL. To avoid repetition, we'll add a few extra variables. - the "https://username:@localhost" test uses a funny URL that lib-httpd.sh doesn't provide. We'll similarly construct it in a variable. Note that we're hard-coding the lib-httpd username here, but t5551 already does that everywhere. - for the "domain:port" test, the URL provided by lib-httpd is fine, since our test server will always be on an exotic port. But we'll confirm in the test that this is so. - since our message-matching is done via grep, I simplified it to use a regex, rather than trying to massage lib-httpd's variables. Arguably this makes it more readable, too, while retaining the bits we care about: the fatal/warning distinction, the "uses plaintext" message, and the fact that the password was redacted. - we'll use the /auth/ path for the repo, which shows that we are indeed making use of the auth information when needed. - we'll also use /smart/; most of these tests could be done via /dumb/ in t5550, but setting up pushes there requires extra effort and dependencies. The smart protocol is what most everyone is using these days anyway. This patch is my own, but I stole the analysis and a few bits of the commit message from a patch by Johannes Schindelin. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-01 03:26:42 +01:00
test_must_fail git -c transfer.credentialsInUrl=die \
fetch $url_userblank 2>err &&
grep "fatal: $message" err >warnings &&
test_line_count -ge 1 warnings
t5516: move plaintext-password tests from t5601 and t5516 Commit 6dcbdc0d66 (remote: create fetch.credentialsInUrl config, 2022-06-06) added tests for our handling of passwords in URLs. Since the obvious URL to be affected is git-over-http, the tests use http. However they don't set up a test server; they just try to access https://localhost, assuming it will fail (because the nothing is listening there). This causes some possible problems: - There might be a web server running on localhost, and we do not actually want to connect to that. - The DNS resolver, or the local firewall, might take a substantial amount of time (or forever, whichever comes first) to fail to connect, slowing down the tests cases unnecessarily. - Since there's no server, our tests for "allow" and "warn" still expect the clone/fetch/push operations to fail, even though in the real world we'd expect these to succeed. We scrape stderr to see what happened, but it's not as robust as a more realistic test. Let's instead move these to t5551, which is all about testing http and where we have a real server. That eliminates any issues with contacting a strange URL, and lets the "allow" and "warn" tests confirm that the operation actually succeeds. It's not quite a verbatim move for a few reasons: - we can drop the LIBCURL dependency; it's already part of lib-httpd.sh - we'll use HTTPD_URL_USER_PASS, etc, instead of our fake URL. To avoid repetition, we'll add a few extra variables. - the "https://username:@localhost" test uses a funny URL that lib-httpd.sh doesn't provide. We'll similarly construct it in a variable. Note that we're hard-coding the lib-httpd username here, but t5551 already does that everywhere. - for the "domain:port" test, the URL provided by lib-httpd is fine, since our test server will always be on an exotic port. But we'll confirm in the test that this is so. - since our message-matching is done via grep, I simplified it to use a regex, rather than trying to massage lib-httpd's variables. Arguably this makes it more readable, too, while retaining the bits we care about: the fatal/warning distinction, the "uses plaintext" message, and the fact that the password was redacted. - we'll use the /auth/ path for the repo, which shows that we are indeed making use of the auth information when needed. - we'll also use /smart/; most of these tests could be done via /dumb/ in t5550, but setting up pushes there requires extra effort and dependencies. The smart protocol is what most everyone is using these days anyway. This patch is my own, but I stole the analysis and a few bits of the commit message from a patch by Johannes Schindelin. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-01 03:26:42 +01:00
'
test_expect_success 'push warns or fails when using username:password' '
git -c transfer.credentialsInUrl=allow push $url_userpass 2>err &&
! grep "$message" err &&
git -c transfer.credentialsInUrl=warn push $url_userpass 2>err &&
grep "warning: $message" err >warnings &&
test_must_fail git -c transfer.credentialsInUrl=die \
push $url_userpass 2>err &&
grep "fatal: $message" err >warnings &&
test_line_count -ge 1 warnings
t5516: move plaintext-password tests from t5601 and t5516 Commit 6dcbdc0d66 (remote: create fetch.credentialsInUrl config, 2022-06-06) added tests for our handling of passwords in URLs. Since the obvious URL to be affected is git-over-http, the tests use http. However they don't set up a test server; they just try to access https://localhost, assuming it will fail (because the nothing is listening there). This causes some possible problems: - There might be a web server running on localhost, and we do not actually want to connect to that. - The DNS resolver, or the local firewall, might take a substantial amount of time (or forever, whichever comes first) to fail to connect, slowing down the tests cases unnecessarily. - Since there's no server, our tests for "allow" and "warn" still expect the clone/fetch/push operations to fail, even though in the real world we'd expect these to succeed. We scrape stderr to see what happened, but it's not as robust as a more realistic test. Let's instead move these to t5551, which is all about testing http and where we have a real server. That eliminates any issues with contacting a strange URL, and lets the "allow" and "warn" tests confirm that the operation actually succeeds. It's not quite a verbatim move for a few reasons: - we can drop the LIBCURL dependency; it's already part of lib-httpd.sh - we'll use HTTPD_URL_USER_PASS, etc, instead of our fake URL. To avoid repetition, we'll add a few extra variables. - the "https://username:@localhost" test uses a funny URL that lib-httpd.sh doesn't provide. We'll similarly construct it in a variable. Note that we're hard-coding the lib-httpd username here, but t5551 already does that everywhere. - for the "domain:port" test, the URL provided by lib-httpd is fine, since our test server will always be on an exotic port. But we'll confirm in the test that this is so. - since our message-matching is done via grep, I simplified it to use a regex, rather than trying to massage lib-httpd's variables. Arguably this makes it more readable, too, while retaining the bits we care about: the fatal/warning distinction, the "uses plaintext" message, and the fact that the password was redacted. - we'll use the /auth/ path for the repo, which shows that we are indeed making use of the auth information when needed. - we'll also use /smart/; most of these tests could be done via /dumb/ in t5550, but setting up pushes there requires extra effort and dependencies. The smart protocol is what most everyone is using these days anyway. This patch is my own, but I stole the analysis and a few bits of the commit message from a patch by Johannes Schindelin. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-01 03:26:42 +01:00
'
test_expect_success 'no empty path components' '
# In the URL, add a trailing slash, and see if git appends yet another
# slash.
git clone $HTTPD_URL/smart/repo.git/ clone-with-slash &&
strip_access_log >log &&
! grep "//" log
'
test smart http fetch and push The top level directory "/smart/" of the test Apache server is mapped through our git-http-backend CGI, but uses the same underlying repository space as the server's document root. This is the most simple installation possible. Server logs are checked to verify the client has accessed only the smart URLs during the test. During fetch testing the headers are also logged from libcurl to ensure we are making a reasonably sane HTTP request, and getting back reasonably sane response headers from the CGI. When validating the request headers used during smart fetch we munge away the actual Content-Length and replace it with the placeholder "xxx". This avoids unnecessary varability in the test caused by an unrelated change in the requested capabilities in the first want line of the request. However, we still want to look for and verify that Content-Length was used, because smaller payloads should be using Content-Length and not "Transfer-Encoding: chunked". When validating the server response headers we must discard both Content-Length and Transfer-Encoding, as Apache2 can use either format to return our response. During development of this test I observed Apache returning both forms, depending on when the processes got CPU time. If our CGI returned the pack data quickly, Apache just buffered the whole thing and returned a Content-Length. If our CGI took just a bit too long to complete, Apache flushed its buffer and instead used "Transfer-Encoding: chunked". Signed-off-by: Shawn O. Pearce <spearce@spearce.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-31 01:47:47 +01:00
test_done