git-commit-vandalism/t/t5550-http-fetch-dumb.sh

#!/bin/sh
test_description='test dumb fetching over http via static file'
. ./test-lib.sh
. "$TEST_DIRECTORY"/lib-httpd.sh
start_httpd
test_expect_success 'setup repository' '
git config push.default matching &&
echo content1 >file &&
git add file &&
git commit -m one &&
echo content2 >file &&
git add file &&
git commit -m two
'
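# Dumb HTTP serves the repository as static files, so the server side must
# keep info/refs and objects/info/packs up to date. The post-update hook
# below runs "git update-server-info" after each push to do that.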
test_expect_success 'create http-accessible bare repository with loose objects' '
cp -R .git "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
(cd "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
git config core.bare true &&
mkdir -p hooks &&
echo "exec git update-server-info" >hooks/post-update &&
chmod +x hooks/post-update &&
hooks/post-update
) &&
git remote add public "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" &&
git push public master:master
'
test_expect_success 'clone http repository' '
git clone $HTTPD_URL/dumb/repo.git clone-tmpl &&
cp -R clone-tmpl clone &&
test_cmp file clone/file
'
test_expect_success 'list refs from outside any repository' '
cat >expect <<-EOF &&
$(git rev-parse master) HEAD
$(git rev-parse master) refs/heads/master
EOF
nongit git ls-remote "$HTTPD_URL/dumb/repo.git" >actual &&
test_cmp expect actual
'
test_expect_success 'create password-protected repository' '
mkdir -p "$HTTPD_DOCUMENT_ROOT_PATH/auth/dumb/" &&
cp -Rf "$HTTPD_DOCUMENT_ROOT_PATH/repo.git" \
"$HTTPD_DOCUMENT_ROOT_PATH/auth/dumb/repo.git"
'
setup_askpass_helper
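# set_askpass configures the answers the fake askpass helper gives (first
# argument for username prompts, second for password prompts); expect_askpass
# then verifies which prompts (none, pass, or both) the previous command
# actually issued.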
test_expect_success 'cloning password-protected repository can fail' '
set_askpass wrong &&
test_must_fail git clone "$HTTPD_URL/auth/dumb/repo.git" clone-auth-fail &&
expect_askpass both wrong
'
test_expect_success 'http auth can use user/pass in URL' '
set_askpass wrong &&
git clone "$HTTPD_URL_USER_PASS/auth/dumb/repo.git" clone-auth-none &&
expect_askpass none
'
test_expect_success 'http auth can use just user in URL' '
set_askpass wrong pass@host &&
git clone "$HTTPD_URL_USER/auth/dumb/repo.git" clone-auth-pass &&
expect_askpass pass user@host
'
test_expect_success 'http auth can request both user and pass' '
set_askpass user@host pass@host &&
git clone "$HTTPD_URL/auth/dumb/repo.git" clone-auth-both &&
expect_askpass both user@host
'
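# A credential.helper value starting with "!" is run as a shell command: it
# reads the credential description on stdin and prints the attributes it can
# supply. The inline helper below provides both username and password, so no
# askpass prompt should be needed.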
test_expect_success 'http auth respects credential helper config' '
test_config_global credential.helper "!f() {
cat >/dev/null
echo username=user@host
echo password=pass@host
}; f" &&
set_askpass wrong &&
git clone "$HTTPD_URL/auth/dumb/repo.git" clone-auth-helper &&
expect_askpass none
'
test_expect_success 'http auth can get username from config' '
test_config_global "credential.$HTTPD_URL.username" user@host &&
set_askpass wrong pass@host &&
git clone "$HTTPD_URL/auth/dumb/repo.git" clone-auth-user &&
expect_askpass pass user@host
'
test_expect_success 'configured username does not override URL' '
test_config_global "credential.$HTTPD_URL.username" wrong &&
set_askpass wrong pass@host &&
git clone "$HTTPD_URL_USER/auth/dumb/repo.git" clone-auth-user2 &&
expect_askpass pass user@host
'
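# The next tests check that credential configuration given with -c on the
# superproject command line is passed along when clone, fetch, and
# "submodule update" operate on the submodule.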
test_expect_success 'set up repo with http submodules' '
git init super &&
set_askpass user@host pass@host &&
(
cd super &&
git submodule add "$HTTPD_URL/auth/dumb/repo.git" sub &&
git commit -m "add submodule"
)
'
test_expect_success 'cmdline credential config passes to submodule via clone' '
set_askpass wrong pass@host &&
test_must_fail git clone --recursive super super-clone &&
rm -rf super-clone &&
set_askpass wrong pass@host &&
git -c "credential.$HTTPD_URL.username=user@host" \
clone --recursive super super-clone &&
expect_askpass pass user@host
'
test_expect_success 'cmdline credential config passes submodule via fetch' '
set_askpass wrong pass@host &&
test_must_fail git -C super-clone fetch --recurse-submodules &&
set_askpass wrong pass@host &&
git -C super-clone \
-c "credential.$HTTPD_URL.username=user@host" \
fetch --recurse-submodules &&
expect_askpass pass user@host
'
test_expect_success 'cmdline credential config passes submodule update' '
# advance the submodule HEAD so that a fetch is required
git commit --allow-empty -m foo &&
git push "$HTTPD_DOCUMENT_ROOT_PATH/auth/dumb/repo.git" HEAD &&
sha1=$(git rev-parse HEAD) &&
git -C super-clone update-index --cacheinfo 160000,$sha1,sub &&
set_askpass wrong pass@host &&
test_must_fail git -C super-clone submodule update &&
set_askpass wrong pass@host &&
git -C super-clone \
-c "credential.$HTTPD_URL.username=user@host" \
submodule update &&
expect_askpass pass user@host
'
test_expect_success 'fetch changes via http' '
echo content >>file &&
git commit -a -m two &&
git push public &&
(cd clone && git pull) &&
test_cmp file clone/file
'
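# git http-fetch -a fetches the named commit and all objects it depends on
# over dumb HTTP; -w writes the fetched commit id under $GIT_DIR/refs/
# (here refs/heads/master-new).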
test_expect_success 'fetch changes via manual http-fetch' '
cp -R clone-tmpl clone2 &&
HEAD=$(git rev-parse --verify HEAD) &&
(cd clone2 &&
git http-fetch -a -w heads/master-new $HEAD $(git config remote.origin.url) &&
git checkout master-new &&
test $HEAD = $(git rev-parse --verify HEAD)) &&
test_cmp file clone2/file
'
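# master and other now point at the same commit; re-detecting the remote
# HEAD should still resolve it to refs/remotes/origin/master.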
test_expect_success 'http remote detects correct HEAD' '
git push public master:other &&
(cd clone &&
git remote set-head origin -d &&
git remote set-head origin -a &&
git symbolic-ref refs/remotes/origin/HEAD > output &&
echo refs/remotes/origin/master > expect &&
test_cmp expect output
)
'
test_expect_success 'fetch packed objects' '
cp -R "$HTTPD_DOCUMENT_ROOT_PATH"/repo.git "$HTTPD_DOCUMENT_ROOT_PATH"/repo_pack.git &&
(cd "$HTTPD_DOCUMENT_ROOT_PATH"/repo_pack.git &&
git --bare repack -a -d
) &&
git clone $HTTPD_URL/dumb/repo_pack.git
'
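# Corrupt the second 256-byte block of the pack (and, in the next test, of
# the idx) with dd; the client must notice the corruption, fail the fetch,
# and not keep the broken files.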
test_expect_success 'fetch notices corrupt pack' '
cp -R "$HTTPD_DOCUMENT_ROOT_PATH"/repo_pack.git "$HTTPD_DOCUMENT_ROOT_PATH"/repo_bad1.git &&
(cd "$HTTPD_DOCUMENT_ROOT_PATH"/repo_bad1.git &&
p=$(ls objects/pack/pack-*.pack) &&
chmod u+w $p &&
printf %0256d 0 | dd of=$p bs=256 count=1 seek=1 conv=notrunc
) &&
mkdir repo_bad1.git &&
(cd repo_bad1.git &&
git --bare init &&
test_must_fail git --bare fetch $HTTPD_URL/dumb/repo_bad1.git &&
test 0 = $(ls objects/pack/pack-*.pack | wc -l)
)
'
test_expect_success 'fetch notices corrupt idx' '
cp -R "$HTTPD_DOCUMENT_ROOT_PATH"/repo_pack.git "$HTTPD_DOCUMENT_ROOT_PATH"/repo_bad2.git &&
(cd "$HTTPD_DOCUMENT_ROOT_PATH"/repo_bad2.git &&
p=$(ls objects/pack/pack-*.idx) &&
chmod u+w $p &&
printf %0256d 0 | dd of=$p bs=256 count=1 seek=1 conv=notrunc
) &&
mkdir repo_bad2.git &&
(cd repo_bad2.git &&
git --bare init &&
test_must_fail git --bare fetch $HTTPD_URL/dumb/repo_bad2.git &&
test 0 = $(ls objects/pack | wc -l)
)
'
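# The server ends up with two packs. The dumb-http walker downloads pack
# .idx files to decide which packs it needs, so the second fetch must cope
# with the .idx file already obtained by the first fetch.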
test_expect_success 'fetch can handle previously-fetched .idx files' '
git checkout --orphan branch1 &&
echo base >file &&
git add file &&
git commit -m base &&
git --bare init "$HTTPD_DOCUMENT_ROOT_PATH"/repo_packed_branches.git &&
git push "$HTTPD_DOCUMENT_ROOT_PATH"/repo_packed_branches.git branch1 &&
git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH"/repo_packed_branches.git repack -d &&
git checkout -b branch2 branch1 &&
echo b2 >>file &&
git commit -a -m b2 &&
git push "$HTTPD_DOCUMENT_ROOT_PATH"/repo_packed_branches.git branch2 &&
git --git-dir="$HTTPD_DOCUMENT_ROOT_PATH"/repo_packed_branches.git repack -d &&
git --bare init clone_packed_branches.git &&
git --git-dir=clone_packed_branches.git fetch "$HTTPD_URL"/dumb/repo_packed_branches.git branch1:branch1 &&
git --git-dir=clone_packed_branches.git fetch "$HTTPD_URL"/dumb/repo_packed_branches.git branch2:branch2
'
test_expect_success 'did not use upload-pack service' '
test_might_fail grep '\''/git-upload-pack'\'' <"$HTTPD_ROOT_PATH"/access.log >act &&
: >exp &&
test_cmp exp act
'
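# The client relays the body of an HTTP error response only when the server
# declares it as text/plain (optionally with a charset), and re-encodes it
# to UTF-8 when necessary.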
test_expect_success 'git client shows text/plain errors' '
test_must_fail git clone "$HTTPD_URL/error/text" 2>stderr &&
grep "this is the error message" stderr
'
test_expect_success 'git client does not show html errors' '
test_must_fail git clone "$HTTPD_URL/error/html" 2>stderr &&
! grep "this is the error message" stderr
'
test_expect_success 'git client shows text/plain with a charset' '
test_must_fail git clone "$HTTPD_URL/error/charset" 2>stderr &&
grep "this is the error message" stderr
'
test_expect_success 'http error messages are reencoded' '
test_must_fail git clone "$HTTPD_URL/error/utf16" 2>stderr &&
grep "this is the error message" stderr
'
test_expect_success 'reencoding is robust to whitespace oddities' '
test_must_fail git clone "$HTTPD_URL/error/odd-spacing" 2>stderr &&
grep "this is the error message" stderr
'
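# check_language verifies the Accept-Language header derived from LANGUAGE:
# colon-separated entries become language tags with decreasing q-values,
# followed by a wildcard entry.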
check_language () {
case "$2" in
'')
>expect
;;
?*)
echo "=> Send header: Accept-Language: $1" >expect
;;
esac &&
GIT_TRACE_CURL=true \
LANGUAGE=$2 \
git ls-remote "$HTTPD_URL/dumb/repo.git" >output 2>&1 &&
tr -d '\015' <output |
sort -u |
sed -ne '/^=> Send header: Accept-Language:/ p' >actual &&
test_cmp expect actual
}
test_expect_success 'git client sends Accept-Language based on LANGUAGE' '
check_language "ko-KR, *;q=0.9" ko_KR.UTF-8'
test_expect_success 'git client sends Accept-Language correctly with unordinary LANGUAGE' '
check_language "ko-KR, *;q=0.9" "ko_KR:" &&
check_language "ko-KR, en-US;q=0.9, *;q=0.8" "ko_KR::en_US" &&
check_language "ko-KR, *;q=0.9" ":::ko_KR" &&
check_language "ko-KR, en-US;q=0.9, *;q=0.8" "ko_KR!!:en_US" &&
check_language "ko-KR, ja-JP;q=0.9, *;q=0.8" "ko_KR en_US:ja_JP"'
test_expect_success 'git client sends Accept-Language with many preferred languages' '
check_language "ko-KR, en-US;q=0.9, fr-CA;q=0.8, de;q=0.7, sr;q=0.6, \
ja;q=0.5, zh;q=0.4, sv;q=0.3, pt;q=0.2, *;q=0.1" \
ko_KR.EUC-KR:en_US.UTF-8:fr_CA:de.UTF-8@euro:sr@latin:ja:zh:sv:pt &&
check_language "ko-KR, en-US;q=0.99, fr-CA;q=0.98, de;q=0.97, sr;q=0.96, \
ja;q=0.95, zh;q=0.94, sv;q=0.93, pt;q=0.92, nb;q=0.91, *;q=0.90" \
ko_KR.EUC-KR:en_US.UTF-8:fr_CA:de.UTF-8@euro:sr@latin:ja:zh:sv:pt:nb
'
test_expect_success 'git client does not send an empty Accept-Language' '
GIT_TRACE_CURL=true LANGUAGE= git ls-remote "$HTTPD_URL/dumb/repo.git" 2>stderr &&
! grep "^=> Send header: Accept-Language:" stderr
'
test_expect_success 'remote-http complains cleanly about malformed urls' '
# do not actually issue "list" or other commands, as we do not
# want to rely on what curl would actually do with such a broken
# URL. This is just about making sure we do not segfault during
# initialization.
test_must_fail git remote-http http::/example.com/repo.git
'
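# http.followRedirects can be "true" (always follow), "false" (never), or
# "initial" (follow only the first request to a remote), which is the
# default.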
test_expect_success 'redirects can be forbidden/allowed' '
test_must_fail git -c http.followRedirects=false \
clone $HTTPD_URL/dumb-redir/repo.git dumb-redir &&
git -c http.followRedirects=true \
clone $HTTPD_URL/dumb-redir/repo.git dumb-redir 2>stderr
'
test_expect_success 'redirects are reported to stderr' '
# just look for a snippet of the redirected-to URL
test_i18ngrep /dumb/ stderr
'
test_expect_success 'non-initial redirects can be forbidden' '
test_must_fail git -c http.followRedirects=initial \
clone $HTTPD_URL/redir-objects/repo.git redir-objects &&
git -c http.followRedirects=true \
clone $HTTPD_URL/redir-objects/repo.git redir-objects
'
test_expect_success 'http.followRedirects defaults to "initial"' '
test_must_fail git clone $HTTPD_URL/redir-objects/repo.git default
'
# The goal is for a clone of the "evil" repository, which has no objects
# itself, to cause the client to fetch objects from the "victim" repository.
test_expect_success 'set up evil alternates scheme' '
victim=$HTTPD_DOCUMENT_ROOT_PATH/victim.git &&
git init --bare "$victim" &&
git -C "$victim" --work-tree=. commit --allow-empty -m secret &&
git -C "$victim" repack -ad &&
git -C "$victim" update-server-info &&
sha1=$(git -C "$victim" rev-parse HEAD) &&
evil=$HTTPD_DOCUMENT_ROOT_PATH/evil.git &&
git init --bare "$evil" &&
# do this by hand to avoid object existence check
printf "%s\\t%s\\n" $sha1 refs/heads/master >"$evil/info/refs"
'
# Here we'll just redirect via HTTP. In a real-world attack these would be on
# different servers, but we should reject it either way.
test_expect_success 'http-alternates is a non-initial redirect' '
echo "$HTTPD_URL/dumb/victim.git/objects" \
>"$evil/objects/info/http-alternates" &&
test_must_fail git -c http.followRedirects=initial \
clone $HTTPD_URL/dumb/evil.git evil-initial &&
git -c http.followRedirects=true \
clone $HTTPD_URL/dumb/evil.git evil-initial
'
# Curl supports a lot of protocols that we'd prefer not to allow
# http-alternates to use, but it's hard to test whether curl has
# accessed, say, the SMTP protocol, because we are not running an SMTP server.
# But we can check that it does not allow access to file://, which would
# otherwise allow this clone to complete.
test_expect_success 'http-alternates cannot point at funny protocols' '
echo "file://$victim/objects" >"$evil/objects/info/http-alternates" &&
test_must_fail git -c http.followRedirects=true \
clone "$HTTPD_URL/dumb/evil.git" evil-file
'
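# protocol.http.allow=user permits http only for direct, user-initiated
# requests; fetching via http-alternates is not user-initiated, so it must
# be rejected unless the policy is "always".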
test_expect_success 'http-alternates triggers not-from-user protocol check' '
echo "$HTTPD_URL/dumb/victim.git/objects" \
>"$evil/objects/info/http-alternates" &&
test_config_global http.followRedirects true &&
test_must_fail git -c protocol.http.allow=user \
clone $HTTPD_URL/dumb/evil.git evil-user &&
git -c protocol.http.allow=always \
clone $HTTPD_URL/dumb/evil.git evil-user
'
stop_httpd
test_done