Commit Graph

28 Commits

Each entry below lists the author, abbreviated commit SHA-1, commit message, and date.
Brian Gernhardt
580cf0a02e t5551: Remove header from curl cookie file
The URL included in the header appears to vary from one curl version
to another.  Since we only care about the final few lines, test only
those.  However, keep the blank line after the header in the
comparison so that any extra cookie lines are still caught.

Signed-off-by: Brian Gernhardt <brian@gernhardtsoftware.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-08-05 11:02:53 -07:00
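
A hedged sketch of the resulting check (file names and the line count
here are illustrative, not taken from the test):

    # compare only the tail of the cookie file; keep the blank line that
    # follows the header so extra cookie lines would still be caught
    tail -3 cookies.txt >cookies_tail.txt &&
    test_cmp expect_cookies.txt cookies_tail.txt
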
Dave Borowitz
912b2acf2f http: add http.savecookies option to write out HTTP cookies
HTTP servers may send Set-Cookie headers in a response and expect them
to be set on subsequent requests. By default, libcurl behavior is to
store such cookies in memory and reuse them across requests within a
single session. However, it may also make sense, depending on the
server and the cookies, to store them across sessions. Provide users
an option to enable this behavior, writing cookies out to the same
file specified in http.cookiefile.

Signed-off-by: Dave Borowitz <dborowitz@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-07-30 09:19:04 -07:00
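
As a usage sketch of the option described above (the cookie-file path
is just an example):

    # reuse cookies across sessions: read them from, and now also write
    # them back to, the file named by http.cookiefile
    git config http.cookiefile ~/.gitcookies
    git config http.savecookies true
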
Junio C Hamano
aaec1ad08a Merge branch 'jc/t5551-posix-sed-bre'
POSIX fix for a test script.

* jc/t5551-posix-sed-bre:
  t5551: do not use unportable sed '\+'
2013-06-02 15:56:08 -07:00
Junio C Hamano
5e2c7cd2c1 t5551: do not use unportable sed '\+'
The set-up step to prepare a repository with 50000 tags used a
non-portable '\+' to match one-or-more.

The error was not caught because the next test that uses that
repository did not even bother to check if these expected tags were
actually cloned to the resulting repository.

Fix the sed construct to use BRE and update the "clone" test that
wanted to test cloning from such a repository with many refs to
check the resulting repository.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-05-12 15:51:47 -07:00
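
A generic illustration of the portability fix (the pattern is a
placeholder, not the one from the test):

    # GNU sed accepts \+ for "one or more", POSIX BRE does not; spell it
    # as one occurrence followed by "zero or more" instead
    sed -e 's/[0-9]\+/N/'        # unportable
    sed -e 's/[0-9][0-9]*/N/'    # portable BRE
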
Junio C Hamano
77354d8cdc Merge branch 'jk/http-dumb-namespaces'
Allow smart-capable HTTP servers to be restricted via the
GIT_NAMESPACE mechanism when talking with commit-walker clients
(they already do so when talking with smart HTTP clients).

* jk/http-dumb-namespaces:
  http-backend: respect GIT_NAMESPACE with dumb clients
2013-04-18 11:49:21 -07:00
Junio C Hamano
54a3c67375 Merge branch 'jc/push-2.0-default-to-simple' (early part)
Adjust our tests for the upcoming migration of the default value of
the "push.default" configuration variable from "matching" to "simple".

* 'jc/push-2.0-default-to-simple' (early part):
  t5570: do not assume the "matching" push is the default
  t5551: do not assume the "matching" push is the default
  t5550: do not assume the "matching" push is the default
  t9401: do not assume the "matching" push is the default
  t9400: do not assume the "matching" push is the default
  t7406: do not assume the "matching" push is the default
  t5531: do not assume the "matching" push is the default
  t5519: do not assume the "matching" push is the default
  t5517: do not assume the "matching" push is the default
  t5516: do not assume the "matching" push is the default
  t5505: do not assume the "matching" push is the default
  t5404: do not assume the "matching" push is the default
2013-04-18 11:47:59 -07:00
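
A hedged sketch of the adjustment these test fixes make: pin the
behaviour explicitly instead of relying on the compiled-in default, so
the tests keep passing once the default becomes "simple".

    git config push.default matching
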
John Koleszar
6130f86dea http-backend: respect GIT_NAMESPACE with dumb clients
Filter the list of refs returned via the dumb HTTP protocol according
to the active namespace, consistent with other clients of the
upload-pack service.

Signed-off-by: John Koleszar <jkoleszar@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-09 18:06:44 -07:00
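
A hedged sketch of a server-side CGI wrapper that exercises this (the
namespace name is illustrative; other variables http-backend needs,
such as GIT_PROJECT_ROOT, are omitted):

    #!/bin/sh
    # advertise only refs under refs/namespaces/foo/ over both the smart
    # endpoints and, with this change, the dumb info/refs listing
    GIT_NAMESPACE=foo
    export GIT_NAMESPACE
    exec git http-backend
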
Brian Gernhardt
bd7ac5990c t5551: do not assume the "matching" push is the default
Signed-off-by: Brian Gernhardt <brian@gernhardtsoftware.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-03 10:02:40 -07:00
Junio C Hamano
af3aec4469 t5551: fix expected error output
We should probably get rid of the check of the message instead, but in
the meantime this should do.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-04 16:21:42 -08:00
Shawn Pearce
4656bf47fc Verify Content-Type from smart HTTP servers
Before parsing a suspected smart-HTTP response, verify the returned
Content-Type matches the standard. This protects a client from
attempting to process a payload that is not actually a smart-HTTP
server response.

JGit has been doing this check on all responses since the dawn of
time. I mistakenly failed to include it in git-core when smart HTTP
was introduced. At the time I didn't know how to get the Content-Type
from libcurl. I punted, meant to circle back and fix this, and just
plain forgot about it.

Signed-off-by: Shawn Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-04 10:22:36 -08:00
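
For reference, a hedged illustration of what the client now verifies
($url stands for any smart HTTP remote):

    # the smart HTTP media types the standard prescribes:
    #   GET  /info/refs?service=git-upload-pack
    #        -> application/x-git-upload-pack-advertisement
    #   POST /git-upload-pack
    #        -> application/x-git-upload-pack-result
    curl -si "$url/info/refs?service=git-upload-pack" |
        grep -i '^content-type:'
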
Junio C Hamano
a9bb4e55a3 Merge branch 'jk/maint-http-half-auth-fetch'
Fixes fetching from servers that ask for auth only during the actual
packing phase. This is not really a recommended configuration, but the
change also cleans up the code.

* jk/maint-http-half-auth-fetch:
  remote-curl: retry failed requests for auth even with gzip
  remote-curl: hoist gzip buffer size to top of post_rpc
2012-11-20 10:30:17 -08:00
Jeff King
2e736fd5e9 remote-curl: retry failed requests for auth even with gzip
Commit b81401c taught the post_rpc function to retry the
http request after prompting for credentials. However, it
did not handle two cases:

  1. If we have a large request, we do not retry. That's OK,
     since we would have sent a probe (with retry) already.

  2. If we are gzipping the request, we do not retry. That
     was considered OK, because the intended use was for
     push (e.g., listing refs is OK, but actually pushing
     objects is not), and we never gzip on push.

This patch teaches post_rpc to retry even a gzipped request.
This has two advantages:

  1. It is possible to configure a "half-auth" state for
     fetching, where the set of refs and their sha1s are
     advertised, but one cannot actually fetch objects.

     This is not a recommended configuration, as it leaks
     some information about what is in the repository (e.g.,
     an attacker can try brute-forcing possible content in
     your repository and checking whether it matches your
     branch sha1). However, it can be slightly more
     convenient, since a no-op fetch will not require a
     password at all.

  2. It future-proofs us should we decide to ever gzip more
     requests.

Signed-off-by: Jeff King <peff@peff.net>
2012-10-31 07:45:13 -04:00
Junio C Hamano
fb1e4a85bf Merge branch 'jk/smart-http-switch'
Allows users to turn off smart-http when talking to dumb-only
servers.

* jk/smart-http-switch:
  remote-curl: let users turn off smart http
  remote-curl: rename is_http variable
2012-09-29 22:28:25 -07:00
Jeff King
02572c2e3a remote-curl: let users turn off smart http
Usually there is no need for users to specify whether an
http remote is smart or dumb; the protocol is designed so
that a single initial request is made, and the client can
determine the server's capability from the response.

However, some misconfigured dumb-only servers may not like
the initial request by a smart client, as it contains a
query string. Until recently, commit 703e6e7 worked around
this by making a second request. However, that commit was
recently reverted due to its side effect of masking the
initial request's error code.

Since git has had that workaround for several years, we
don't know exactly how many such misconfigured servers are
out there. The reversion of 703e6e7 assumes they are rare
enough not to worry about. Still, that reversion leaves
somebody who does run into such a server with no escape
hatch at all. Let's give them an environment variable they
can tweak to perform the "dumb" request.

This is intentionally not a documented interface. It's
overly simple and is really there for debugging in case
somebody does complain about git not working with their
server. A real user-facing interface would entail a
per-remote or per-URL config variable.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-09-21 10:33:11 -07:00
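
A hedged usage sketch (the variable name comes from the patch itself,
not from this message; the URL is an example):

    # force the old dumb-protocol request against a misconfigured server
    GIT_SMART_HTTP=0 git clone http://example.com/project.git
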
Shawn O. Pearce
aa90b9697f Enable info/refs gzip decompression in HTTP client
Some HTTP servers try to use gzip compression on the /info/refs
request to save transfer bandwidth. Repositories with many tags
may find the /info/refs request can be gzipped to be 50% of the
original size due to the few but often repeated bytes used (hex
SHA-1 and commonly digits in tag names).

For most HTTP requests, enable "Accept-Encoding: gzip", ensuring
the /info/refs payload can use this encoding format.

Only request gzip encoding from servers. Although deflate is
supported by libcurl, most servers have standardized on gzip
encoding for compression as that is what most browsers support.
Asking for deflate increases request sizes by a few bytes, but is
unlikely to ever be used by a server.

Disable the Accept-Encoding header on probe RPCs as response bodies
are supposed to be exactly 4 bytes long, "0000". The HTTP headers
requesting and indicating compression use more space than the data
transferred in the body.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-09-20 10:26:50 -07:00
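
A hedged way to observe the behaviour with the curl command line ($url
is any smart HTTP remote):

    # --compressed sends Accept-Encoding and transparently decodes a
    # gzipped ref advertisement; -v shows both headers on stderr
    curl -sv --compressed -o /dev/null \
        "$url/info/refs?service=git-upload-pack"
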
Junio C Hamano
7d9483c299 Merge branch 'jk/maint-http-half-auth-push' into maint-1.7.11
Pushing to a smart HTTP server with recent Git fails without the
username in the URL to force authentication when the server is
configured to allow GET anonymously while requiring authentication
for POST.

* jk/maint-http-half-auth-push:
  http: prompt for credentials on failed POST
  http: factor out http error code handling
  t: test http access to "half-auth" repositories
  t: test basic smart-http authentication
  t/lib-httpd: recognize */smart/* repos as smart-http
  t/lib-httpd: only route auth/dumb to dumb repos
  t5550: factor out http auth setup
  t5550: put auth-required repo in auth/dumb
2012-09-12 13:58:23 -07:00
Jeff King
4c71009da6 t: test http access to "half-auth" repositories
Some sites set up http access to repositories such that
fetching is anonymous and unauthenticated, but pushing is
authenticated. While there are multiple ways to do this, the
technique advertised in the git-http-backend manpage is to
block access to locations matching "/git-receive-pack$".

Let's emulate that advice in our test setup, which makes it
clear that this advice does not actually work.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-08-27 10:49:09 -07:00
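
A hedged sketch of the access pattern under test (URL and branch are
illustrative):

    # reads are anonymous, but the push endpoint (git-receive-pack)
    # demands credentials
    git ls-remote http://example.com/half-auth.git      # succeeds anonymously
    git push http://example.com/half-auth.git master    # prompts for auth
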
Jeff King
6ac2b3aeb9 t: test basic smart-http authentication
We do not currently test authentication over smart-http at
all. In theory, it should work exactly as it does for dumb
http (which we do test). It does indeed work for these
simple tests, but this patch lays the groundwork for more
complex tests in future patches.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-08-27 10:49:09 -07:00
Michał Kiedrowicz
d17cf5f3a3 tests: Introduce test_seq
Jeff King wrote:

	The seq command is GNU-ism, and is missing at least in older BSD
	releases and their derivatives, not to mention antique
	commercial Unixes.

	We already purged it in b3431bc (Don't use seq in tests, not
	everyone has it, 2007-05-02), but a few new instances have crept
	in. They went unnoticed because they are in scripts that are not
	run by default.

Replace them with test_seq, which is implemented with a Perl snippet
(proposed by Jeff).  This is better than inlining this snippet
everywhere it's needed because it's easier to read and it's easier
to change the implementation (e.g. to C) if we ever decide to remove
Perl from the test suite.

Note that test_seq is not a complete replacement for seq(1).  It
just has what we need now; in addition, it makes it possible for
us to do something like "test_seq a m" if we wanted to in the
future.

There are also many places that do `for i in 1 2 3 ...`, but I'm not sure
whether it's worth converting them to test_seq; that would mean running
more Perl processes.

Signed-off-by: Michał Kiedrowicz <michal.kiedrowicz@gmail.com>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-08-04 16:06:07 -07:00
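
A minimal usage sketch of the helper:

    # portable replacement for the GNU-only seq(1)
    for i in $(test_seq 1 5)
    do
        echo "line $i"
    done
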
Junio C Hamano
7096b6486e tests: enclose $PERL_PATH in double quotes
Otherwise it will be split at a space after "Program" when it is set
to "\\Program Files\perl" or something silly like that.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-06-24 21:56:13 -07:00
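
A small sketch of the difference (the path is the example from the
message above):

    # with PERL_PATH="\\Program Files\perl", the unquoted form splits at
    # the space after "Program"; quoting keeps it as one word
    "$PERL_PATH" -e 'print "ok\n"'
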
Vincent van Ravesteijn
a3428205e6 t: Replace 'perl' by $PERL_PATH
GIT-BUILD-OPTIONS defines PERL_PATH to be used in the test suite. Only a
few tests actually use this variable when perl is needed. The other
tests just call 'perl', and it might happen that the wrong perl
interpreter is used.

This becomes problematic on Windows, when the perl interpreter that is
compiled and installed on the Windows system is used, because this perl
interpreter might introduce some unexpected LF->CRLF conversions.

This patch makes sure that $PERL_PATH is used everywhere in the test suite
and that the correct perl interpreter is used.

Signed-off-by: Vincent van Ravesteijn <vfr@lyx.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-06-12 09:30:41 -07:00
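
The substitution pattern, sketched on a hypothetical test snippet:

    # before: perl -e 'print "foo\n"' >expect
    # after: invoke the interpreter recorded in GIT-BUILD-OPTIONS
    "$PERL_PATH" -e 'print "foo\n"' >expect
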
Ivan Todoroski
7103d2543a remote-curl: main test case for the OS command line overflow
This is the main test case for the original problem that triggered this
patch series. We create a repo with 50k tags and then test whether
git-clone over the smart HTTP protocol succeeds.

Note that we construct the repo in a slightly different way than the
original script used to reproduce the problem. This is because the
original script just created 50k tags all pointing to the same commit,
so if there was a bug where remote-curl.c was not passing all the refs
to fetch-pack we wouldn't know. The clone would succeed even if only one
tag was passed, because all the other tags were pointing at the same SHA
and would be considered present.

Instead we create a repo with 50k independent (dangling) commits and
then tag each of those commits with a unique tag. This way if one of the
tags is not given to fetch-pack, later stages of the clone would
complain about it.

This allows us to test both that the command line overflow was fixed, as
well as that it was fixed in a way that doesn't leave out any of the
refs.

Signed-off-by: Ivan Todoroski <grnch@gmx.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-04-10 14:49:18 -07:00
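
A hedged sketch of the setup idea (the real test presumably builds the
refs more efficiently, e.g. with fast-import; tag names are
illustrative):

    # 50000 distinct parentless commits, each reachable only via its own
    # unique tag, so a dropped ref makes the clone fail
    tree=$(git write-tree) &&
    for i in $(test_seq 50000)
    do
        commit=$(echo "unrelated commit $i" | git commit-tree "$tree") &&
        git tag "many-$i" "$commit" || break
    done
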
Tay Ray Chuan
311e2ea006 smart-http: Don't change POST to GET when following redirect
For a long time (29508e1 "Isolate shared HTTP request functionality", Fri
Nov 18 11:02:58 2005), we've followed HTTP redirects with
CURLOPT_FOLLOWLOCATION.

However, when the remote HTTP server returns a redirect, the default
libcurl action is to change a POST request into a GET request while
following it, and the remote http backend does not expect that.

Fix this by telling libcurl to always keep the request as type POST with
CURLOPT_POSTREDIR.

For users of libcurl older than 7.19.1, use CURLOPT_POST301 instead,
which only preserves the POST method across 301 redirects, not 302s.

Signed-off-by: Andreas Schwab <schwab@linux-m68k.org>
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-09-27 11:38:55 -07:00
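
The curl command line has an analogous pair of switches, shown here as
a hedged illustration of the behaviour ($url and the request file are
placeholders):

    # keep the method as POST when the server answers 301 or 302
    curl -L --post301 --post302 \
        -H 'Content-Type: application/x-git-upload-pack-request' \
        --data-binary @request.bin "$url/git-upload-pack"
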
Ævar Arnfjörð Bjarmason
fadb5156e4 tests: Skip tests in a way that makes sense under TAP
SKIP messages are now part of the TAP plan. A TAP harness now knows
why a particular test was skipped and can report that information. The
non-TAP harness built into Git's test-lib did nothing special with
these messages, and is unaffected by these changes.

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-06-25 10:08:20 -07:00
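
The idiom this caters to, as a hedged sketch (the prerequisite name is
illustrative):

    # record *why* the whole file is skipped so a TAP harness can report it
    if ! test_have_prereq EXPENSIVE
    then
        skip_all='skipping http tests, EXPENSIVE not set'
        test_done
    fi
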
Shawn O. Pearce
8efa5f629e remote-curl: Fix Accept header for smart HTTP connections
We actually expect to see an application/x-git-upload-pack-result
but we lied and said we Accept *-response.  This was a typo on my
part when I was writing the code.

Fortunately the wrong Accept header had no real impact, as the
deployed git-http-backend servers were not testing the Accept
header before they returned their content.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-01-12 13:09:44 -08:00
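
A hedged way to confirm the corrected header on the wire ($url is any
smart HTTP remote):

    # the request trace should now advertise the -result media type
    GIT_CURL_VERBOSE=1 git fetch "$url" 2>&1 |
        grep 'Accept: application/x-git-upload-pack-result'
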
Shawn O. Pearce
203666352f t5551-http-fetch: Work around broken Accept header in libcurl
Unfortunately at least one version of libcurl has a bug causing
it to include "Accept: */*" in the same POST request where we have
already asked for "Accept: application/x-git-upload-pack-response".

This is a bug in libcurl, not in Git or in our test vector.  The
application has explicitly asked the server for a single content
type, but libcurl has mistakenly also told the server the client
application will accept */*, which is any content type.

Based on the libcurl change log, this "Accept: */*" header bug
may have been fixed in version 7.18.1 released March 30, 2008:

  http://curl.haxx.se/changes.html#7_18_1

Rather than require users to upgrade libcurl, we change the test
vector to trim this line out of the 2nd request.

Reported-by: Tarmigan <tarmigan+git@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-11-09 16:41:13 -08:00
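
The workaround idea, sketched with hypothetical file names:

    # drop the stray header libcurl added before comparing the trace
    grep -v '^Accept: \*/\*' actual >actual.trimmed &&
    test_cmp expect actual.trimmed
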
Shawn O. Pearce
0a8fcbdca2 t5551-http-fetch: Work around some libcurl versions
Some versions of libcurl format their output differently from other
versions when GIT_CURL_VERBOSE is set.  At least one variant
(version unknown but likely pre-7.18.1) reports the POST payload to
stderr, and omits the blank line after each HTTP request/response.
We clip these lines out of the stderr output now before doing the
compare, so we aren't surprised by this trivial difference.

Reported-by: Tarmigan <tarmigan+git@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-11-09 16:40:49 -08:00
Shawn O. Pearce
7da4e2280c test smart http fetch and push
The top level directory "/smart/" of the test Apache server is mapped
through our git-http-backend CGI, but uses the same underlying
repository space as the server's document root.  This is the simplest
installation possible.

Server logs are checked to verify the client has accessed only the
smart URLs during the test.  During fetch testing the headers are
also logged from libcurl to ensure we are making a reasonably sane
HTTP request, and getting back reasonably sane response headers
from the CGI.

When validating the request headers used during smart fetch we munge
away the actual Content-Length and replace it with the placeholder
"xxx".  This avoids unnecessary varability in the test caused by
an unrelated change in the requested capabilities in the first want
line of the request.  However, we still want to look for and verify
that Content-Length was used, because smaller payloads should be
using Content-Length and not "Transfer-Encoding: chunked".

When validating the server response headers we must discard both
Content-Length and Transfer-Encoding, as Apache2 can use either
format to return our response.

During development of this test I observed Apache returning both
forms, depending on when the processes got CPU time.  If our CGI
returned the pack data quickly, Apache just buffered the whole
thing and returned a Content-Length.  If our CGI took just a bit
too long to complete, Apache flushed its buffer and instead used
"Transfer-Encoding: chunked".

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-11-04 17:58:16 -08:00
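
A hedged sketch of the munging described above (file names are
illustrative):

    # keep the fact that Content-Length was sent, hide its changing value
    sed -e 's/^\(Content-Length:\) .*/\1 xxx/' \
        <actual.headers >actual.munged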