Commit Graph

256 Commits

Author SHA1 Message Date
Jeff King
b0e3e2d06b http: prefer CURLOPT_SEEKFUNCTION to CURLOPT_IOCTLFUNCTION
The IOCTLFUNCTION option has been deprecated, and generates a compiler
warning in recent versions of curl. We can switch to using SEEKFUNCTION
instead. It was added in 2008 via curl 7.18.0; our INSTALL file already
indicates we require at least curl 7.19.4.

But there's one catch: curl says we should use CURL_SEEKFUNC_{OK,FAIL},
and those didn't arrive until 7.19.5. One workaround would be to use a
bare 0/1 here (or define our own macros).  But let's just bump the
minimum required version to 7.19.5. That version is only a minor version
bump from our existing requirement, and is only a 2 month time bump for
versions that are almost 13 years old. So it's not likely that anybody
cares about the distinction.

Switching means we have to rewrite the ioctl functions into seek
functions. In some ways they are simpler (seeking is the only
operation), but in some ways more complex (the ioctl allowed only a full
rewind, but now we can seek to arbitrary offsets).

Curl will only ever use SEEK_SET (per their documentation), so I didn't
bother implementing anything else, since it would naturally be
completely untested. This seems unlikely to change, but I added an
assertion just in case.

Likewise, I doubt curl will ever try to seek outside of the buffer sizes
we've told it, but I erred on the defensive side here, rather than do an
out-of-bounds read.
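
For the record, the conversion boils down to a seek callback along these
lines (a minimal sketch only; the buffer struct and field names are
placeholders, not the actual http.c types):

  #include <assert.h>
  #include <stdio.h>         /* SEEK_SET */
  #include <curl/curl.h>

  struct rpc_buf {           /* placeholder for the real buffer state */
          const char *data;
          size_t len;
          size_t pos;
  };

  static int buffer_seek(void *clientp, curl_off_t offset, int origin)
  {
          struct rpc_buf *buf = clientp;

          /* curl documents that it only ever uses SEEK_SET */
          assert(origin == SEEK_SET);

          /* refuse to seek outside the buffer we told curl about */
          if (offset < 0 || (size_t)offset > buf->len)
                  return CURL_SEEKFUNC_FAIL;
          buf->pos = (size_t)offset;
          return CURL_SEEKFUNC_OK;
  }

  /* registered with:
   *   curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, buffer_seek);
   *   curl_easy_setopt(curl, CURLOPT_SEEKDATA, &buf);
   */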

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
2023-03-12 20:31:54 +01:00
Srinidhi Kaushik
3b990aa645 push: parse and set flag for "--force-if-includes"
The previous commit added the necessary machinery to implement the
"--force-if-includes" protection, when "--force-with-lease" is used
without giving the exact object the remote still ought to have. Surface
the feature by adding a command line option and a configuration
variable to enable it.

 - Add a flag: "TRANSPORT_PUSH_FORCE_IF_INCLUDES" to indicate that the
   new option was passed from the command line or via configuration
   settings; update command line and configuration parsers to set the
   new flag accordingly.

 - Introduce a new configuration option "push.useForceIfIncludes", which
   is equivalent to setting "--force-if-includes" in the command line.

 - Update "remote-curl" to recognize and pass this option to "send-pack"
   when enabled.

 - Update "advise" to catch the reject reason "REJECT_REF_NEEDS_UPDATE",
   set when the ref status is "REF_STATUS_REJECT_REMOTE_UPDATED" and
   (optionally) print a help message when the push fails.

 - The new option is a "no-op" in the following scenarios:
    * When used without "--force-with-lease".
    * When used with "--force-with-lease", and if the expected commit
      on the remote side is specified as an argument.
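
Roughly, the command-line and flag wiring look like this (an illustrative
sketch; only the option name, config name and flag are taken from the
patch, the surrounding code is not the actual builtin/push.c):

  static int use_force_if_includes;  /* also set by push.useForceIfIncludes */

  /* in the options[] table */
  OPT_BOOL(0, "force-if-includes", &use_force_if_includes,
           N_("require remote updates to be integrated locally")),

  /* before handing off to the transport layer */
  if (use_force_if_includes)
          flags |= TRANSPORT_PUSH_FORCE_IF_INCLUDES;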

Signed-off-by: Srinidhi Kaushik <shrinidhi.kaushik@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-10-03 09:59:19 -07:00
Junio C Hamano
b4100f366c Merge branch 'jt/lazy-fetch'
Updates to on-demand fetching code in lazily cloned repositories.

* jt/lazy-fetch:
  fetch: no FETCH_HEAD display if --no-write-fetch-head
  fetch-pack: remove no_dependents code
  promisor-remote: lazy-fetch objects in subprocess
  fetch-pack: do not lazy-fetch during ref iteration
  fetch: only populate existing_refs if needed
  fetch: avoid reading submodule config until needed
  fetch: allow refspecs specified through stdin
  negotiator/noop: add noop fetch negotiator
2020-09-03 12:37:04 -07:00
Jonathan Tan
42d418df90 (various): document from_promisor parameter
88e2f9ed8e ("introduce fetch-object: fetch one promisor object",
2017-12-05) plumbed through the from_promisor parameter but did
not document it everywhere it appeared. Add the documentation.

(It also plumbed through the no_dependents parameter, but I have left
that alone because it is being removed in a commit under review [1].)

[1] https://lore.kernel.org/git/e8f16d69089a5011c355d5939c56fa53b7a1eb2d.1597184949.git.jonathantanmy@google.com/

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-08-24 17:30:08 -07:00
Jonathan Tan
9dfa8dbeee fetch-pack: remove no_dependents code
Now that Git has switched to using a subprocess to lazy-fetch missing
objects, remove the no_dependents code as it is no longer used.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-08-18 16:46:53 -07:00
Junio C Hamano
46b225f153 Merge branch 'jk/strvec'
The argv_array API is useful for not just managing argv but any
"vector" (NULL-terminated array) of strings, and has seen adoption
to a certain degree.  It has been renamed to "strvec" to reduce the
barrier to adoption.

* jk/strvec:
  strvec: rename struct fields
  strvec: drop argv_array compatibility layer
  strvec: update documentation to avoid argv_array
  strvec: fix indentation in renamed calls
  strvec: convert remaining callers away from argv_array name
  strvec: convert more callers away from argv_array name
  strvec: convert builtin/ callers away from argv_array name
  quote: rename sq_dequote_to_argv_array to mention strvec
  strvec: rename files from argv-array to strvec
  argv-array: rename to strvec
  argv-array: use size_t for count and alloc
2020-08-10 10:23:57 -07:00
Jeff King
d70a9eb611 strvec: rename struct fields
The "argc" and "argv" names made sense when the struct was argv_array,
but now they're just confusing. Let's rename them to "nr" (which we use
for counts elsewhere) and "v" (which is rather terse, but reads well
when combined with typical variable names like "args.v").

Note that we have to update all of the callers immediately. Playing
tricks with the preprocessor is hard here, because we wouldn't want to
rewrite unrelated tokens.
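
In other words, callers now look like this (a small sketch of the renamed
fields in use; the loop itself is illustrative):

  struct strvec args = STRVEC_INIT;
  size_t i;

  strvec_pushl(&args, "one", "two", NULL);
  for (i = 0; i < args.nr; i++)        /* formerly args.argc */
          printf("%s\n", args.v[i]);   /* formerly args.argv[i] */
  strvec_clear(&args);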

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-07-30 19:18:06 -07:00
Junio C Hamano
c2796ac1c2 Merge branch 'bc/push-cas-cquoted-refname' into master
Pushing a ref whose name contains a non-ASCII character with the
"--force-with-lease" option did not work over smart HTTP protocol,
which has been corrected.

* bc/push-cas-cquoted-refname:
  remote-curl: make --force-with-lease work with non-ASCII ref names
2020-07-30 13:20:34 -07:00
Jeff King
f6d8942b1f strvec: fix indentation in renamed calls
Code which split an argv_array call across multiple lines, like:

  argv_array_pushl(&args, "one argument",
                   "another argument", "and more",
		   NULL);

was recently mechanically renamed to use strvec, which results in
mis-matched indentation like:

  strvec_pushl(&args, "one argument",
                   "another argument", "and more",
		   NULL);

Let's fix these up to align the arguments with the opening paren. I did
this manually by sifting through the results of:

  git jump grep 'strvec_.*,$'

and liberally applying my editor's auto-format. Most of the changes are
of the form shown above, though I also normalized a few that had
originally used a single-tab indentation (rather than our usual style of
aligning with the open paren). I also rewrapped a couple of obvious
cases (e.g., where previously too-long lines became short enough to fit
on one), but I wasn't aggressive about it. In cases broken to three or
more lines, the grouping of arguments is sometimes meaningful, and it
wasn't worth my time or reviewer time to ponder each case individually.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-07-28 15:02:18 -07:00
Jeff King
c972bf4cf5 strvec: convert remaining callers away from argv_array name
We eventually want to drop the argv_array name and just use strvec
consistently. There's no particular reason we have to do it all at once,
or care about interactions between converted and unconverted bits.
Because of our preprocessor compat layer, the names are interchangeable
to the compiler (so even a definition and declaration using different
names is OK).

This patch converts all of the remaining files, as the resulting diff is
reasonably sized.

The conversion was done purely mechanically with:

  git ls-files '*.c' '*.h' |
  xargs perl -i -pe '
    s/ARGV_ARRAY/STRVEC/g;
    s/argv_array/strvec/g;
  '

We'll deal with any indentation/style fallouts separately.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-07-28 15:02:18 -07:00
Jeff King
dbbcd44fb4 strvec: rename files from argv-array to strvec
This requires updating #include lines across the code-base, but that's
all fairly mechanical, and was done with:

  git ls-files '*.c' '*.h' |
  xargs perl -i -pe 's/argv-array.h/strvec.h/'

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-07-28 15:02:17 -07:00
brian m. carlson
cd85b447bf remote-curl: make --force-with-lease work with non-ASCII ref names
When we invoke a remote transport helper and pass an option with an
argument, we quote the argument as a C-style string if necessary.  This
is the case for the cas option, which implements the --force-with-lease
command-line flag, when we're passing a non-ASCII refname.

However, the remote curl helper isn't designed to parse such an
argument, meaning that if we try to use --force-with-lease with an HTTP
push and a non-ASCII refname, we get an error like this:

  error: cannot parse expected object name '0000000000000000000000000000000000000000"'

Note the double quote, which get_oid has reminded us is not valid in a
hex object ID.

Even if we had been able to parse it, we would send the wrong data to
the server: we'd send an escaped ref, which would not behave as the user
wanted and might accidentally result in updating or deleting a ref we
hadn't intended.

Since we need to expect a quoted C-style string here, just check if the
first argument is a double quote, and if so, unquote it.  Note that if
the refname contains a double quote, then we will have double-quoted it
already, so there is no ambiguity.

We test for this case only in the smart protocol, since the DAV-based
protocol does not support this capability.  We use UTF-8
because this is nicer in our tests and friendlier to Windows, but the
code should work for all non-ASCII refs.

While we're at it, since the name of the option is now well established
and isn't going to change, let's inline it instead of using the #define
constant.
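
The unquoting step amounts to something like this (a sketch; the helper
and its error message are illustrative, not the exact remote-curl.c code):

  static const char *maybe_unquote(const char *value, struct strbuf *scratch)
  {
          if (*value != '"')
                  return value;               /* plain, unquoted value */
          if (unquote_c_style(scratch, value, NULL))
                  die(_("invalid quoting in value: '%s'"), value);
          return scratch->buf;
  }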

Reported-by: Frej Bjon <frej.bjon@nemit.fi>
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-07-20 21:05:16 -07:00
Junio C Hamano
12210859da Merge branch 'bc/sha-256-part-2'
SHA-256 migration work continues.

* bc/sha-256-part-2: (44 commits)
  remote-testgit: adapt for object-format
  bundle: detect hash algorithm when reading refs
  t5300: pass --object-format to git index-pack
  t5704: send object-format capability with SHA-256
  t5703: use object-format serve option
  t5702: offer an object-format capability in the test
  t/helper: initialize the repository for test-sha1-array
  remote-curl: avoid truncating refs with ls-remote
  t1050: pass algorithm to index-pack when outside repo
  builtin/index-pack: add option to specify hash algorithm
  remote-curl: detect algorithm for dumb HTTP by size
  builtin/ls-remote: initialize repository based on fetch
  t5500: make hash independent
  serve: advertise object-format capability for protocol v2
  connect: parse v2 refs with correct hash algorithm
  connect: pass full packet reader when parsing v2 refs
  Documentation/technical: document object-format for protocol v2
  t1302: expect repo format version 1 for SHA-256
  builtin/show-index: provide options to determine hash algo
  t5302: modernize test formatting
  ...
2020-07-06 22:09:13 -07:00
brian m. carlson
97997e6ad2 remote-curl: avoid truncating refs with ls-remote
Normally, the remote-curl transport helper is aware of the hash
algorithm we're using because we're in a repo with the appropriate hash
algorithm set. However, when using git ls-remote outside of a
repository, we won't have initialized the hash algorithm properly, so
use hash_to_hex_algop to print the ref corresponding to the algorithm
we've detected.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-06-19 14:04:08 -07:00
brian m. carlson
ac093d0790 remote-curl: detect algorithm for dumb HTTP by size
When reading the info/refs file for a repository, we have no explicit
way to detect which hash algorithm is in use because the file doesn't
provide one. Detect the hash algorithm in use by the size of the first
object ID.

If we have an empty repository, we don't know what the hash algorithm is
on the remote side, so default to whatever the local side has
configured.  Without doing this, we cannot clone an empty repository
since we don't know its hash algorithm.  Test this case appropriately,
since we currently have no tests for cloning an empty repository with
the dumb HTTP protocol.

We anonymize the URL like elsewhere in the function in case the user has
decided to include a secret in the URL.
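
The detection amounts to measuring the first object ID in info/refs,
something like the sketch below (assuming hash_algo_by_length() takes the
binary hash length; the surrounding variable names are illustrative):

  static const struct git_hash_algo *detect_hash_algo(const char *buf, size_t len)
  {
          const char *end = memchr(buf, '\t', len);
          int algo;

          if (!end)
                  return the_hash_algo;  /* empty repo: use local default */
          algo = hash_algo_by_length((end - buf) / 2);
          if (algo == GIT_HASH_UNKNOWN)
                  return NULL;
          return &hash_algos[algo];
  }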

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-06-19 14:04:08 -07:00
brian m. carlson
7f60501775 remote-curl: implement object-format extensions
Implement the object-format extensions that let us determine the hash
algorithm in use when pushing, pulling, and fetching.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-05-27 10:07:06 -07:00
Denton Liu
b0df0c16ea stateless-connect: send response end packet
Currently, remote-curl acts as a proxy and blindly forwards packets
between an HTTP server and fetch-pack. In the case of a stateless RPC
connection where the connection is terminated before the transaction is
complete, remote-curl will blindly forward the packets before waiting on
more input from fetch-pack. Meanwhile, fetch-pack will read the
transaction and continue reading, expecting more input to continue the
transaction. This results in a deadlock between the two processes.

This can be seen in the following command which does not terminate:

	$ git -c protocol.version=2 clone https://github.com/git/git.git --shallow-since=20151012
	Cloning into 'git'...

whereas the v1 version does terminate as expected:

	$ git -c protocol.version=1 clone https://github.com/git/git.git --shallow-since=20151012
	Cloning into 'git'...
	fatal: the remote end hung up unexpectedly

Instead of blindly forwarding packets, make remote-curl insert a
response end packet after proxying the responses from the remote server
when using stateless_connect(). On the RPC client side, ensure that each
response ends as described.

A separate control packet is chosen because we need to be able to
differentiate between what the remote server sends and remote-curl's
control packets. By ensuring in the remote-curl code that a server
cannot send response end packets, we prevent a malicious server from
being able to perform a denial of service attack in which they spoof a
response end packet and cause the described deadlock to happen.
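
The control packet itself is just the special length "0002"; remote-curl
emits it once the server's response has been fully forwarded, along the
lines of this sketch (the helper name is illustrative):

  static void send_response_end(int fd)
  {
          /*
           * "0002" is the response-end packet, distinct from "0000"
           * (flush) and "0001" (delim), and never forwarded from a
           * server, so it cannot be spoofed.
           */
          if (write_in_full(fd, "0002", 4) < 0)
                  die_errno(_("unable to write response end packet"));
  }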

Reported-by: Force Charlie <charlieio@outlook.com>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-05-24 16:26:00 -07:00
Denton Liu
0181b600a6 pkt-line: define PACKET_READ_RESPONSE_END
In a future commit, we will use PACKET_READ_RESPONSE_END to separate
messages proxied by remote-curl. To prepare for this, add the
PACKET_READ_RESPONSE_END enum value.

In switch statements that need a case added, die() or BUG() when a
PACKET_READ_RESPONSE_END is unexpected. Otherwise, mirror how
PACKET_READ_DELIM is implemented (especially in cases where packets are
being forwarded).
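
A sketch of the new value and of how an unexpected occurrence is handled
(the enum mirrors pkt-line.h; the BUG() wording is illustrative):

  enum packet_read_status {
          PACKET_READ_EOF,
          PACKET_READ_NORMAL,
          PACKET_READ_FLUSH,
          PACKET_READ_DELIM,
          PACKET_READ_RESPONSE_END,
  };

  switch (status) {
  case PACKET_READ_RESPONSE_END:
          BUG("unexpected response end packet");
  default:
          break;  /* other statuses handled as before */
  }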

Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-05-24 16:26:00 -07:00
Denton Liu
74b082ad34 remote-curl: error on incomplete packet
Currently, remote-curl acts as a proxy and blindly forwards packets
between an HTTP server and fetch-pack. In the case of a stateless RPC
connection where the connection is terminated with a partially written
packet, remote-curl will blindly send the partially written packet
before waiting on more input from fetch-pack. Meanwhile, fetch-pack will
read the partial packet and continue reading, expecting more input. This
results in a deadlock between the two processes.

For a stateless connection, inspect packets before sending them and
error out if a packet line is incomplete.

Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-05-24 16:26:00 -07:00
Denton Liu
04cc91abcb remote-curl: remove label indentation
In the codebase, labels are aligned to the leftmost column. Remove the
space-indentation from `free_specs:` to conform to this.

Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-05-18 11:35:06 -07:00
Denton Liu
51ca7f89f8 remote-curl: fix typo
Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-05-18 11:34:24 -07:00
Jeff King
fe299ec5ae oid_array: rename source file from sha1-array
We renamed the actual data structure in 910650d2f8 (Rename sha1_array to
oid_array, 2017-03-31), but the file is still called sha1-array. Besides
being slightly confusing, it makes it more annoying to grep for leftover
occurrences of "sha1" in various files, because the header is included
in so many places.

Let's complete the transition by renaming the source and header files
(and fixing up a few comment references).

I kept the "-" in the name, as that seems to be our style; cf.
fc1395f4a4 (sha1_file.c: rename to use dash in file name, 2018-04-10).
We also have oidmap.h and oidset.h without any punctuation, but those
are "struct oidmap" and "struct oidset" in the code. We _could_ make
this "oidarray" to match, but somehow it looks uglier to me because of
the length of "array" (plus it would be a very invasive patch for little
gain).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-03-30 10:59:08 -07:00
Junio C Hamano
4a5c3e10f2 Merge branch 'rs/show-progress-in-dumb-http-fetch'
"git fetch" over HTTP walker protocol did not show any progress
output.  We inherently do not know how much work remains, but still
we can show something not to bore users.

* rs/show-progress-in-dumb-http-fetch:
  remote-curl: show progress for fetches over dumb HTTP
2020-03-09 11:21:21 -07:00
René Scharfe
7655b4119d remote-curl: show progress for fetches over dumb HTTP
Fetching over dumb HTTP transport doesn't show any progress, even with
the option --progress.  If the connection is slow or there is a lot of
data to get then this can take a long time while the user is left to
wonder if git got stuck.

We don't know the number of objects to fetch at the outset, but we can
count the ones we got.  Show an open-ended progress indicator based on
that number if the user asked for it.
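
A sketch of the open-ended counter (passing a total of 0 means "unknown";
the fetch loop and flag here are hypothetical):

  struct progress *progress = NULL;
  uint64_t nr = 0;

  if (show_progress)
          progress = start_progress(_("Fetching objects"), 0);
  while (fetch_one_object())                  /* hypothetical loop */
          display_progress(progress, ++nr);
  stop_progress(&progress);                   /* no-op if NULL */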

Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-03-03 13:15:40 -08:00
Junio C Hamano
145136a95a C: use skip_prefix() to avoid hardcoded string length
We often skip an optional prefix in a string with a hardcoded
constant, e.g.

	if (starts_with(string, "prefix"))
		string += 6;

which is less error prone when written

	skip_prefix(string, "prefix", &string);

Note that this changes a few error messages from "git reflog expire
--expire=nonsense.timestamp", which used to complain by saying

    '--expire=nonsense.timestamp' is not a valid timestamp

but with this change, we say

    'nonsense.timestamp' is not a valid timestamp

which is more technically correct (the string with --expire= as
a prefix obviously cannot be a valid timestamp, but the error is
about the part of the input without that prefix).

Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-01-31 13:03:45 -08:00
Junio C Hamano
d45d771978 Merge branch 'bc/smart-http-atomic-push'
The atomic push over smart HTTP transport did not work, which has
been corrected.

* bc/smart-http-atomic-push:
  remote-curl: pass on atomic capability to remote side
2019-10-23 14:43:11 +09:00
brian m. carlson
6f1194246a remote-curl: pass on atomic capability to remote side
When pushing more than one reference with the --atomic option, the
server is supposed to perform a single atomic transaction to update the
references, leaving them either all to succeed or all to fail.  This
works fine when pushing locally or over SSH, but when pushing over HTTP,
we fail to pass the atomic capability to the remote side.  In fact, we
have not reported this capability to any remote helpers during the life
of the feature.

Now normally, things happen to work nevertheless, since we actually
check for most types of failures, such as non-fast-forward updates, on
the client side, and just abort the entire attempt.  However, if the
server side reports a problem, such as the inability to lock a ref, the
transaction isn't atomic, because we haven't passed the appropriate
capability over and the remote side has no way of knowing that we wanted
atomic behavior.

Fix this by passing the option from the transport code through to remote
helpers, and from the HTTP remote helper down to send-pack.  With this
change, we can detect if the server side rejects the push and report
back appropriately.  Note the difference in the messages: the remote
side reports "atomic transaction failed", while our own checking rejects
pushes with the message "atomic push failed".

Document the atomic option in the remote helper documentation, so other
implementers can implement it if they like.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-10-17 16:08:22 +09:00
René Scharfe
062a309d36 remote-curl: use argv_array in parse_push()
Use argv_array to build an array of strings instead of open-coding it.
This simplifies the code a bit.

We also need to make the specs parameter of push(), push_dav() and
push_git() const to match the argv member of the argv_array.  That's
fine, as all three only actually read from the specs array anyway.
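
The shape of the change, roughly (a sketch, not the literal diff; the
argument order of push() is illustrative):

  struct argv_array specs = ARGV_ARRAY_INIT;

  /* one entry per "push <refspec>" command read from git */
  argv_array_push(&specs, refspec);

  /* push(), push_dav() and push_git() now take const char **specs */
  push(specs.argc, specs.argv);
  argv_array_clear(&specs);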

Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-10-15 10:55:11 +09:00
Jiang Xin
8a1569d655 i18n: fix typos found during l10n for git 2.22.0
Fix two typos introduced by the following commits:

+ 31fba9d3b4 (diff-parseopt: convert --[src|dst]-prefix, 2019-03-24)
+ ed8b4132c8 (remote-curl: mark all error messages for translation,
  2019-03-05)

Signed-off-by: Jiang Xin <worldhello.net@gmail.com>
Acked-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-06-03 11:10:53 -07:00
Junio C Hamano
d4e568b2a3 Merge branch 'bc/hash-transition-16'
Conversion from unsigned char[20] to struct object_id continues.

* bc/hash-transition-16: (35 commits)
  gitweb: make hash size independent
  Git.pm: make hash size independent
  read-cache: read data in a hash-independent way
  dir: make untracked cache extension hash size independent
  builtin/difftool: use parse_oid_hex
  refspec: make hash size independent
  archive: convert struct archiver_args to object_id
  builtin/get-tar-commit-id: make hash size independent
  get-tar-commit-id: parse comment record
  hash: add a function to lookup hash algorithm by length
  remote-curl: make hash size independent
  http: replace sha1_to_hex
  http: compute hash of downloaded objects using the_hash_algo
  http: replace hard-coded constant with the_hash_algo
  http-walker: replace sha1_to_hex
  http-push: remove remaining uses of sha1_to_hex
  http-backend: allow 64-character hex names
  http-push: convert to use the_hash_algo
  builtin/pull: make hash-size independent
  builtin/am: make hash size independent
  ...
2019-04-25 16:41:17 +09:00
Junio C Hamano
aa1edf14f9 Merge branch 'js/remote-curl-i18n'
Error messages given from the http transport have been updated so
that they can be localized.

* js/remote-curl-i18n:
  remote-curl: mark all error messages for translation
2019-04-16 19:28:05 +09:00
Junio C Hamano
764bd200e1 Merge branch 'js/anonymize-remote-curl-diag'
remote-http transport did not anonymize URLs reported in its error
messages in places.

* js/anonymize-remote-curl-diag:
  curl: anonymize URLs in error messages and warnings
2019-04-16 19:28:04 +09:00
brian m. carlson
9c9492e8aa remote-curl: make hash size independent
Change one hard-coded use of the constant 40 to a reference to
the_hash_algo.  In addition, switch a use of get_oid_hex to
parse_oid_hex to avoid the need to use a constant.
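
The parse_oid_hex() switch looks roughly like this (a sketch; the error
message is illustrative):

  struct object_id oid;
  const char *p;

  /* before: get_oid_hex(line, &oid); line += 40; */
  if (parse_oid_hex(line, &oid, &p))
          die(_("malformed ref line: %s"), line);
  /* p now points past the hex oid, whatever the_hash_algo's length is */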

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-04-01 11:57:39 +09:00
Junio C Hamano
c42c664a2f Merge branch 'jt/http-auth-proto-v2-fix'
Unify RPC code for smart http in protocol v0/v1 and v2, which fixes
a bug in the latter (lack of authentication retry) and generally
improves the code base.

* jt/http-auth-proto-v2-fix:
  remote-curl: use post_rpc() for protocol v2 also
  remote-curl: refactor reading into rpc_state's buf
  remote-curl: reduce scope of rpc_state.result
  remote-curl: reduce scope of rpc_state.stdin_preamble
  remote-curl: reduce scope of rpc_state.argv
2019-03-07 10:00:00 +09:00
Johannes Schindelin
ed8b4132c8 remote-curl: mark all error messages for translation
Suggested by Jeff King.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-03-06 08:48:15 +09:00
Johannes Schindelin
c1284b21f2 curl: anonymize URLs in error messages and warnings
Just like 47abd85ba0 (fetch: Strip usernames from url's before storing
them, 2009-04-17) and later 882d49ca5c (push: anonymize URL in status
output, 2016-07-13), this change anonymizes URLs (read: strips them of
user names and especially passwords) in user-facing error messages and
warnings.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-03-05 22:11:58 +09:00
Jonathan Tan
a97d00799a remote-curl: use post_rpc() for protocol v2 also
When transmitting and receiving POSTs for protocol v0 and v1,
remote-curl uses post_rpc() (and associated functions), but when doing
the same for protocol v2, it uses a separate set of functions
(proxy_rpc() and others). Besides duplication of code, this has caused
at least one bug: the auth retry mechanism that was implemented in v0/v1
was not implemented in v2.

To fix this issue and avoid it in the future, make remote-curl also use
post_rpc() when handling protocol v2. Because line lengths are written
to the HTTP request in protocol v2 (unlike in protocol v0/v1), this
necessitates changes in post_rpc() and some of the functions it uses;
perform these changes too.

A test has been included to ensure that the code for both the unchunked
and chunked variants of the HTTP request is exercised.

Note: stateless_connect() has been updated to use the lower-level packet
reading functions instead of struct packet_reader. The low-level control
is necessary here because we cannot change the destination buffer of
struct packet_reader while it is being used; struct packet_reader has a
peeking mechanism which relies on the destination buffer being present
in between a peek and a read.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-03-03 19:00:42 +09:00
Jeff Hostetler
ee4512ed48 trace2: create new combined trace facility
Create a new unified tracing facility for git.  The eventual intent is to
replace the current trace_printf* and trace_performance* routines with a
unified set of git_trace2* routines.

In addition to the usual printf-style API, trace2 provides higher-level
event verbs with fixed fields, allowing structured data to be written.
This makes post-processing and analysis easier for external tools.

Trace2 defines 3 output targets.  These are set using the environment
variables "GIT_TR2", "GIT_TR2_PERF", and "GIT_TR2_EVENT".  These may be
set to "1" or to an absolute pathname (just like the current GIT_TRACE).

* GIT_TR2 is intended to be a replacement for GIT_TRACE and logs command
  summary data.

* GIT_TR2_PERF is intended as a replacement for GIT_TRACE_PERFORMANCE.
  It extends the output with columns for the command process, thread,
  repo, absolute and relative elapsed times.  It reports events for
  child process start/stop, thread start/stop, and per-thread function
  nesting.

* GIT_TR2_EVENT is a new structured format. It writes event data as a
  series of JSON records.

Calls to trace2 functions log to any of the 3 output targets enabled
without the need to call different trace_printf* or trace_performance*
routines.
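
For example, code can annotate a region and attach structured data with
calls like these (an illustrative sketch; the category and label strings
are made up), and the events land in whichever targets are enabled:

  trace2_region_enter("remote-curl", "post_rpc", the_repository);
  do_rpc_round_trip();                     /* hypothetical work */
  trace2_data_intmax("remote-curl", the_repository, "bytes_sent", nbytes);
  trace2_region_leave("remote-curl", "post_rpc", the_repository);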

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-02-22 15:27:59 -08:00
Jonathan Tan
78ad91728d remote-curl: refactor reading into rpc_state's buf
Currently, whenever remote-curl reads pkt-lines from its response file
descriptor, only the payload is written to its buf, not the 4 characters
denoting the length. A future patch will require the ability to also
write those 4 characters, so in preparation for that, refactor this read
into its own function.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-02-22 14:27:31 -08:00
Jonathan Tan
b35903092e remote-curl: reduce scope of rpc_state.result
The result field in struct rpc_state is only used in rpc_service(), and
not in any functions it directly or indirectly calls. Refactor it to
become an argument of rpc_service() instead.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-02-14 12:47:55 -08:00
Jonathan Tan
5d91669309 remote-curl: reduce scope of rpc_state.stdin_preamble
The stdin_preamble field in struct rpc_state is only used in
rpc_service(), and not in any functions it directly or indirectly calls.
Refactor it to become an argument of rpc_service() instead.

All callers of rpc_service() set the preamble, so we no longer need
the "if (preamble)" check.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-02-14 12:47:55 -08:00
Jonathan Tan
7d50d34fc7 remote-curl: reduce scope of rpc_state.argv
The argv field in struct rpc_state is only used in rpc_service(), and
not in any functions it directly or indirectly calls. Refactor it to
become an argument of rpc_service() instead.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-02-14 12:47:55 -08:00
Jeff King
cbdb8d1439 remote-curl: tighten "version 2" check for smart-http
In a v2 smart-http conversation, the server should reply to our initial
request with a pkt-line saying "version 2". We check that with
starts_with(), but really that should be the only thing in that packet.
A response of "version 20" should not match.

Let's tighten this check to use strcmp(). Note that we don't need to
worry about a trailing newline here, because the pkt-line code will have
chomped it for us already.
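
That is (sketch):

  starts_with(line, "version 2")   /* before: "version 20" also matched */
  !strcmp(line, "version 2")       /* after: exact match required */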

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-02-06 12:20:22 -08:00
Jeff King
8ee3e120cd remote-curl: refactor smart-http discovery
After making initial contact with an http server, we have to decide if
the server supports smart-http, and if so, which version. Our rules are
a bit inconsistent:

  1. For v0, we require that the content-type indicates a smart-http
     response. We also require the response to look vaguely like a
     pkt-line starting with "#". If one of those does not match, we fall
     back to dumb-http.

     But according to our http protocol spec[1]:

       Dumb servers MUST NOT return a return type starting with
       `application/x-git-`.

     If we see the expected content-type, we should consider it
     smart-http. At that point we can parse the pkt-line for real, and
     complain if it is not syntactically valid.

  2. For v2, we do not actually check the content-type. Our v2 protocol
     spec says[2]:

       When using the http:// or https:// transport a client makes a
       "smart" info/refs request as described in `http-protocol.txt`[...]

     and the http spec is clear that for a smart-http response[3]:

       The Content-Type MUST be `application/x-$servicename-advertisement`.

     So it is required according to the spec.

These inconsistencies were easy to miss because of the way the original
code was written as an inline conditional. Let's pull it out into its
own function for readability, and improve a few things:

 - we now predicate the smart/dumb decision entirely on the presence of
   the correct content-type

 - we do a real pkt-line parse before deciding how to proceed (and die
   if it isn't valid)

 - use skip_prefix() for comparing service strings, instead of
   constructing expected output in a strbuf; this avoids dealing with
   memory cleanup

Note that this _is_ tightening what the client will allow. It's all
according to the spec, but it's possible that other implementations
might violate these. However, violating these particular rules seems
like an odd choice for a server to make.

[1] Documentation/technical/http-protocol.txt, l. 166-167
[2] Documentation/technical/protocol-v2.txt, l. 63-64
[3] Documentation/technical/http-protocol.txt, l. 247
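
The content-type check at the heart of the new helper is essentially this
(an illustrative sketch, not the actual discover_refs() code):

  static int is_smart_advertisement(const char *content_type, const char *service)
  {
          const char *p;

          return skip_prefix(content_type, "application/x-", &p) &&
                 skip_prefix(p, service, &p) &&
                 !strcmp(p, "-advertisement");
  }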

Helped-by: Josh Steadmon <steadmon@google.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-02-06 12:20:19 -08:00
Junio C Hamano
5f8b86db94 Merge branch 'jt/fetch-v2-sideband'
"git fetch" and "git upload-pack" learned to send all exchange over
the sideband channel while talking the v2 protocol.

* jt/fetch-v2-sideband:
  tests: define GIT_TEST_SIDEBAND_ALL
  {fetch,upload}-pack: sideband v2 fetch response
  sideband: reverse its dependency on pkt-line
  pkt-line: introduce struct packet_writer
  pack-protocol.txt: accept error packets in any context
  Use packet_reader instead of packet_read_line
2019-02-05 14:26:11 -08:00
Junio C Hamano
99c0bdd09d Merge branch 'ms/http-no-more-failonerror'
Debugging help for http transport.

* ms/http-no-more-failonerror:
  test: test GIT_CURL_VERBOSE=1 shows an error
  remote-curl: unset CURLOPT_FAILONERROR
  remote-curl: define struct for CURLOPT_WRITEFUNCTION
  http: enable keep_error for HTTP requests
  http: support file handles for HTTP_KEEP_ERROR
2019-01-29 12:47:55 -08:00
Masaya Suzuki
b79bdd8c12 remote-curl: unset CURLOPT_FAILONERROR
By not setting CURLOPT_FAILONERROR, curl parses the HTTP response
headers even if the response is an error. This makes GIT_CURL_VERBOSE
show the HTTP headers, which is useful for debugging.

Signed-off-by: Masaya Suzuki <masayasuzuki@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-01-10 15:00:56 -08:00
Masaya Suzuki
cf2fb92b00 remote-curl: define struct for CURLOPT_WRITEFUNCTION
In order to pass more values for rpc_in, define a struct and pass it as
an additional value.
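
A sketch of the shape of the change (the field set and names follow the
commit description but are illustrative):

  struct rpc_in_data {
          struct rpc_state *rpc;
          struct active_request_slot *slot;
  };

  static size_t rpc_in(char *ptr, size_t eltsize, size_t nmemb, void *data_)
  {
          struct rpc_in_data *data = data_;
          size_t size = eltsize * nmemb;

          write_or_die(data->rpc->in, ptr, size);
          return size;
  }

  /* curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, rpc_in);
   * curl_easy_setopt(slot->curl, CURLOPT_WRITEDATA, &data);       */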

Signed-off-by: Masaya Suzuki <masayasuzuki@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-01-10 15:00:56 -08:00
Masaya Suzuki
e6cf87b12d http: enable keep_error for HTTP requests
curl stops parsing a response when it sees a bad HTTP status code and it
has CURLOPT_FAILONERROR set. This prevents GIT_CURL_VERBOSE from showing
HTTP headers on error.

keep_error is an option to receive the HTTP response body for those
error responses. By enabling this option, curl will process the HTTP
response headers, and they're shown if GIT_CURL_VERBOSE is set.

Signed-off-by: Masaya Suzuki <masayasuzuki@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-01-10 15:00:56 -08:00
Masaya Suzuki
2d103c31c2 pack-protocol.txt: accept error packets in any context
In the Git pack protocol definition, an error packet may appear only in
a certain context. However, servers can face a runtime error (e.g. I/O
error) at an arbitrary time. This patch changes the protocol to allow
an error packet to be sent instead of any packet.

Without this protocol spec change, when a server cannot process a
request, there's no way to tell that to a client. Since the server
cannot produce a valid response, it would be forced to cut a connection
without telling why. With this protocol spec change, the server can be
more gentle in this situation. An old client may see these error packets
as unexpected packets, but this is no worse than an unexpected EOF.

Following this protocol spec change, the error packet handling code is
moved to pkt-line.c. Note that, as an implementation detail, Git also
uses pkt-line to communicate with subprocesses. Since that is not part
of the Git protocol, a packet that is not supposed to be an error packet
could be mistakenly parsed as one; with this in mind, the error packet
handling is enabled only for the Git pack protocol parsing code.

Signed-off-by: Masaya Suzuki <masayasuzuki@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-01-02 13:05:30 -08:00