The xdl_prepare_env() function may initialise an xdlclassifier_t
data structure via xdl_init_classifier(), which allocates memory
to several fields, for example 'rchash', 'rcrecs' and 'ncha'.
If this function later exits due to the failure of xdl_optimize_ctxs(),
then this xdlclassifier_t structure, and the memory allocated to it,
is not cleaned up.
In order to fix the memory leak, insert a call to xdl_free_classifier()
before returning.
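As a rough sketch (the surrounding error path in xdl_prepare_env() is abbreviated, and the exact arguments to xdl_optimize_ctxs() are illustrative), the fix amounts to:
	if (xdl_optimize_ctxs(&cf, &xe->xdf1, &xe->xdf2) < 0) {
		xdl_free_ctx(&xe->xdf2);
		xdl_free_ctx(&xe->xdf1);
		xdl_free_classifier(&cf);	/* new: release rchash, rcrecs, ncha */
		return -1;
	}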
Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit 307ab20b3 ("xdiff: PATIENCE/HISTOGRAM are not independent option
bits", 19-02-2012) introduced the XDF_DIFF_ALG() macro to access the
flag bits used to represent the diff algorithm requested. In addition,
code which had used explicit manipulation of the flag bits was changed
to use the macros.
However, one example of direct manipulation remains. Update this code to
use the XDF_DIFF_ALG() macro.
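For illustration only (the actual call site may differ), the change has this shape:
	/* before: tests one bit, even though the algorithm is a multi-bit field */
	if (xpp->flags & XDF_PATIENCE_DIFF)
		return xdl_do_patience_diff(mf1, mf2, xpp, xe);
	/* after: extract the whole algorithm field and compare */
	if (XDF_DIFF_ALG(xpp->flags) == XDF_PATIENCE_DIFF)
		return xdl_do_patience_diff(mf1, mf2, xpp, xe);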
Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The previous commit enforces MAX_XDIFF_SIZE at the
interfaces to xdiff: xdi_diff (which calls xdl_diff) and
ll_xdl_merge (which calls xdl_merge).
But we have another direct call to xdl_merge in
merge-file.c. If it were written today, this probably would
just use the ll_merge machinery. But it predates that code,
and uses slightly different options to xdl_merge (e.g.,
ZEALOUS_ALNUM).
We could try to abstract out an xdi_merge to match the
existing xdi_diff, but even that is difficult. Rather than
simply reporting an error, we would want to treat large files as
binary, and that distinction would have to happen outside of xdi_merge.
The simplest fix is to just replicate the MAX_XDIFF_SIZE
check in merge-file.c.
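A sketch of the replicated check, assuming the three inputs have already been read into an array of mmfile_t and folding the size test into the existing binary test (variable names are illustrative):
	for (i = 0; i < 3; i++) {
		if (mmfs[i].size > MAX_XDIFF_SIZE ||
		    buffer_is_binary(mmfs[i].ptr, mmfs[i].size))
			return error("Cannot merge binary files: %s",
				     argv[i]);
	}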
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The xdiff code is not prepared to handle extremely large
files. It uses "int" in many places, which can overflow if
we have a very large number of lines or even bytes in our
input files. This can cause us to produce incorrect diffs,
with no indication that the output is wrong. Or worse, we
may even underallocate a buffer whose size is the result of
an overflowing addition.
We're much better off to tell the user that we cannot diff
or merge such a large file. This patch covers both cases,
but in slightly different ways:
1. For merging, we notice the large file and cleanly fall
back to a binary merge (which is effectively "we cannot
merge this").
2. For diffing, we make the binary/text distinction much
earlier, and in many different places. For this case,
we'll use xdi_diff as our choke point, and reject
any diff there before it hits the xdiff code (a sketch of
that check follows this list).
This means in most cases we'll die() immediately after.
That's not ideal, but in practice we shouldn't
generally hit this code path unless the user is trying
to do something tricky. We already consider files
larger than core.bigfilethreshold to be binary, so this
code would only kick in when that is circumvented
(either by bumping that value, or by using a
.gitattributes entry to mark a file as diffable).
In other words, we can avoid being "nice" here, because
there is already nice code that tries to do the right
thing. We are adding the suspenders to the nice code's
belt, so that we notice when it has been worked around (both to
protect the user from malicious inputs, and because it
is better to die() than generate bogus output).
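A simplified sketch of that choke point (the real xdi_diff() also trims a common prefix and suffix before calling xdl_diff(); that is omitted here):
	int xdi_diff(mmfile_t *mf1, mmfile_t *mf2, xpparam_t const *xpp,
		     xdemitconf_t const *xecfg, xdemitcb_t *xecb)
	{
		if (mf1->size > MAX_XDIFF_SIZE || mf2->size > MAX_XDIFF_SIZE)
			return -1;
		return xdl_diff(mf1, mf2, xpp, xecfg, xecb);
	}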
The maximum size was chosen after experimenting with feeding
large files to the xdiff code. It's just under a gigabyte,
which leaves room for two obvious cases:
- a diff3 merge conflict result on files of maximum size X
could be 3*X plus the size of the markers, which would
still be only about 3G, which fits in an unsigned 32-bit integer.
- some of the diff code allocates arrays of one int per
record. Even if each file consists only of blank lines,
a file smaller than 1G will have fewer than 1G
records, and therefore the int array will fit in 4G.
Since the limit is arbitrary anyway, I chose to go under a
gigabyte, to leave a safety margin (e.g., we would not want
to overflow by allocating "(records + 1) * sizeof(int)" or
similar).
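For reference, one way to express "just under a gigabyte" as the limit (a sketch; the exact expression used for the constant may differ):
	/* 1023 MiB: big enough to be useful, small enough to leave overflow headroom */
	#define MAX_XDIFF_SIZE (1024UL * 1024 * 1023)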
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we call into xdiff to perform a diff, we generally lose
the return code completely. Typically we ignore the return value
of our xdi_diff wrapper, but sometimes we even propagate
that value up and then ignore it later. This can
lead to us silently producing incorrect diffs (e.g., "git
log" might produce no output at all, not even a diff header,
for a content-level diff).
In practice this does not happen very often, because the
typical reason for xdiff to report failure is that a
malloc() call inside it failed (it uses straight malloc, and not our
xmalloc wrapper). But it could also happen when xdiff
triggers one of our callbacks, which returns an error (e.g.,
outf() in builtin/rerere.c tries to report a write failure
in this way). And the next patch also plans to add more
failure modes.
Let's notice an error return from xdiff and react
appropriately. In most of the diff.c code, we can simply
die(), which matches the surrounding code (e.g., that is
what we do if we fail to load a file for diffing in the
first place). This is not that elegant, but we are probably
better off dying to let the user know there was a problem,
rather than simply generating bogus output.
We could also just die() directly in xdi_diff, but the
callers typically have a bit more context, and can provide a
better message (and if we do later decide to pass errors up,
we're one step closer to doing so).
There is one interesting case, which is in diff_grep(). Here
if we cannot generate the diff, there is nothing to match,
and we silently return "no hits". This is actually what the
existing code does already, but we make it a little more
explicit.
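A hedged sketch of the two reactions (callback names and arguments are abbreviated and illustrative):
	/* typical diff.c caller: a failure is fatal */
	if (xdi_diff_outf(&mf1, &mf2, fn_out_consume, &ecbdata, &xpp, &xecfg))
		die("unable to generate diff for %s", one->path);
	/* diff_grep(): a failure simply means there is nothing to match */
	if (xdi_diff_outf(&mf1, &mf2, diffgrep_consume, &data, &xpp, &xecfg))
		return 0;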
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
By default, libcurl will follow circular http redirects
forever. Let's put a cap on this so that somebody who can
trigger an automated fetch of an arbitrary repository (e.g.,
for CI) cannot convince git to loop infinitely.
The value chosen is 20, which is the same default that
Firefox uses.
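In libcurl terms this is a single option on the handle; a sketch (the handle variable name is illustrative):
	curl_easy_setopt(result, CURLOPT_MAXREDIRS, 20L);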
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Previously, libcurl would follow redirection to any protocol
it was compiled to support. This is desirable to allow
redirection from HTTP to HTTPS. However, it would even
successfully allow redirection from HTTP to SFTP, a protocol
that git does not otherwise support at all. Furthermore,
git's new protocol-whitelisting could be bypassed by
following a redirect within the remote helper, as it was
only enforced at transport selection time.
This patch limits redirects within libcurl to HTTP, HTTPS,
FTP and FTPS. If there is a protocol-whitelist present, this
list is limited to those also allowed by the whitelist. As
redirection happens from within libcurl, it is impossible
for an HTTP redirect to reach a protocol implemented within
another remote helper.
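A sketch of the libcurl side of this (ignoring, for brevity, the further filtering against the protocol whitelist described above):
	curl_easy_setopt(result, CURLOPT_REDIR_PROTOCOLS,
			 CURLPROTO_HTTP | CURLPROTO_HTTPS |
			 CURLPROTO_FTP | CURLPROTO_FTPS);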
When the curl version git was compiled with is too old to
support restrictions on protocol redirection, we warn the
user if GIT_ALLOW_PROTOCOL restrictions were requested. This
is a little inaccurate, as even without that variable in the
environment, we would still restrict SFTP, etc, and we do
not warn in that case. But anything else means we would
literally warn every time git accesses an http remote.
This commit includes a test, but it is not as robust as we
would hope. It redirects an http request to ftp, and checks
that curl complained about the protocol, which means that we
are relying on curl's specific error message to know what
happened. Ideally we would redirect to a working ftp server
and confirm that we can clone without protocol restrictions,
and not with them. But we do not have a portable way of
providing an ftp server, nor any other protocol that curl
supports (https is the closest, but we would have to deal
with certificates).
[jk: added test and version warning]
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The current callers only want to die when their transport is
prohibited. But future callers want to query the mechanism
without dying.
Let's break out a few query functions, and also save the
results in a static list so we don't have to re-parse for
each query.
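A hedged sketch of the resulting interface, where protocol_whitelist() is the helper that parses the environment once into a static string_list (names follow the description above; exact signatures are assumptions):
	int is_transport_allowed(const char *type)
	{
		const struct string_list *allowed = protocol_whitelist();
		return !allowed || string_list_has_string(allowed, type);
	}
	void transport_check_allowed(const char *type)
	{
		if (!is_transport_allowed(type))
			die("transport '%s' not allowed", type);
	}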
Based-on-a-patch-by: Blake Burkhart <bburky@bburky.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Some protocols (like git-remote-ext) can execute arbitrary
code found in the URL. The URLs that submodules use may come
from arbitrary sources (e.g., .gitmodules files in a remote
repository). Let's restrict submodules to fetching from a
known-good subset of protocols.
Note that we apply this restriction to all submodule
commands, whether the URL comes from .gitmodules or not.
This is more restrictive than we need to be; for example, in
the tests we run:
git submodule add ext::...
which should be trusted, as the URL comes directly from the
command line provided by the user. But doing it this way is
simpler, and makes it much less likely that we would miss a
case. And since such protocols should be an exception
(especially because nobody who clones from them will be able
to update the submodules!), it's not likely to inconvenience
anyone in practice.
Reported-by: Blake Burkhart <bburky@bburky.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If we are cloning an untrusted remote repository into a
sandbox, we may also want to fetch remote submodules in
order to get the complete view as intended by the other
side. However, that opens us up to attacks where a malicious
user gets us to clone something they would not otherwise
have access to (this is not necessarily a problem by itself,
but we may then act on the cloned contents in a way that
exposes them to the attacker).
Ideally such a setup would sandbox git entirely away from
high-value items, but this is not always practical or easy
to set up (e.g., OS network controls may block multiple
protocols, and we would want to enable some but not others).
We can help this case by providing a way to restrict
particular protocols. We use a whitelist in the environment.
This is more annoying to set up than a blacklist, but
defaults to safety if the set of protocols git supports
grows. If no whitelist is specified, we continue to default
to allowing all protocols (this is an "unsafe" default, but
since the minority of users will want this sandboxing
effect, it is the only sensible one).
A note on the tests: ideally these would all be in a single
test file, but the git-daemon and httpd test infrastructure
is an all-or-nothing proposition rather than a test-by-test
prerequisite. By putting them all together, we would be
unable to test the file-local code on machines without
apache.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we show "branch@{0}", we format into a fixed-size
buffer using sprintf. This can overflow if you have long
branch names. We can fix it by using a temporary strbuf.
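A minimal sketch of that fix (the consumer of the formatted name is illustrative):
	struct strbuf buf = STRBUF_INIT;
	strbuf_addf(&buf, "%s@{0}", branch);	/* grows as needed; cannot overflow */
	printf("%s\n", buf.buf);		/* illustrative consumer */
	strbuf_release(&buf);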
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This function assumes that the relative_base path passed
into it is no larger than PATH_MAX, and writes into a
fixed-size buffer. However, this path may not have actually
come from the filesystem; for example, add_submodule_odb
generates a path using a strbuf and passes it in. This is
hard to trigger in practice, though, because the long
submodule directory would have to exist on disk before we
would try to open its info/alternates file.
We can easily avoid the bug, though, by simply creating the
filename on the heap.
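A minimal sketch of building the name on the heap with git's xstrfmt() helper (the open/parse logic around it is omitted):
	char *path = xstrfmt("%s/info/alternates", relative_base);
	int fd = open(path, O_RDONLY);	/* no PATH_MAX-sized stack buffer involved */
	free(path);	/* the copy can be freed once the file is open */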
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we are loading a notes tree into our internal hash
table, we also collect any files that are clearly non-notes.
We format the name of the file into a PATH_MAX buffer, but
unlike true notes (which cannot be larger than a fanned-out
sha1 hash), these tree entries can be arbitrarily long,
overflowing our buffer.
We can fix this by switching to a strbuf. It doesn't even
cost us an extra allocation, as we can simply hand ownership
of the buffer over to the non-note struct.
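A sketch of that hand-off using the strbuf API (the non-note helper and the tree-entry field names are illustrative):
	struct strbuf name = STRBUF_INIT;
	strbuf_addstr(&name, prefix);		/* fan-out directories seen so far */
	strbuf_addstr(&name, entry.path);	/* may be arbitrarily long */
	/* strbuf_detach() hands ownership of the buffer to the non-note entry */
	add_non_note(t, strbuf_detach(&name, NULL), entry.mode, entry.sha1);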
This is of moderate security interest, as you might fetch
notes trees from an untrusted remote. However, we do not do
so by default, so you would have to manually fetch into the
notes namespace.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When unpack-trees wants to know whether a path will
overwrite anything in the working tree, we use lstat() to
see if there is anything there. But if we are going to write
"foo/bar", we can't just lstat("foo/bar"); we need to look
for leading prefixes (e.g., "foo"). So we use the lstat cache
to find the length of the leading prefix, and copy the
filename up to that length into a temporary buffer (since
the original name is const, we cannot just stick a NUL in
it).
The copy we make goes into a PATH_MAX-sized buffer, which
will overflow if the prefix is longer than PATH_MAX. How
this happens is a little tricky, since in theory PATH_MAX is
the biggest path we will have read from the filesystem. But
this can happen if:
- the compiled-in PATH_MAX does not accurately reflect
what the filesystem is capable of
- the leading prefix is not _quite_ what is on disk; it
contains the next element from the name we are checking.
So if we want to write "aaa/bbb/ccc/ddd" and "aaa/bbb"
exists, the prefix of interest is "aaa/bbb/ccc". If
"aaa/bbb" approaches PATH_MAX, then "ccc" can overflow
it.
So this can be triggered, but it's hard to do. In
particular, you cannot just "git clone" a bogus repo. The
verify_absent checks happen before unpack-trees writes
anything to the filesystem, so there are never any leading
prefixes during the initial checkout, and the bug doesn't
trigger. And by definition, these file names are longer than
PATH_MAX, so writing them will fail, and clone will
complain (though it may write a partial path, which will
cause a subsequent "git checkout" to hit the bug).
We can fix it by creating the temporary path on the heap.
The extra malloc overhead is not important, as we are
already making at least one stat() call (and probably more
for the prefix discovery).
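A sketch of the heap-based replacement (the follow-on overwrite check and its arguments are illustrative):
	char *path = xmemdupz(name, len);	/* NUL-terminated copy of just the prefix */
	int ret = 0;
	if (!lstat(path, &st))
		ret = check_ok_to_remove(path, len, DT_UNKNOWN, ce, &st,
					 error_type, o);
	free(path);
	return ret;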
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Abandoning an already applied change in "git rebase -i" with
"--continue" left CHERRY_PICK_HEAD and confused later steps.
* js/rebase-i-clean-up-upon-continue-to-skip:
rebase -i: do not leave a CHERRY_PICK_HEAD file behind
t3404: demonstrate CHERRY_PICK_HEAD bug
Various fixes around "git am" that applies a patch to a history
that is not there yet.
* pt/am-abort-fix:
am --abort: keep unrelated commits on unborn branch
am --abort: support aborting to unborn branch
am --abort: revert changes introduced by failed 3way merge
am --skip: support skipping while on unborn branch
am -3: support 3way merge on unborn branch
am --skip: revert changes introduced by failed 3way merge
"git for-each-ref" reported "missing object" for 0{40} when it
encounters a broken ref. The lack of object whose name is 0{40} is
not the problem; the ref being broken is.
* mh/reporting-broken-refs-from-for-each-ref:
read_loose_refs(): treat NULL_SHA1 loose references as broken
read_loose_refs(): simplify function logic
for-each-ref: report broken references correctly
t6301: new tests of for-each-ref error handling
"git commit --cleanup=scissors" was not careful enough to protect
against getting fooled by a line that looked like scissors.
* sg/commit-cleanup-scissors:
commit: cope with scissors lines in commit message
A fix to a minor regression to "git fsck" in v2.2 era that started
complaining about a body-less tag object when it lacks a separator
empty line after its header to separate it with a non-existent body.
* jc/fsck-retire-require-eoh:
fsck: it is OK for a tag and a commit to lack the body
We used to ask libCURL to use the most secure authentication method
available when talking to an HTTP proxy only when we were told to
talk to one via configuration variables. We now ask libCURL to
always use the most secure authentication method, because the user
can tell libCURL to use an HTTP proxy via an environment variable
without using configuration variables.
* et/http-proxyauth:
http: always use any proxy auth method available
When you say "!<ENTER>" while running say "git log", you'd confuse
yourself in the resulting shell, that may look as if you took
control back to the original shell you spawned "git log" from but
that isn't what is happening. To that new shell, we leaked
GIT_PAGER_IN_USE environment variable that was meant as a local
communication between the original "Git" and subprocesses that was
spawned by it after we launched the pager, which caused many
"interesting" things to happen, e.g. "git diff | cat" still paints
its output in color by default.
Stop leaking that environment variable to the pager's half of the
fork; we only need it on "Git" side when we spawn the pager.
* jc/unexport-git-pager-in-use-in-pager:
pager: do not leak "GIT_PAGER_IN_USE" to the pager
"git config" failed to update the configuration file when the
underlying filesystem is incapable of renaming a file that is still
open.
* kb/config-unmap-before-renaming:
config.c: fix writing config files on Windows network shares
A minor bugfix when pack bitmap is used with "rev-list --count".
* jk/rev-list-no-bitmap-while-pruning:
rev-list: disable --use-bitmap-index when pruning commits
An ancient test framework enhancement to allow color was not
entirely correct; this makes it work even when tput needs to read
from the ~/.terminfo under the user's real HOME directory.
* rh/test-color-avoid-terminfo-in-original-home:
test-lib.sh: fix color support when tput needs ~/.terminfo
Revert "test-lib.sh: do tests for color support after changing HOME"
"git rebase" did not exit with failure when format-patch it invoked
failed for whatever reason.
* cb/rebase-am-exit-code:
rebase: return non-zero error code if format-patch fails
Disable "have we lost a race with competing repack?" check while
receiving a huge object transfer that runs index-pack.
* jk/index-pack-reduce-recheck:
index-pack: avoid excessive re-reading of pack directory