Our packed_commit_list is an array of pointers to commit structs. We use
"int" for the allocation, which is 32-bit even on 64-bit platforms. This
isn't likely to overflow in practice (we're writing commit graphs, so
you'd need to actually have billions of unique commits in the
repository). But it's good practice to use size_t for allocations.
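For illustration, the shape of the change is roughly this (a sketch;
the exact field and variable names in commit-graph.c differ):
  struct packed_commit_list {
          struct commit **list;
          size_t nr;    /* was "int nr" */
          size_t alloc; /* was "int alloc" */
  };
  /* ALLOC_GROW() handles growth and does its size math with overflow checks */
  ALLOC_GROW(commits.list, commits.nr + 1, commits.alloc);
  commits.list[commits.nr++] = commit;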
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Our custom packed_oid_list data structure is really just an oid_array in
disguise. Let's switch to using the generic structure, which shortens
and simplifies the code slightly.
There's one slightly awkward part: in the old code we copied a hash
straight from the mmap'd on-disk data into the final object_id. And now
we'll copy to a temporary oid, which we'll then pass to
oid_array_append(). But this is an operation we have to do all over the
commit-graph code already, since it mostly uses object_id structs
internally. I also measured "git commit-graph --append", which triggers
this code path, and it showed no difference.
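The awkward copy looks roughly like this (a sketch; hashcpy() and
oid_array_append() are the existing helpers, but the surrounding
variable names are illustrative):
  struct object_id oid;
  /* copy the raw hash out of the mmap'd chunk into a temporary oid... */
  hashcpy(oid.hash, chunk + i * the_hash_algo->rawsz);
  /* ...and let the generic structure handle storage and growth */
  oid_array_append(&ctx->oids, &oid);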
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When writing a commit graph, we collect a list of object ids in an
array, which we'll eventually copy into an array of "struct commit"
pointers. Before we do that, though, we count the number of distinct
commit entries. There's a subtle bug in this step, though.
We eliminate not only duplicate oids, but also, in split mode, any oids
which are not commits or which are already in a graph file. However, the
loop starts at index 1, always counting index 0 as distinct. And indeed
it can't be a duplicate, since we check for those by comparing against
the previous entry, and there isn't one for index 0. But it could be a
commit that's already in a graph file, and we'd overcount the number of
commits by 1 in that case.
That turns out not to be a problem, though. The only things we do with
the count are:
- check if our count will overflow our data structures. But the limit
there is 2^31 commits, so while this is a useful check, the
off-by-one is not likely to matter.
- pre-allocate the array of commit pointers. But over-allocating by
one isn't a problem; we'll just waste a few extra bytes.
The bug would be easy enough to fix, but we can observe that neither of
those steps is necessary.
After building the actual commit array, we'll likewise check its count
for overflow. So the extra check of the distinct commit count here is
redundant.
And likewise we use ALLOC_GROW() when building the commit array, so
there's no need to preallocate it (it's possible that doing so is
slightly more efficient, but if we care we can just optimistically
allocate one slot for each oid; I didn't bother here).
So count_distinct_commits() isn't doing anything useful. Let's just get
rid of that step.
Note that a side effect of the function was that we sorted the list of
oids, which we do rely on in copy_oids_to_commits(), since it must also
skip the duplicates. So we'll move the qsort there. I didn't copy the
"TODO" about adding more progress meters. It's actually quite hard to
make a repository large enough that this qsort would take an appreciable
amount of time, so this doesn't seem like a useful note.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We provide oid_array_for_each_unique() for iterating over the
de-duplicated items in an array. But it's awkward to use for two
reasons:
1. It uses a callback, which means marshaling arguments into a struct
and passing it to the callback through a void pointer.
2. The callback doesn't know the numeric index of the oid we're
looking at. This is useful for things like progress meters.
Iterating with a for-loop is much more natural for some cases, but the
caller has to do the de-duping itself. However, we can provide a small
helper to make this easier (see the docstring in the header for an
example use).
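The intended shape is something like this (the helper and function
names here are illustrative, not necessarily the exact ones added):
  size_t i;
  oid_array_sort(&array); /* the caller is responsible for sorting */
  for (i = 0; i < array.nr; i = oid_array_next_unique(&array, i)) {
          /* "i" is a real index, so progress meters are easy */
          display_progress(progress, i);
          handle_oid(&array.oid[i]); /* stand-in for the caller's work */
  }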
The caller does have to remember to sort the array first. We could add
an assertion into the helper that array->sorted is set, but I didn't
want to complicate what is otherwise a pretty fast code path.
I also considered adding a full iterator type with init/next/end
functions (similar to what we have for hashmaps). But it ended up making
the callers much harder to read. This version keeps us close to a basic
for-loop.
Yet another option would be a helper that sorts the array and compacts
out the duplicates. This would mean iterating over the array an
extra time, though that's probably not a big deal (we did just do an
O(n log n) sort). But we'd still have to write a for-loop to iterate, so
it doesn't really make anything easier for the caller.
No new test, since we'll convert the callback iterator (which is covered
by t0064, among other callers) to use the new code.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Our test suite currently only passes when `git init` uses the name
`master` for the initial branch. This would stop us from changing the
default branch name.
Let's adjust t6300 so that it does not rely on any specific default
branch name. This trick is done by (force-)renaming the initial branch
to the name `main` in the `setup` and the `:remotename and :remoteref`
test cases, and then replacing all mentions of `master` and `MASTER`
with `main` and `MAIN`, respectively.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We sort the oid-array as a side effect of calling the lookup or
unique-iteration functions. But callers may want to sort it themselves
(especially as we add new iteration options in future patches).
We'll also move the check of the "sorted" flag into the sort function,
so callers don't have to remember to check it.
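A minimal sketch of the result (assuming the exported helper ends up
called oid_array_sort(); the comparator is the file's existing
internal one):
  void oid_array_sort(struct oid_array *array)
  {
          if (array->sorted)
                  return; /* flag check now lives here, not in callers */
          QSORT(array->oid, array->nr, void_hashcmp);
          array->sorted = 1;
  }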
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We define git_hash_algo and object_id in hash.h, but most of the utility
functions are declared in the main cache.h. Let's move them to hash.h
along with their struct definitions. This cleans up cache.h a bit, but
also avoids circular dependencies when other headers need to know about
these functions (e.g., if oid-array.h were to have an inline that used
oideq(), it couldn't include cache.h because it is itself included by
cache.h).
No C files that include these headers should be affected, because
hash.h is already included via cache.h. We do have to include
repository.h at the top of hash.h, though, since some of our inline
functions depend on the_repository.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Our tests for handling duplicates in oid-array provide only a single
duplicate for each number, so our sorted array looks like:
  44 44 55 55 88 88 aa aa
A slightly more interesting test is to have multiple duplicates, which
makes sure that we not only skip the duplicate, but keep skipping until
we are out of the set of matching duplicates.
Unsurprisingly this works just fine, but it's worth beefing up this test
since we're about to change the duplicate-detection code.
Note that we do need to adjust the results of the lookup test, since it
returns the index of the found item (and now we have more items
before our range, and the range itself is slightly larger, since we'll
accept a match of any element).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The data type is an oid_array these days, and we are using "test-tool
oid-array", so let's name the test script appropriately.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When this file was moved from sha1-array.h, we forgot to update the
preprocessor header guard to match the new name.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In 0a21d0e089 (Makefile: mark git-maintenance as a builtin,
2020-12-01), we marked git-maintenance as a builtin in the Makefile, but
forgot to do the same in `CMakeLists.txt`.
Rather than always play catch-up and adjust `git_builtin_extra`
manually, use the `BUILT_INS` definitions in the Makefile as the
authoritative source and generate `git_builtin_extra` dynamically.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Initially, we started converting this test script in anticipation of
renaming the default branch name to `main`. To that end, we partially
converted it to accommodate that default branch name, marking the
now-failing test cases with a prereq that was designed to be fulfilled
once the rename was complete.
However, the effort to move to the branch name `main` will take quite a
bit longer, as it was decided that we need a deprecation phase first.
To avoid keeping t5526 in limbo for such a long time, we just made it
independent of the actual default branch name used by Git. Therefore,
that prereq is no longer necessary, and we can drop it.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
While at it, use different default branch names for the three different
repositories involved in the test script: this makes it easier to debug
failures, too (otherwise you have to wonder which `master` branch was
meant: the superproject's? the submodule's? the nested submodule's?).
Note: this touches code that was originally modified to prepare for
renaming the default branch name to `main`. This patch side-steps that
effort completely by overriding the initial branch name explicitly.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Something changed in `vcpkg` (which we use in our Visual C++ build to
provide the dependencies such as libcurl) and our `vs-build` job started
failing in CI. The reason is that we had a work-around in place to help
CMake find iconv, and this work-around is neither needed nor does it
work anymore.
For the full discussion with the vcpkg project, see this comment:
https://github.com/microsoft/vcpkg/issues/14780#issuecomment-735368280
Signed-off-by: Dennis Ameling <dennis@dennisameling.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
To keep track of which object filters are allowed or not, 'git
upload-pack' stores the name of each filter in a string_list, and sets
its ->util pointer to be either 0 or 1, indicating whether it is banned
or allowed.
Later on, we attempt to clear that list, but we incorrectly ask for the
util pointers to be free()'d, too. This behavior (introduced back in
6dd3456a8c (upload-pack.c: allow banning certain object filter(s),
2020-08-03)) leads to an invalid free, and causes us to crash.
In order to trigger this, one needs to (a) fetch from a server that
has at least one object filter allowed, and (b) ask only for a subset
of the allowed filters (i.e., we cannot ask for a banned filter, since
that causes us to die() before we hit the bogus string_list_clear()).
In that case, whatever banned filters exist will cause a noop free()
(since those ->util pointers are set to 0), but the first allowed filter
we try to free will crash us.
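In code, the pattern is roughly this (simplified; the list and variable
names are illustrative, not the exact ones in upload-pack.c):
  /* an int smuggled into the ->util pointer, not an allocation */
  string_list_insert(&allowed_filters, "blob:none")->util =
          (void *)(intptr_t)1;
  /* BAD: asks string_list_clear() to free() the fake pointers */
  string_list_clear(&allowed_filters, 1);
  /* GOOD: leave the non-pointer utils alone */
  string_list_clear(&allowed_filters, 0);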
We never noticed this in the tests because we didn't have an example of
setting 'uploadPackFilter' configuration variables and then following up
with a valid fetch. The first new 'git clone' prevents further
regression here. For good measure on top, add a test which checks the
same behavior at a tree depth greater than 0.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If 'transport_fetch_refs()' fails during 'git clone' (e.g., because of
an error on the remote's side in 'git upload-pack'), then the failure
is silently ignored.
Even though this has been the case at least since clone was ported to C
(way back in 8434c2f1af (Build in clone, 2008-04-27)), 'git fetch'
doesn't ignore these and reports any failures it sees.
That suggests that ignoring the return value in 'git clone' is simply an
oversight that should be corrected. That's exactly what this patch does.
(Noticing and fixing this is no coincidence, we'll want it in the next
patch in order to demonstrate a regression in 'git upload-pack' via a
'git clone'.)
There's no additional logging here, but that matches how 'git fetch'
handles the same case. An assumption there is that whichever part of
transport_fetch_refs() fails will complain loudly, so any additional
logging here is redundant.
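The shape of the fix is simply to look at the return value, e.g. (a
sketch; the exact error handling in the patch may differ):
  if (transport_fetch_refs(transport, mapped_refs))
          die(_("remote transport reported error")); /* message illustrative */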
Co-authored-by: Jeff King <peff@peff.net>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit 6dc905d974 (config: split repo scope to local and worktree,
2020-02-10) made some "if" statements multiline, but didn't indent the
second lines in our usual way.
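For reference, the usual style lines continuations up under the content
after the opening parenthesis, e.g. (generic example, not the actual
condition from that commit):
  if (condition_one(a, b) &&
      condition_two(c))
          do_something();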
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If we encounter an error in parse_filter_object_config(), we'll complain
to stderr but won't actually propagate the return value up the stack.
This is unlike most of our config callbacks, which return the error to
git_config() so it can die (this includes the call just below us to
parse_hide_refs_config(), which can also produce errors).
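The fix is just to stop discarding the result, roughly (a sketch of the
callback's tail; the argument lists are abbreviated/illustrative):
  /* in upload_pack_config(): */
  if (parse_filter_object_config(var, value, data) < 0)
          return -1; /* let git_config() die, as other callbacks do */
  return parse_hide_refs_config(var, value, "uploadpack");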
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
An earlier attempt to fix "git fetch --recurse-submodules" broke
another use case; revert it until a better fix is found.
* pk/subsub-fetch-fix:
Revert "submodules: fix of regression on fetching of non-init subsub-repo"
"git fetch" that is killed may leave a pack-objects process behind,
still computing to find a good compression, wasting cycles. This
has been corrected.
* jk/stop-pack-objects-when-fetch-is-killed:
  upload-pack: kill pack-objects helper on signal or exit
"git push" that is killed may leave a pack-objects process behind,
still computing to find a good compression, wasting cycles. This
has been corrected.
* jk/stop-pack-objects-when-push-is-killed:
  send-pack: kill pack-objects helper on signal or exit
Simplify the logic to deal with a repack operation that ended up
creating the same packfile.
* tb/repack-simplify:
  builtin/repack.c: don't move existing packs out of the way
  builtin/repack.c: keep track of what pack-objects wrote
  repack: make "exts" array available outside cmd_repack()
"git pull --rebase --recurse-submodules" checked for local changes
in a wrong range and failed to run correctly when it should.
* pb/pull-rebase-recurse-submodules:
  pull: check for local submodule modifications with the right range
  t5572: describe '--rebase' tests a little more
  t5572: add notes on a peculiar test
  pull --rebase: compute rebase arguments in separate function
"git-parse-remote" shell script library outlived its usefulness.
* ab/retire-parse-remote:
  submodule: fix fetch_in_submodule logic
  parse-remote: remove this now-unused library
  submodule: remove sh function in favor of helper
  submodule: use "fetch" logic instead of custom remote discovery
Versions of docbook-xsl newer than 1.79.1 allow xsltproc to assign
IDs to nodes in the generated HTML consistently, to make the output
resulting from the same source stable and reproducible.
Pass the generate.consistent.ids parameter from the command line to
ask for this feature. Older versions of the tool simply ignore the
parameter and produce their output the same way as before this
change, so there is no need to check for the toolchain version.
Signed-off-by: Arnout Engelen <arnout@bzzt.net>
Helped-by: brian m. carlson <sandals@crustytoothpaste.net>
Helped-by: Todd Zullinger <tmz@pobox.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This reverts commit 1b7ac4e6d4d490b224f5206af7418ed74e490608; in
<CAN0XMOLiS_8JZKF_wW70BvRRxkDHyUoa=Z3ODtB_Bd6f5Y=7JQ@mail.gmail.com>,
Ralf Thielow reports that "git fetch" with submodule.recurse set can
result in a bogus and infinitely recursive fetching of the same
submodule.
The old phrasing is at least questionable, if not wrong, as there are
a lot of branches out there that haven't seen active development for
years, yet they are still branches, ready to become active again at
any time.
Signed-off-by: Sergey Organov <sorganov@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We normally get the list of builtin commands by expanding BUILTIN_OBJS.
But for commands which are embedded inside another's source file (e.g.,
cmd_show() in builtin/log.c), the Makefile needs to be told explicitly
about them.
Since cmd_maintenance() is inside builtin/gc.c, it should be listed
explicitly in the BUILT_INS list in the Makefile. Not doing so isn't
_too_ tragic, as it simply means we will not make a git-maintenance
symlink in libexec/git-core. Since we encourage people to use the "git
foo" form, even in scripts which have put libexec into their PATH,
nobody seems to have noticed.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
core.sharedRepository defines which permissions Git should set when
creating files in $GIT_DIR, so that the repository may be shared with
other users. But (in its current form) the setting shouldn't affect how
files are created in the working tree. This is not respected by apply
and am (which uses apply), when creating leading directories:
  $ cat d.patch
  diff --git a/d/f b/d/f
  new file mode 100644
  index 0000000..e69de29
Apply without the setting:
  $ umask 0077
  $ git apply d.patch
  $ ls -ld d
  drwx------
Apply with the setting:
  $ umask 0077
  $ git -c core.sharedRepository=0770 apply d.patch
  $ ls -ld d
  drwxrws---
Only the leading directories are affected. That's because they are
created with safe_create_leading_directories(), which calls
adjust_shared_perm() to set the directories' permissions based on
core.sharedRepository. To fix that, let's introduce a variant of this
function that ignores the setting, and use it in apply. Also add a
regression test and a note in the function documentation about the use
of each variant according to the destination (working tree or git
dir).
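Conceptually, the split looks like this (a sketch; the new variant's
actual name may differ):
  /* files under $GIT_DIR: honor core.sharedRepository */
  safe_create_leading_directories(gitdir_path);
  /* files in the working tree: let the umask alone decide */
  safe_create_leading_directories_no_share(worktree_path);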
Signed-off-by: Matheus Tavares <matheus.bernardino@usp.br>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The ctime_r() and asctime_r() functions are reentrant, but have
no check that the buffer we pass in is long enough (the manpage says it
"should have room for at least 26 bytes"). Since this is such an
easy-to-get-wrong interface, and since we have the much safer strftime()
as well as its more convenient strbuf_addftime() wrapper, let's ban both
of those.
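The mechanism is the usual banned.h trick of redefining the function to
something that cannot compile, roughly:
  /* in banned.h, following the existing pattern there */
  #undef ctime_r
  #define ctime_r(t, buf) BANNED(ctime_r)
  #undef asctime_r
  #define asctime_r(t, buf) BANNED(asctime_r)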
Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
b7ce24d095 (Turn `git serve` into a test helper, 2019-04-18) demoted git
serve from a builtin command to a test helper. As a result the
git-serve binary is no longer built and thus doesn't have to be ignored
anymore.
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This was accidentally added by e00cf070a4 (git-sh-i18n.sh: add no-op
gettext() and eval_gettext() wrappers, 2011-05-14), even though an
earlier commit in the same series had already done so.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A test marked with EXPENSIVE creates two 2.5GB files and adds them to
the repository. This takes 194s to run on my machine, versus 2s when the
EXPENSIVE prereq isn't set. We can trim this down a bit by doing two
things:
- use "git commit --quiet" to avoid spending time generating a diff
summary (this actually only helps for the second commit, but I've
added it here to both for consistency). This shaves off 8s.
- set core.compression to 0. We know these files are full of random
bytes, and so won't compress (that's the point of the test!).
Spending cycles on zlib is pointless. This shaves off 122s.
After this, my total time to run the script is 64s. That won't help
normal runs without GIT_TEST_LONG set, of course, but it's easy enough
to do.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
sparse-checkouts are built on the patterns in the
$GIT_DIR/info/sparse-checkout file, where commands have modified
behavior for paths that do not match those patterns. The differences in
behavior, as far as the bugs discussed here are concerned, fall into
three different categories (with the git subcommands that fall into
each category listed):
  * commands that only look at files matching the patterns:
      * status
      * diff
      * clean
      * update-index
  * commands that remove files from the working tree that do not match
    the patterns, and restore files that do match them:
      * read-tree
      * switch
      * checkout
      * reset (--hard)
  * commands that omit writing files to the working tree that do not
    match the patterns, unless those files are not clean:
      * merge
      * rebase
      * cherry-pick
      * revert
There are some caveats above, e.g. a plain `git diff` ignores files
outside the sparsity patterns but will show diffs for such paths when
revision arguments are passed. (Technically, diff is treating the
sparse paths as matching HEAD.) So, there is some
internal inconsistency among these commands. There are also additional
commands that should behave differently in the face of sparse-checkouts,
as the sparse-checkout documentation alludes to, but the above is
sufficient for me to explain how `git stash` is affected.
What is relevant here is that logically 'stash' should behave like a
merge; it three-way merges the changes the user had in progress at stash
creation time, the HEAD at the time the stash was created, and the
current HEAD, in order to get the stashed changes applied to the current
branch. However, this simplistic view doesn't quite work in practice,
because stash tweaks it a bit due to two factors: (1) flags like
--keep-index and --include-untracked (why we used two different verbs,
'keep' and 'include', is a rant for another day) modify what should be
staged at the end and include more things that should be quasi-merged,
(2) stash generally wants changes to NOT be staged. It only provides
exceptions when (a) some of the changes had conflicts and thus we want
to use stages to denote the clean merges and higher order stages to
mark the conflicts, or (b) if there is a brand new file we don't want
it to become untracked.
stash has traditionally gotten this special behavior by first doing a
merge, and then when it's clean, applying a pipeline of commands to
modify the result. The logic for unstaging-non-newly-added-files came
from the following series of commands:
  git diff-index --cached --name-only --diff-filter=A $CTREE >"$a"
  git read-tree --reset $CTREE
  git update-index --add --stdin <"$a"
  rm -f "$a"
Looking back at the different types of special sparsity handling listed
at the beginning of this message, you may note that we have at least one
of each type covered here: merge, diff-index, and read-tree. The weird
mix-and-match led to 3 different bugs:
(1) If a path merged cleanly and it didn't match the sparsity patterns,
the merge backend would know to avoid writing it to the working tree and
keep the SKIP_WORKTREE bit, simply only updating it in the index.
Unfortunately, the subsequent commands would essentially undo the
changes in the index and thus simply toss the changes altogether since
there was nothing left in the working tree. This means the stash is
only partially applied.
(2) If a path existed in the worktree before `git stash apply` despite
having the SKIP_WORKTREE bit set, then the `git read-tree --reset` would
print an error message of the form
  error: Entry 'modified' not uptodate. Cannot merge.
and cause stash to abort early.
(3) If there was a brand new file added by the stash, then the
diff-index command would save that pathname to the temporary file, the
read-tree --reset would remove it from the index, and the update-index
command would barf due to no such file being present in the working
copy; it would print a message of the form:
  error: NEWFILE: does not exist and --remove not passed
  fatal: Unable to process path NEWFILE
and then cause stash to abort early.
Basically, the whole idea of unstage-unless-brand-new requires special
care when you are dealing with a sparse-checkout. Fix these problems
by applying the following simple rule:
  When we unstage files, if they have the SKIP_WORKTREE bit set,
  clear that bit and write the file out to the working directory.
  (*) If there's already a file present in the way, rename it first.
This fixes all three problems in t7012.13 and allows us to mark it as
passing.
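In terms of the index machinery, the rule amounts to something like
this while unstaging an entry "ce" (a sketch, not the patch itself;
rename_to_backup() stands in for however the code moves the existing
file aside, and "state" is the usual "struct checkout"):
  if (ce_skip_worktree(ce)) {
          ce->ce_flags &= ~CE_SKIP_WORKTREE;
          if (file_exists(ce->name))
                  rename_to_backup(ce->name); /* hypothetical helper */
          checkout_entry(ce, &state, NULL, NULL);
  }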
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When stash was converted from shell to a builtin, it merely
transliterated the forking of various git commands from shell to a C
program that would fork the same commands. Some of those were converted
over to actual library calls, but much of the pipeline-of-commands
design still remains. Fix some of this by replacing the portion
corresponding to
  git diff-index --cached --name-only --diff-filter=A $CTREE >"$a"
  git read-tree --reset $CTREE
  git update-index --add --stdin <"$a"
  rm -f "$a"
with a library function that does the same thing. (The read-tree
--reset was already partially converted over to a library call, but as
an independent piece.) Note here that this came after a merge operation
was performed. The merge machinery always stages anything that cleanly
merges, and the above code only runs if there are no conflicts. Its
purpose is to make it so that when there are no conflicts, all the
changes from the stash are unstaged. However, that causes brand new
files from the stash to become untracked, so the code above first saves
those files off and then re-adds them afterwards.
We replace the whole series of commands with a simple function that will
unstage files that are not newly added. This doesn't fix any bugs in
the usage of these commands; it simply matches the existing behavior but
makes it into a single atomic operation that we can then operate on as a
whole. A subsequent commit will take advantage of this to fix issues
with these commands in sparse-checkouts.
This conversion incidentally fixes t3906.1, because the separate
update-index process would die with the following error messages:
  error: uninitialized_sub: is a directory - add files inside instead
  fatal: Unable to process path uninitialized_sub
The unstaging of the directory as a submodule meant it was no longer
tracked, and thus as an uninitialized directory it could not be added
back using `git update-index --add`, thus resulting in this error and
early abort. Most of the submodule tests in 3906 continue to fail after
this change, this change was just enough to push the first of those
tests to success.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Applying stashes in sparse-checkouts, particularly when the patterns
used to define the sparseness have changed between when the stash was
created and when it is applied, has a number of bugs. The primary
problem is that stashes are sometimes only partially applied. In most
such cases, this happens silently, without any warning or error being
displayed, and with a 0 exit status.
There are, however, a few cases when non-translated error messages are
shown and the stash application aborts early. The first is when there
are files present despite the SKIP_WORKTREE bit being set, in which case
the error message shown is:
  error: Entry 'PATHNAME' not uptodate. Cannot merge.
The other situation is when a stash contains new files to add to the
working tree; in this case, the code aborts early but still has the
stash partially applied, and shows the following error message:
  error: NEWFILE: does not exist and --remove not passed
  fatal: Unable to process path NEWFILE
Add a test that can trigger all three of these problems. Have it
carefully check that the working copy and SKIP_WORKTREE bits are as
expected after the stash application. The test is currently marked as
expected to fail, but subsequent commits will implement the fixes and
toggle the expectation.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The traditional gmtime(), localtime(), ctime(), and asctime() functions
return pointers to shared storage. This means they're not thread-safe,
and they also run the risk of somebody holding onto the result across
multiple calls (where each call invalidates the previous result).
All callers should be using their reentrant counterparts.
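For illustration, the difference between the two interfaces:
  time_t t = time(NULL);
  struct tm tm_buf;
  /* thread-unsafe: points into shared static storage */
  struct tm *bad = localtime(&t);
  /* reentrant: result lands in caller-provided storage */
  struct tm *good = localtime_r(&t, &tm_buf);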
Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
To generate its filename, the 'git bugreport' builtin asks the system
for the current time with 'localtime()'. Since this uses a shared
buffer, it is not thread-safe.
Even though 'git bugreport' is not multi-threaded, using localtime() can
trigger some static analysis tools to complain, and a quick
  $ git grep -oh 'localtime\(_.\)\?' -- **/*.c | sort | uniq -c
shows that the only other usage of the thread-unsafe 'localtime' is in
a piece of documentation.
So, convert this instance to use the thread-safe version for
consistency, and to appease some analysis tools.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We spawn an external pack-objects process to actually send objects to
the remote side. If we are killed by a signal during this process, then
pack-objects may continue to run. As soon as it starts producing output
for the pack, it will see a failure writing to upload-pack and exit
itself. But before then, it may do significant work traversing the
object graph, compressing deltas, etc, which will all be pointless. So
let's make sure to kill pack-objects as soon as we know that the caller
will not read the result.
There's no test here, since it's inherently racy, but here's an easy
reproduction on a large-ish repo like linux.git:
- make sure you don't have pack bitmaps (since they make the enumerating
phase go quickly). For linux.git it takes ~30s or so to walk the
whole graph on my machine.
- run "git clone --no-local -q . dst"; the "-q" is important because
if pack-objects is writing progress to upload-pack (to get
multiplexed over the sideband to the client), then it will notice the
failure to write to stderr pretty quickly
- kill the client-side clone process in another terminal (don't use
^C, as that will send SIGINT to all of the processes)
- run "ps au | grep git" or similar to observe upload-pack dying
within 5 seconds (it will send a keepalive that will notice the
client has gone away)
- but you'll still see pack-objects consuming 100% CPU (and 1GB+ of
RAM) during the traversal and delta compression phases. It will exit
as soon as it starts to write the pack (when it will notice that
upload-pack went away).
With this patch, pack-objects exits as soon as upload-pack does.
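The fix boils down to marking the child to be cleaned up along with us,
roughly (a sketch of the relevant bits on the upload-pack side):
  struct child_process pack_objects = CHILD_PROCESS_INIT;
  /* run-command will kill the child if we exit or die from a signal */
  pack_objects.clean_on_exit = 1;
  if (start_command(&pack_objects))
          die("unable to spawn pack-objects"); /* message illustrative */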
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a checkbox in the SSH askpass helper to optionally show the input
text which is often a password.
* da/askpass-mask-checkbox:
  git-gui: ssh-askpass: add a checkbox to show the input text
Hide the input text by default since the field is
commonly used for sensitive information such as passwords.
Add a "Show input" checkbox to conditionally show the input.
Helped-by: Miguel Boekhold <miguel.boekhold@osudio.com>
Signed-off-by: Efimov Vasily <laer.18@gmail.com>
Signed-off-by: David Aguilar <davvid@gmail.com>
Signed-off-by: Pratyush Yadav <me@yadavpratyush.com>
Translation is done on Transifex: https://www.transifex.com/djm00n/git-po-ru/git-gui/
If you have any corrections please report them there.
Signed-off-by: Dimitriy Ryazantcev <dimitriy.ryazantcev@gmail.com>
Signed-off-by: Pratyush Yadav <me@yadavpratyush.com>
git imap-send does not parse the default git config settings and thus
ignores the core.askpass value.
Rewrite config parsing to support core settings.
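The usual way to get that is to chain to the default handler at the end
of the config callback, e.g. (a sketch; the imap-specific handling
shown is illustrative):
  static int git_imap_config(const char *var, const char *val, void *cb)
  {
          if (!strcmp(var, "imap.sslverify")) {
                  server.ssl_verify = git_config_bool(var, val);
                  return 0;
          }
          /* fall through so core.* (e.g. core.askpass) is parsed too */
          return git_default_config(var, val, cb);
  }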
Reported-by: Philippe Blain <levraiphilippeblain@gmail.com>
Signed-off-by: Nicolas Morey-Chaisemartin <nmoreychaisemartin@suse.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Teach git-gui to read the commit message template and pre-populate the
commit message buffer with it.
* ms/commit-template:
  git-gui: use commit message template
  git-gui: Only touch GITGUI_MSG when needed
Turns out we always need to set the ignored prefix (compset) to get
behavior similar to that of default Bash.
The issue can be seen with:
  git show master:<tab>
Commit 94b2901cfe wrongly removed it.
Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>