The intention is to ignore all output from the 'git stash apply' call.
Adjust the order of the redirection to ensure that both stdout and
stderr are redirected to /dev/null.
Signed-off-by: Todd Zullinger <tmz@pobox.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In 29ff1f8f74 (t: lib-gpg: flush gpg agent on startup, 2017-07-20), a
call to gpgconf was added to kill the gpg-agent. The intention was to
ignore all output from the call, but the redirections are in the wrong
order: the redirection to /dev/null must come first (">/dev/null 2>&1")
so that both stdout and stderr end up there. Without this, gpgconf from
gnupg-2.0 releases would output
'gpgconf: invalid option "--kill"' each time it was called.
Signed-off-by: Todd Zullinger <tmz@pobox.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The set of paths output from "git status --ignored" was tied
closely to its "--untracked=<mode>" option, but now it can be
controlled more flexibly. Most notably, a directory that is
ignored because it is explicitly listed in the ignore/exclude
mechanism can be handled differently from a directory that ends up
being ignored only because all files in it are ignored.
* jm/status-ignored-files-list:
status: test ignored modes
status: document options to show matching ignored files
status: report matching ignored and normal untracked
status: add option to show ignored files differently
If an empty string is passed to link_alt_odb_entries(), our
loop finds no entries and we link nothing. But we still do
some preparatory work to normalize the object directory
path, even though we'll never look at the result. This
triggers in basically every git process, since we feed the
usually-empty ALTERNATE_DB_ENVIRONMENT to the function.
Let's detect early that there's nothing to do and return.
While we're at it, let's treat NULL the same as an empty
string as a favor to our callers. That saves
prepare_alt_odb() from having to cover this case.
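A minimal sketch of the early return (the parameter list here is
simplified for illustration and does not match the real signature
exactly):

    static void link_alt_odb_entries(const char *alt, int sep,
                                     const char *relative_base, int depth)
    {
        if (!alt || !*alt)
            return; /* NULL or empty string: no entries to link */

        /* ... normalize the object directory path and parse entries ... */
    }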
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The SubmittingPatches document is often cited by outside parties as an
example of good practices to follow, including logical, independent
commits; patch sign-offs; and sending patches to a mailing list.
Currently, people who want to cite a particular section tend to either
refer to it by name and let the interested party search through the
document to find it, or link to a given line number on GitHub and hope
the file doesn't change.
Instead, convert the document to AsciiDoc. Build it as part of the
technical documentation, since it is likely of interest to the same
group of people. Provide stable links to the sections which outside
parties are likely to want to link to. Make some minor structural
changes to organize it so that it can be formatted sanely.
Since the makefile needs a .txt extension in order to build with the
rest of the documentation, simply copy the file. Ignore the temporary
file so it doesn't get checked in accidentally, and remove it as part of
the clean process. Do this instead of renaming the file so that people
who have already linked to the documentation (the very people we are
trying to help) don't find their links broken. Avoid symlinking, since
symbolic links are not handled well on Windows.
This allows us to render the document as part of the website for the
benefit of others who wish to link to it, and provides a more nicely
formatted view for our community and potential contributors.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Switch the uses of empty_tree_oid and empty_blob_oid to use the
current_hash abstraction that represents the current hash algorithm in
use.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In future versions of Git, we plan to support an additional hash
algorithm. Integrate the enumeration of hash algorithms with repository
setup, and store a pointer to the enumerated data in struct repository.
Of course, we currently only support SHA-1, so hard-code this value in
read_repository_format. In the future, we'll enumerate this value from
the configuration.
Add a constant, the_hash_algo, which points to the hash_algo structure
pointer in the repository global. Note that this is the hash which is
used to serialize data to disk, not the hash which is used to display
items to the user. The transition plan anticipates that these may be
different. We can add an additional element in the future (say,
ui_hash_algo) to provide for this case.
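Roughly, the shape of the change looks like this (a sketch based on the
description above; the other members of struct repository are omitted):

    struct repository {
        /* ... */

        /* hash algorithm used to serialize objects to disk */
        const struct git_hash_algo *hash_algo;
    };

    /* the hash algorithm of the repository we are currently working in */
    #define the_hash_algo (the_repository->hash_algo)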
Include repository.h in cache.h since we now need to have access to
these struct and variable definitions.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since in the future we want to support an additional hash algorithm, add
a structure that represents a hash algorithm and all the data that must
go along with it. Add a constant to allow easy enumeration of hash
algorithms. Implement function typedefs to create an abstract API that
can be used by any hash algorithm, and wrappers for the existing SHA1
functions that conform to this API.
Expose a value for hex size as well as binary size. While one will
always be twice the other, the two values are both used extremely
commonly throughout the codebase and providing both leads to improved
readability.
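A simplified sketch of such a structure and its function typedefs
(member names here mirror the description above; the real structure
carries a few more members):

    typedef void (*git_hash_init_fn)(void *ctx);
    typedef void (*git_hash_update_fn)(void *ctx, const void *in, size_t len);
    typedef void (*git_hash_final_fn)(unsigned char *hash, void *ctx);

    struct git_hash_algo {
        const char *name;      /* name of the algorithm, e.g. "sha1" */
        size_t rawsz;          /* size of a hash in binary form */
        size_t hexsz;          /* size of a hash in hex form, 2 * rawsz */
        git_hash_init_fn init_fn;
        git_hash_update_fn update_fn;
        git_hash_final_fn final_fn;
    };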
Don't include an entry in the hash algorithm structure for the null
object ID. As this value is all zeros, any suitably sized all-zero
object ID can be used, and there's no need to store a given one on a
per-hash basis.
The current hash function transition plan envisions a time when we will
accept input from the user that might be in SHA-1 or in the NewHash
format. Since we cannot know which the user has provided, add a
constant representing the unknown algorithm to allow us to indicate that
we must look the correct value up. Provide dummy API functions that die
in this case.
Finally, include git-compat-util.h in hash.h so that the required types
are available. This aids people using automated tools in their editors.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We enumerate several different items as part of struct
repository_format, but then actually set up those values using the
global variables we've initialized from them. Instead, let's pass a
pointer to the structure down to the code where we enumerate these
values, so we can later on use those values directly to perform setup.
This technique makes it easier for us to determine additional items
about the repository format (such as the hash algorithm) and then use
them for setup later on, without needing to add additional global
variables. We can't avoid using the existing global variables since
they're intricately intertwined with how things work at the moment, but
this improves things for the future.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It was possible to invoke "git bisect run" without any command.
This marks every commit as good, because running an empty "$@"
yields an exit status of 0.
This is most probably not what a user wants (otherwise she would
invoke "git bisect run true"), so not providing a command now
results in an error.
Signed-off-by: Stephan Beyer <s-beyer@gmx.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If you have a pcre1 library which is compiled with JIT enabled then
PCRE_STUDY_JIT_COMPILE will be defined whether or not the
NO_LIBPCRE1_JIT configuration is set.
This means that we enable JIT functionality when calling pcre_study
even if NO_LIBPCRE1_JIT has been explicitly set and we just use plain
pcre_exec later.
Fix this by using our own macro (GIT_PCRE_STUDY_JIT_COMPILE), which we
define as PCRE_STUDY_JIT_COMPILE only if NO_LIBPCRE1_JIT is not set, and
as 0 otherwise, as before.
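In other words, something along these lines (a sketch of the
preprocessor logic described above, not the exact hunk):

    #ifndef NO_LIBPCRE1_JIT
    /* JIT is wanted and available: let pcre_study() compile JIT code */
    #define GIT_PCRE_STUDY_JIT_COMPILE PCRE_STUDY_JIT_COMPILE
    #else
    /* JIT explicitly disabled: pass no JIT flag to pcre_study() */
    #define GIT_PCRE_STUDY_JIT_COMPILE 0
    #endif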
Reviewed-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The test for '--abbrev' in t4201-shortlog.sh assumes that the commits
generated in the test can always be uniquely abbreviated to 5 hex digits
but this is not always the case. If you were unlucky and happened to run
the test at (say) Thu Jun 22 03:04:49 2017 +0000, you would find that
the first commit generated would collide with a tree object created
later in the same test.
This can be simulated in the version of t4201-shortlog.sh prior to this
commit by setting GIT_COMMITTER_DATE and GIT_AUTHOR_DATE to 1498100689
after sourcing test-lib.sh.
Change the test to use --abbrev=35 instead of --abbrev=5, to all but
eliminate the possibility of a partial collision, and add a call to
test_tick in the setup to make the test repeatable (the latter alone
is sufficient to make it robust enough).
Signed-off-by: Charles Bailey <cbailey32@bloomberg.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Tweak a small number of files to mention "view" as an alternative to
"visualize".
Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Simplify and speed up the process of finding the git worktree when
running on Windows by keeping it in perl and avoiding spawning helper
processes.
Signed-off-by: Ben Peart <benpeart@microsoft.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
fuzzy_matchlines() uses pointers to the first and last characters of
two lines to keep track while matching them. This makes it impossible
to deal with empty strings: it accesses characters before the start of
empty lines. It can also access characters after the end when checking
for trailing whitespace in the main loop.
Avoid that by using pointers to the first character and the one *after*
the last one. This is well-defined as long as the latter is not
dereferenced. Basically rewrite the function based on that premise; it
becomes much simpler as a result. There is no need to check for
leading whitespace outside of the main loop anymore.
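For illustration, with the begin/one-past-the-end convention, trimming
whitespace never reads outside the line, even an empty one (a sketch of
the idea, not the actual rewritten function):

    /*
     * Shrink the half-open range [*begin, *end) to exclude leading and
     * trailing whitespace.  Nothing outside the range is ever read, so
     * an empty line (begin == end) is handled naturally.
     */
    static void trim_range(const char **begin, const char **end)
    {
        while (*begin < *end && isspace((unsigned char)**begin))
            (*begin)++;
        while (*begin < *end && isspace((unsigned char)(*end)[-1]))
            (*end)--;
    }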
Reported-by: Mahmoud Al-Qudsi <mqudsi@neosmart.net>
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The examples and common practice for adding markers such as "RFC" or
"v2" to the subject of patch emails is to have them within the same
brackets as the "PATCH" text, not after the closing bracket. Further,
the practice of `git format-patch` and the like, as well as what appears
to be the more common practice on the mailing list, is to use "[RFC
PATCH]", not "[PATCH/RFC]".
Update the SubmittingPatches article to match and to reference the
`format-patch` helper arguments, and also make some minor text
clarifications in the area.
Signed-off-by: Adam Dinwoodie <adam@dinwoodie.org>
Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
ba1b9cac ("fsmonitor: delay updating state until after split index
is merged", 2017-10-27) resolved the problem of the fsmonitor data
being applied to the non-base index when reading; however, a similar
problem exists when writing the index. Specifically, writing of the
fsmonitor extension happens only after the work to split the index
has been applied -- as such, the information in the index is only
for the non-"base" index, and thus the extension information
contains only partial data.
When saving, compute the ewah bitmap before the index is split, and
store it in the fsmonitor_dirty field, mirroring the behavior that
occurred during reading. fsmonitor_dirty is kept from being leaked by
being freed when the extension data is written -- which always happens
precisely once, no matter the split index configuration.
Signed-off-by: Alex Vandiver <alexmv@dropbox.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Though the process has chdir'd to the root of the working tree, the
PWD environment variable is only guaranteed to be updated accordingly
if a shell is involved -- which is not guaranteed to be the case.
That is, if `/usr/bin/perl` is a binary, $ENV{PWD} is unchanged from
whatever spawned `git`; if `/usr/bin/perl` is instead a trivial shell
wrapper around the real `perl`, `$ENV{PWD}` will have been updated to
the root of the working copy.
Update to read from the Cwd module using the `getcwd` syscall, not the
PWD environment variable. The Cygwin case is left unchanged, as it
necessarily _does_ go through a shell.
Signed-off-by: Alex Vandiver <alexmv@dropbox.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
MinGW updates.
* js/mingw-redirect-std-handles:
mingw: document the standard handle redirection
mingw: optionally redirect stderr/stdout via the same handle
mingw: add experimental feature to redirect standard handles
The credential helper for libsecret (in contrib/) has been improved
to allow possibly prompting the end user to unlock secrets that are
currently locked (otherwise the secrets may not be loaded).
* dk/libsecret-unlock-to-load-fix:
credential-libsecret: unlock locked secrets
Correct start-up sequence so that a repository could be placed
immediately under the root directory again (which was broken around
Git 2.13).
* js/early-config:
setup: avoid double slashes when looking for HEAD
TravisCI build updates.
* sg/travis-fixes:
travis-ci: don't build Git for the static analysis job
travis-ci: fix running P4 and Git LFS tests in Linux build jobs
A single-word "unsigned flags" in the diff options is being split
into a structure with many bitfields.
* bw/diff-opt-impl-to-bitfields:
diff: make struct diff_flags members lowercase
diff: remove DIFF_OPT_CLR macro
diff: remove DIFF_OPT_SET macro
diff: remove DIFF_OPT_TST macro
diff: remove touched flags
diff: add flag to indicate textconv was set via cmdline
diff: convert flags to be stored in bitfields
add, reset: use DIFF_OPT_SET macro to set a diff flag
After an error from lstat(), the diff_populate_filespec() function
sometimes still went ahead and used invalid data in struct stat,
which has been fixed.
* ao/diff-populate-filespec-lstat-errorpath-fix:
diff: fix lstat() error handling in diff_populate_filespec()
Descriptions of the blame.{showroot,blankboundary,showemail,date}
configuration variables have been added to "git config --help".
* sb/blame-config-doc:
config: document blame configuration
The mailing address for the FSF has changed over the years. Rather than
updating the address across all files, refer readers to gnu.org, as the
GNU GPL documentation now suggests for license notices. The mailing
address is retained in the full license files (COPYING and LGPL-2.1).
Signed-off-by: Todd Zullinger <tmz@pobox.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The mailing address for the FSF has changed over the years. Rather than
updating the address across all files, refer readers to gnu.org, as the
GNU GPL documentation now suggests for license notices. The mailing
address is retained in the full license files (COPYING and LGPL-2.1).
The old address is still present in t/diff-lib/COPYING. This is
intentional, as the file is used in tests and the contents are not
expected to change.
Signed-off-by: Todd Zullinger <tmz@pobox.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The illustrated history used to explain the `--fork-point` mode
named three keypoint commits B3, B2 and B1 from the oldest to the
newest, which was hard to read. Relabel them to B0, B1, B2. Also
illustrate what the history looks like after a rebase made with the
`--fork-point` facility.
The text already mentions use of the reflog, but the description is
not clear about what benefit we are trying to gain by using it.
Clarify that it is to find the commits that were known to be at the
tip of the remote-tracking branch. This in turn requires users to
understand the ramifications of the underlying assumptions: expiry
of reflog entries makes it impossible to determine which commits
were at the tip of the remote-tracking branch, and we fail when in
doubt (instead of giving a random and incorrect result without even
a warning). Another limitation is that the mode is not useful if you
forked not from the tip of a remote-tracking branch but from a
commit in the middle of its history.
Describe them.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We currently have seven callers of `reduce_heads(foo)`. Six of them do
not use the original list `foo` again, and actually, all six of those
end up leaking it.
Introduce and use `reduce_heads_replace(&foo)` as a leak-free version of
`foo = reduce_heads(foo)` to fix several of these. Fix the remaining
leaks using `free_commit_list()`.
While we're here, document `reduce_heads()` and mark it as `extern`.
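The new helper is essentially the following (a sketch matching the
description above):

    /* Like `heads = reduce_heads(heads)`, but without leaking the old list. */
    void reduce_heads_replace(struct commit_list **heads)
    {
        struct commit_list *result = reduce_heads(*heads);
        free_commit_list(*heads);
        *heads = result;
    }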
Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In several functions, we iterate through a commit list by assigning
`result = result->next`. As a consequence, we lose the original pointer
and eventually leak the list.
Rewrite the loops so that we keep the original pointers, then call
`free_commit_list()`. Various alternatives were considered:
1) Use `UNLEAK(result)` before the loop. Simple change, but not very
pretty. These would definitely be new lows among our usages of UNLEAK.
2) Use `pop_commit()` when looping. Slightly less simple change, but it
feels slightly preferable to first display the list, then free it.
3) As in this patch, but with `UNLEAK()` instead of freeing. We'd still
go through all the trouble of refactoring the loop, and because it's not
super-obvious that we're about to exit, let's just free the lists -- it
probably doesn't affect the runtime much.
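For illustration, the rewritten loops end up roughly in this shape (the
exact body of the loop varies per caller):

    struct commit_list *p;

    for (p = result; p; p = p->next)
        printf("%s\n", oid_to_hex(&p->item->object.oid));
    free_commit_list(result);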
In `handle_independent()` we can drop `result` while we're here and
reuse the `revs` variable instead. That matches several other users of
`reduce_heads()`. The memory leak that this hides will be addressed in
the next commit.
Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Without this, the fetch process appears to hang while we fetch page
listings across the namespaces.
silence this with -q, but that's an issue already present everywhere
in the code and should be fixed separately:
https://github.com/Git-Mediawiki/Git-Mediawiki/issues/30
Signed-off-by: Antoine Beaupré <anarcat@debian.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Ideally, we'd process them in numeric order since that is more
logical, but we can't do that yet since this is where we find the
numeric identifiers in the first place. Lexicographic order is a good
compromise.
Signed-off-by: Antoine Beaupré <anarcat@debian.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we specify a list of namespaces to fetch from, by default the MW
API will not fetch from the default namespace, referred to as "(Main)"
in the documentation:
https://www.mediawiki.org/wiki/Manual:Namespace#Built-in_namespaces
I haven't found a way to address that "(Main)" namespace when getting
the namespace ids: indeed, when listing namespaces, there is no
"canonical" field for the main namespace, although there is a "*"
field that is set to "" (empty). So in theory, we could specify the
empty namespace to get the main namespace, but that would make
specifying namespaces harder for the user: we would need to teach
users about the "empty" default namespace. It would also make the code
more complicated: we'd need to parse quotes in the configuration.
So we simply override the query here and allow the user to specify
"(Main)" since that is the publicly documented name.
Signed-off-by: Antoine Beaupré <anarcat@debian.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Virtual namespaces do not correspond to pages in the database and are
automatically generated by MediaWiki. It therefore makes little sense
to fetch pages from those namespaces, and the MW API does not support
listing them anyway.
According to the documentation, those virtual namespaces are currently
"Special" (-1) and "Media" (-2) but we treat all negative namespaces
as "virtual" as a future-proofing mechanism.
Signed-off-by: Antoine Beaupré <anarcat@debian.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If we fail to find a requested namespace, we should tell the user
which ones we know about, since those were already fetched. This
allows users to fetch all namespaces by specifying a dummy namespace,
letting the command fail, and then copying the reported list of
namespaces into the config.
Eventually, we should have a flag that allows fetching all namespaces
automatically.
Reviewed-by: Antoine Beaupré <anarcat@debian.org>
Signed-off-by: Antoine Beaupré <anarcat@debian.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
There is code in post_read_index_from() to catch out of order
entries when reading an index file. This order verification is ~13%
of the cost of every call to read_index_from().
Update check_ce_order() so that it skips this verification unless
the "verify_ce_order" global variable is set.
Teach fsck to force this verification.
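Schematically, the gating looks like this (a sketch, not the full
function):

    static void check_ce_order(struct index_state *istate)
    {
        unsigned int i;

        if (!verify_ce_order)
            return; /* verification is skipped by default */

        for (i = 1; i < istate->cache_nr; i++) {
            /* ... verify istate->cache[i - 1] sorts before istate->cache[i] ... */
        }
    }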
The effect can be seen using t/perf/p0002-read-cache.sh:
    Test                                          HEAD              HEAD~1
    -------------------------------------------------------------------------------------
    0002.1: read_cache/discard_cache 1000 times   0.41(0.04+0.04)   0.50(0.00+0.10) +22.0%
Signed-off-by: Ben Peart <benpeart@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This not only prevents regressions, but also serves as documentation
of what this new feature is expected to do.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
There are times when scripts want to know not only the name of the
push branch on the remote, but also the name of the branch as known
by the remote repository.
An example of this is when a tool wants to push to the very same branch
from which it would pull automatically, i.e. the `<remote>` and the `<to>`
in `git push <remote> <from>:<to>` would be provided by
`%(upstream:remotename)` and `%(upstream:remoteref)`, respectively.
This patch offers the new suffix :remoteref for the `upstream` and `push`
atoms, making it possible to show exactly that. Example:
    $ cat .git/config
    ...
    [remote "origin"]
        url = https://where.do.we.come/from
        fetch = refs/heads/*:refs/remotes/origin/*
    [branch "master"]
        remote = origin
        merge = refs/heads/master
    [branch "develop/with/topics"]
        remote = origin
        merge = refs/heads/develop/with/topics
    ...

    $ git for-each-ref \
        --format='%(push) %(push:remoteref)' \
        refs/heads

    refs/remotes/origin/master refs/heads/master
    refs/remotes/origin/develop/with/topics refs/heads/develop/with/topics
Signed-off-by: J Wyman <jwyman@microsoft.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Factor out the commonalities from test_submodule_switch() and
test_submodule_forced_switch() in lib-submodule-update.sh, and document
their usage.
This also makes explicit (through the KNOWN_FAILURE_FORCED_SWITCH_TESTS
variable) the fact that, currently, none of the functionality tested
using test_submodule_forced_switch() correctly handles the situation in
which a submodule is replaced with an ordinary directory.
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A new option --ignore-cr-at-eol tells the diff machinery to treat a
carriage-return at the end of a (complete) line as if it does not
exist.
Just like other "--ignore-*" options to ignore various kinds of
whitespace differences, this will help reviewing the real changes
you made without getting distracted by spurious CRLF<->LF conversion
made by your editor program.
Helped-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
[jch: squashed in command line completion by Dscho]
Signed-off-by: Junio C Hamano <gitster@pobox.com>