* 'for-junio' of git://bogomips.org/git-svn:
git-svn: use SVN::Ra::get_dir2 when possible
git-svn: add space after "W:" prefix in warning
git-svn: (cleanup) remove editor param passing
git-svn: prepare SVN::Ra config pieces once
Git.pm: add specified name to tempfile template
git-svn: disable _rev_list memoization
git-svn: save a little memory as fetch progresses
git-svn: remove unnecessary DESTROY override
git-svn: reload RA every log-window-size
git-svn.txt: advertise pushurl with dcommit
git-svn: remove mergeinfo rev caching
git-svn: cache only mergeinfo revisions
git-svn: reduce check_cherry_pick cache overhead
git-svn: only look at the root path for svn:mergeinfo
git-svn: only look at the new parts of svn:mergeinfo
Allow painting or not painting (partial) matches in context lines
when showing "grep -C<num>" output in color.
* rs/grep-color-words:
grep: add color.grep.matchcontext and color.grep.matchselected
This avoids the following failure with normal "get_dir" on newer
versions of SVN (tested with SVN 1.8.8-1ubuntu3.1):
Incorrect parameters given: Could not convert '%ld' into a number
get_dir2 also has the potential to be more efficient by requesting
less data.
ref: <1414636504.45506.YahooMailBasic@web172304.mail.ir2.yahoo.com>
ref: <1414722617.89476.YahooMailBasic@web172305.mail.ir2.yahoo.com>
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Cc: Hin-Tak Leung <htl10@users.sourceforge.net>
The bundle_create() function has a number of logical steps:
process the input, write the refs, and write the packfile.
Recent commits split the first and third into separate
sub-functions. It's worth splitting the middle step out,
too, if only because it makes the progression of the steps
more obvious.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The new helper compute_and_write_prerequisites() is ugly, but it
cannot be avoided. Ideally we should avoid a function that computes
and does I/O at the same time, but the prerequisites lines in the
output need the human-readable title only to help the recipient of
the bundle. The code copies them straight from the rev-list output
and immediately discards them, as no other internal computation needs
that information.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The create_bundle() function, while it does one single logical
thing, takes a rather large implementation to do so.
Let's start separating what it does into smaller steps to make it
easier to see what is going on. This is a first step to separate
out the actual pack-data generation, after the earlier part of the
function figures out which part of the history to place in the
bundle.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The loop in cache-tree's update_one iterates over all the
entries in the index. For each one, we find the cache-tree
subtree which represents our path (creating it if
necessary), and then recurse into update_one again. The
return value we get is the number of index entries that
belonged in that subtree. So for example, with entries:
a/one
a/two
b/one
We start by processing the first entry, "a/one". We would
find the subtree for "a" and recurse into update_one. That
would then handle "a/one" and "a/two", and return the value
2. The parent function then skips past the 2 handled
entries, and we continue by processing "b/one".
If the recursed-into update_one ever returns 0, then we make
no forward progress in our loop. We would process "a/one"
over and over, infinitely.
This should not happen normally. Any subtree we create must
have at least one path in it (the one that we are
processing!). However, we may also reuse a cache-tree entry
we found in the on-disk index. For the same reason, this
should also never have zero entries. However, certain buggy
versions of libgit2 could produce such bogus cache-tree
records. The libgit2 bug has since been fixed, but it does
not hurt to protect ourselves against bogus input coming
from the on-disk data structures.
Note that this is not a die("BUG") or assert, because it is
not an internal bug, but rather a corrupted on-disk
structure. It's possible that we could even recover from it
(by throwing out the bogus cache-tree entry), but it is not
worth the effort; the important thing is that we report an
error instead of looping infinitely.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* git://ozlabs.org/~paulus/gitk:
gitk: Remove boilerplate for configuration variables
gitk: Show detached HEAD if --all is specified
gitk: Do not depend on Cygwin's "kill" command on Windows
If HEAD is detached, 'gitk --all' does not show it. This is inconvenient
for frontend programs; 'git log', for example, does show the detached HEAD.
gitk uses git rev-parse to find the list of branches to show. Apparently,
that command does not include the detached HEAD in its output when the
--all argument is specified. This has been discussed in [1] and declared
to be expected behavior, so rev-parse's parameters should be tuned in gitk.
[1] http://thread.gmane.org/gmane.comp.version-control.git/255996
Signed-off-by: Max Kirillov <max@max630.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Windows does not necessarily mean Cygwin; it could also be MSYS. The
latter ships with a version of "kill" that does not understand "-f".
In msysgit this was addressed by shipping Cygwin's version of kill.
Properly fix this by using the stock Windows "taskkill" command instead,
which has been available since Windows XP Professional.
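For reference, the stock Windows command can forcefully terminate a
process by its numeric PID like this (the PID shown is arbitrary, and
the exact flags gitk passes are not spelled out here):
    taskkill /F /PID 1234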
Signed-off-by: Sebastian Schuberth <sschuberth@gmail.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Memoizing these initialization functions saves some memory for
long fetches which require scanning many unwanted revisions
before any wanted revisions are encountered.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
This should help me track down errors in git-svn more easily:
write .git/Git_XXXXXX: Bad file descriptor
at /usr/lib/perl5/SVN/Ra.pm line 623
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Allow diff tool backend to stop early by exiting with a non-zero
status.
* da/difftool:
difftool: add support for --trust-exit-code
difftool--helper: exit when reading a prompt answer fails
In a tarball extract whose files are all read-only, running GPG
tests would have failed due to unwritable files.
* mg/lib-gpg-ro-safety:
t/lib-gpg: make gpghome files writable
z/OS port
* dm/port2zos:
compat/bswap.h: detect endianness from XL C compiler macros
Makefile: reorder linker flags in the git executable rule
git-compat-util.h: support variadic macros with the XL C compiler
Tighten the logic to decide that an unreachable cruft is
sufficiently old by covering corner cases such as an ancient object
becoming reachable and then going unreachable again, in which case
its retention period should be prolonged.
* jk/prune-mtime: (28 commits)
drop add_object_array_with_mode
revision: remove definition of unused 'add_object' function
pack-objects: double-check options before discarding objects
repack: pack objects mentioned by the index
pack-objects: use argv_array
reachable: use revision machinery's --indexed-objects code
rev-list: add --indexed-objects option
rev-list: document --reflog option
t5516: test pushing a tag of an otherwise unreferenced blob
traverse_commit_list: support pending blobs/trees with paths
make add_object_array_with_context interface more sane
write_sha1_file: freshen existing objects
pack-objects: match prune logic for discarding objects
pack-objects: refactor unpack-unreachable expiration check
prune: keep objects reachable from recent objects
sha1_file: add for_each iterators for loose and packed objects
count-objects: use for_each_loose_file_in_objdir
count-objects: do not use xsize_t when counting object size
prune-packed: use for_each_loose_file_in_objdir
reachable: mark index blobs as SEEN
...
Add machinery to optionally use Asciidoctor, as an alternative to
AsciiDoc, to format our documentation.
* bc/asciidoctor:
Documentation: remove Asciidoctor linkgit macro
Documentation: refactor common operations into variables
Documentation: implement linkgit macro for Asciidoctor
Documentation: move some AsciiDoc parameters into variables
Call child_process_init() instead of zeroing the memory of variables of
type struct child_process by hand before use because the former is both
clearer and shorter.
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If the asynchronous start of copy_to_sideband() fails, then any
env_array entries added to struct child_process proc by
prepare_push_cert_sha1() are leaked. Call the latter function only
after start_async() succeeded so that the allocated entries are
cleaned up automatically by start_command() or finish_command().
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Teach difftool to exit when a diff tool returns a non-zero exit
code and either --trust-exit-code is specified or
difftool.trustExitCode is true.
Forward exit codes from invoked diff tools to the caller when
--trust-exit-code is used.
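For example, either of the following illustrative invocations (the tool
and revision here are arbitrary) makes difftool propagate the tool's
exit status to the caller:
    git difftool --trust-exit-code -t vimdiff HEAD~1
    git config --global difftool.trustExitCode true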
Suggested-by: Adri Farr <14farresa@gmail.com>
Helped-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: David Aguilar <davvid@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The config option color.grep.match can be used to specify the highlighting
color for matching strings. Add the options matchContext and matchSelected
to allow different colors to be specified for matching strings in the
context vs. in selected lines. This is similar to the ms and mc specifiers
in GNU grep's environment variable GREP_COLORS.
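For example, matches on selected and context lines can be given
different colors (the color values here are arbitrary):
    git config color.grep.matchSelected "red bold"
    git config color.grep.matchContext "green"
    git grep --color=always -C2 foo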
Tests are from Zoltan Klinger's earlier attempt to solve the same
issue in a different way.
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It took me a long time to notice the rider on the pack.threads
configuration option saying that it would multiply the memory
consumption by the number of CPUs in the machine. Clarify that the
limit applies per-thread.
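For example, with hypothetical settings like these, the window memory
cap applies per-thread, so peak usage can approach 4 x 256 MiB rather
than 256 MiB total:
    git config pack.threads 4
    git config pack.windowMemory 256m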
Signed-off-by: Robert de Bath <rdebath@tvisiontech.co.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
t/lib-gpg.sh copies the test environment's gpg home to the trash
directory and makes sure the directory is writable.
Make sure the copied files are writable, too.
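A minimal sketch of the idea (the paths here are illustrative, not
the exact ones used by t/lib-gpg.sh):
    cp -R ./lib-gpg/gpghome "$TRASH_DIRECTORY/gpghome"
    chmod -R u+rwX "$TRASH_DIRECTORY/gpghome"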
Signed-off-by: Michael J Gruber <git@drmicha.warpmail.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Asciidoctor provides an extension implementing a backend-independent
macro for dealing with manpage links, just like the linkgit macro. As
this is more likely to be kept up to date with future changes in
Asciidoctor, prefer using it over reimplementing it in Git.
This reverts commit 773ee47c2b.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The Makefile performs several very similar tasks to convert AsciiDoc
files into either HTML or DocBook. Move these items into variables to
reduce the duplication.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
There is no /usr/include/endian.h equivalent on z/OS, but the
compiler will define macros to indicate endianness on host and
target hardware. This adds a test for these macros as a last
resort for determining byte order.
Signed-off-by: David Michael <fedora.dm0@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The XL C compiler can fail due to mixing library path and object
file arguments, for example when linking git while building with
"gmake LDFLAGS=-L$prefix/lib".
Move the ALL_LDFLAGS variable expansion in the git executable rule
to be consistent with all the other linking rules, namely to have
LDFLAGS such as -L$where before the object files *.o being linked
together.
Signed-off-by: David Michael <fedora.dm0@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When the XL C compiler is run with an appropriate language level or
suboption, it indicates support for variadic macros by defining the
__C99_MACRO_WITH_VA_ARGS preprocessor macro.
This was tested on z/OS, but it should also work on AIX according
to IBM documentation.
Signed-off-by: David Michael <fedora.dm0@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
An attempt to quit difftool by hitting Ctrl-D (EOF) at its prompt does
not quit it; instead, it is treated as if 'yes' had been answered to that
prompt and all following prompts, which is contrary to the user's intent.
Fix the error check.
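A sketch of the corrected logic in shell terms (simplified; this is
not the exact code in git-difftool--helper):
    printf "Launch the diff tool [Y/n]? "
    if ! read ans
    then
        exit 1  # EOF (Ctrl-D): stop instead of assuming "yes"
    fi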
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This memoization appears unneeded, as the check_cherry_pick2 cache
in front of it does enough.
With this change applied, importing from local svn+ssh and http copies
of the R repo[1] takes only 2:00 (2 hours) on my system and the git-svn
process never uses more than 60MB RSS on my x86-64 GNU/Linux system[2].
This 60M measurement is only for the git-svn Perl process itself and
does not include memory used by git subprocesses accessing large packs
(subprocess memory usage _is_ measured by my time(1) tool).
Before this change, an import took longer (2:20) on svn+ssh:// but
git-svn used around 240MB during the imports. Worse yet, git-svn
ballooned to over 400MB when writing out the cache to the filesystem.
I also tried removing memoization for `has_no_changes', but a
local copy of the R repository[1] was not close to finishing within
10 hours on my system.
[1] http://svn.r-project.org/R
[2] file:// repos causes libsvn to use more memory internally
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Cc: Hin-Tak Leung <htl10@users.sourceforge.net>
There is no reason to keep entries in the %revs hash after we're
done processing a revision, so allow entries to be freed as
processing continues.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
This override was probably never necessary and was most likely a
no-op, as it does not appear to do anything in SVN::Ra itself.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Despite attempting to use local memory pools everywhere we can
(including our call to SVN::Ra::do_update and all subsequent reporter
calls), there does not appear to be a way to force the Git::SVN::Fetcher
callbacks to use a pool other than the per-SVN::Ra pool.
Git::SVN::Fetcher ends up using the main RA pool, which grows
monotonically in size for the lifetime of the RA object.
Thus the only way to free that memory appears to be to destroy and
recreate the RA connection at every --log-window-size interval.
This reduces memory usage over the course of fetching 10K revisions
using a test repository created with the script at the end of this
commit message.
As reported by time(1) on my x86-64 system:
before: 54024k
after: 28680k
Unfortunately, there remains some yet-to-be-tracked-down slow memory
growth which would be evident as the `nr' parameter increases in
the repository generation script:
-----------------------------8<------------------------------
set -e
tmp=$(mktemp -d svntestrepo-XXXXXXXX)
svnadmin create "$tmp"
repo=file://"$(cd $tmp && pwd)"
svn co "$repo" "$tmp/wd"
cd "$tmp/wd"
if ! test -f a
then
	> a
	svn add a
	svn commit -m 'A'
fi
nr=10000
while test $nr -gt 0
do
	echo $nr > a
	svn commit -q -m A
	nr=$((nr - 1))
done
echo "repository created in $repo"
-----------------------------8<------------------------------
Signed-off-by: Eric Wong <normalperson@yhbt.net>
In the documentation of the 'git svn dcommit' command, advertise that
the svn-remote.<name>.pushurl config key allows specifying the commit
URL for the entire SVN repository.
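For example (hypothetical URL, assuming the default remote name "svn"),
a repository whose fetch URL uses http can be configured to send
commits over svn+ssh:
    git config svn-remote.svn.pushurl svn+ssh://committer@svn.example.org/repo
    git svn dcommit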
Signed-off-by: Sveinung Kvilhaugsvik <sveinung84@users.sourceforge.net>
Signed-off-by: Eric Wong <normalperson@yhbt.net>