When we pack everything into one big pack with "git repack
-Ad", any unreferenced objects in to-be-deleted packs are
exploded into loose objects, with the intent that they will
be examined and possibly cleaned up by the next run of "git
prune".
Since the exploded objects will receive the mtime of the
pack from which they come, if the source pack is old, those
loose objects will end up pruned immediately. In that case,
it is much more efficient to skip the exploding step
entirely for these objects.
This patch teaches pack-objects to receive the expiration
information and avoid writing these objects out. It also
teaches "git gc" to pass the value of gc.pruneexpire to
repack (which in turn learns to pass it along to
pack-objects) so that this optimization happens
automatically during "git gc" and "git gc --auto".
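For illustration (the value shown is just gc.pruneexpire's usual
default):

    git config gc.pruneexpire 2.weeks.ago
    git gc   # unreferenced objects older than the expiry are no longer exploded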
Signed-off-by: Jeff King <peff@peff.net>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The existing label import code looks at each commit being
imported, and then checks for labels at that commit. This
doesn't work well in practice, though, because it drops labels
applied to changelists that have already been imported, a
common pattern.
This change adds a new --import-labels option. With this option,
at the end of the sync, git p4 gathers the sets of labels in p4 and
in git, and then creates a git tag for each p4 label missing from git.
This means that tags created on older changelists are
still imported.
Tags that could not be imported are added to an ignore
list.
The same sets of git and p4 tags and labels can also be used to
derive a list of git tags to export to p4. This is enabled with
--export-labels in 'git p4 submit'.
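For example, with the new options:

    git p4 sync --import-labels
    git p4 submit --export-labels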
Signed-off-by: Luke Diamand <luke@diamand.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If P4EDITOR is defined, the tests will fail when "git p4" starts an
editor.
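One way to make the tests independent of the environment, for
instance, is to pin it to a no-op command before running them:

    P4EDITOR=true
    export P4EDITOR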
Signed-off-by: Luke Diamand <luke@diamand.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The print_feed_meta() subroutine generates links for feeds with and
without merges, in RSS and Atom formats. However, because %href_params
was not properly reset, it generated links with "--no-merges" for all
but the very first link.
Before:
<link rel="alternate" title="[..] - Atom feed" href="/?p=.git;a=atom;opt=--no-merges" type="application/atom+xml" />
<link rel="alternate" title="[..] - Atom feed (no merges)" href="/?p=.git;a=atom;opt=--no-merges" type="application/atom+xml" />
After:
<link rel="alternate" title="[..] - Atom feed" href="/?p=.git;a=atom" type="application/atom+xml" />
<link rel="alternate" title="[..] - Atom feed (no merges)" href="/?p=.git;a=atom;opt=--no-merges" type="application/atom+xml" />
Signed-off-by: Sebastian Pipping <sebastian@pipping.org>
Signed-off-by: Jakub Narebski <jnareb@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Prefer:
test_line_count <OP> COUNT FILE
over:
test $(wc -l <FILE) <OP> COUNT
(or similar usages) in several tests.
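A typical conversion then ends up looking something like:

    git ls-files >actual &&
    test_line_count = 3 actual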
Signed-off-by: Stefano Lattarini <stefano.lattarini@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Speed up prepare_revision_walk() by adding commits to the commit_list
without sorting and then sorting the list in one go at the end. Thanks
to mergesort() working behind the scenes, this is a lot faster for
large numbers of commits than the current insertion sort.
Also introduce and use commit_list_reverse() to keep the ordering of
commits sharing the same commit date unchanged. That's because
commit_list_insert_by_date() sorts commits by descending date, but
adds later entries with the same date last, while commit_list_insert()
always inserts entries at the top. Reversing the list first means the
following commit_list_sort_by_date() keeps the order of entries
sharing the same date.
Jeff's test case, in a repo with lots of refs, was to run:
# make a new commit on top of HEAD, but not yet referenced
sha1=`git commit-tree HEAD^{tree} -p HEAD </dev/null`
# now do the same "connected" test that receive-pack would do
git rev-list --objects $sha1 --not --all
With a git.git with a ref for each revision, master needs (best of
five):
real 0m2.210s
user 0m2.188s
sys 0m0.016s
And with this patch:
real 0m0.480s
user 0m0.456s
sys 0m0.020s
Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Replace the insertion sort in commit_list_sort_by_date() with a
call to the generic mergesort function. This sets the stage for
using commit_list_sort_by_date() for larger lists, as shown in
the next patch.
Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This adds a generic bottom-up mergesort implementation for singly linked
lists. It was inspired by Simon Tatham's webpage on the topic[1], but
not so much by his implementation -- for no good reason, really, just a
case of NIH.
[1] http://www.chiark.greenend.org.uk/~sgtatham/algorithms/listsort.html
Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The allocations made by unpack_nondirectories() using create_ce_entry()
are never freed.
In the non-merge case, we duplicate them using add_entry() and later
only look at the first allocated element (src[0]), perhaps even only
by mistake. Split out the actual addition from add_entry() into the
new helper do_add_entry() and call this non-duplicating function
instead of add_entry() to avoid the leak.
Valgrind reports this for the command "git archive v1.7.9" without
the patch:
==13372== LEAK SUMMARY:
==13372== definitely lost: 230,986 bytes in 2,325 blocks
==13372== indirectly lost: 0 bytes in 0 blocks
==13372== possibly lost: 98 bytes in 1 blocks
==13372== still reachable: 2,259,198 bytes in 3,243 blocks
==13372== suppressed: 0 bytes in 0 blocks
And with the patch applied:
==13375== LEAK SUMMARY:
==13375== definitely lost: 65 bytes in 1 blocks
==13375== indirectly lost: 0 bytes in 0 blocks
==13375== possibly lost: 0 bytes in 0 blocks
==13375== still reachable: 2,364,417 bytes in 3,245 blocks
==13375== suppressed: 0 bytes in 0 blocks
Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
src[0] points to the index entry in the merge case and to the first
tree to unpack in the non-merge case. We only want to mark the index
entry, so check first if we're merging.
Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If the base argument has a "/" character, then only iterate over the
reference subdir whose name is the part up to the last "/".
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Store references hierarchically in a tree that matches the
pseudo-directory structure of the reference names. Add a new kind of
ref_entry (with flag REF_DIR) to represent a whole subdirectory of
references. Sort ref_dirs one subdirectory at a time.
NOTE: the dirs can now be sorted as a side-effect of other function
calls. Therefore, it would be problematic to do something from an
each_ref_fn callback that could provoke the sorting of a directory
that is currently being iterated over (i.e., the directory containing
the entry that is being processed or any of its parents).
This is a bit far-fetched, because a directory is always sorted just
before being iterated over. Therefore, read-only accesses cannot
trigger the sorting of a directory whose iteration has already
started. But if a callback function were to add a reference to a parent
directory of the reference being iterated over and then try to resolve
a reference under that directory, a re-sort could be triggered and
cause the iteration to work incorrectly.
Nevertheless, add a comment in refs.h warning against modifications
during iteration.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Use the more usual indexing idiom for clarity.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This purely textual change is in preparation for storing references
hierarchically, when the old ref_array structure will represent one
"directory" of references. Rename functions that deal with this
structure analogously, and also rename the structure's "refs" member
to "entries".
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This change is obviously silly by itself, but it is a step towards
adding a second member to the union.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Return 0 (instead of -1) for zero-length components. Move the
interpretation of zero-length components as illegal to
check_refname_format().
This will make it easier to extend check_refname_format() to also
check whether directory names are valid.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a function free_ref_entry(). This function will become nontrivial
when ref_entry (soon) becomes polymorphic.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Save a bunch of lines of code and a couple of strlen() calls.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It costs a bit of boilerplate, but it means that the function can be
ignorant of how cached refs are stored.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Extract function do_for_each_ref_in_arrays() from do_for_each_ref().
The new function will be a useful building block for storing refs
hierarchically.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Extract function do_for_each_ref_in_array() from do_for_each_ref().
The new function will be a useful building block for storing refs
hierarchically.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Set and clear current_ref within do_one_ref() instead of setting it
here and leaving it to somebody else to clear it.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Reorder definitions in file: first check_refname_format() and helper
functions, then the functions for managing the ref_entry and ref_array
data structures, then ref_cache, then the more "business-logicky"
stuff. No code is changed.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Document the default pager and editor chosen at compile time in the
git-var(1) manpage so users curious about what command _this_ copy of
git will fall back to when EDITOR, VISUAL, and PAGER are unset can
find the answer quickly.
In builds leaving those settings uncustomized, this patch makes the
manpage continue to say "usually vi" and "usually less" so the
formatted documentation is usable for a wide audience including users
of custom builds that change those settings. If you would like your
copy of the docs to be less noncommittal, you will need to set
DEFAULT_PAGER=less and DEFAULT_EDITOR=vi explicitly.
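Users can also query the fallback at run time, e.g. (assuming no
overriding environment variables or configuration):

    git var GIT_EDITOR
    git var GIT_PAGER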
Suggested-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is the main test case for the original problem that triggered this
patch series. We create a repo with 50k tags and then test whether
git-clone over the smart HTTP protocol succeeds.
Note that we construct the repo in a slightly different way than the
original script used to reproduce the problem. This is because the
original script just created 50k tags all pointing to the same commit,
so if there was a bug where remote-curl.c was not passing all the refs
to fetch-pack we wouldn't know. The clone would succeed even if only one
tag was passed, because all the other tags were pointing at the same SHA
and would be considered present.
Instead we create a repo with 50k independent (dangling) commits and
then tag each of those commits with a unique tag. This way if one of the
tags is not given to fetch-pack, later stages of the clone would
complain about it.
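Roughly, the construction looks like this (the test script uses its
own helpers; the tag names here are only illustrative):

    i=0
    while test $i -lt 50000
    do
        i=$((i + 1)) &&
        commit=$(echo "$i" | git commit-tree HEAD^{tree}) &&
        git tag "t$i" "$commit" || break
    done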
This allows us to test both that the command line overflow was fixed
and that it was fixed in a way that doesn't leave out any of the refs.
Signed-off-by: Ivan Todoroski <grnch@gmx.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
These test cases focus only on testing the parsing of refs on stdin,
without bothering with the rest of the fetch-pack machinery. We pass in
the refs using different combinations of command line and stdin and then
we watch fetch-pack's stdout to see whether it prints all the refs we
specified (but we ignore their order).
Signed-off-by: Ivan Todoroski <grnch@gmx.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Now that we can throw an arbitrary number of refs at fetch-pack using
its --stdin option, we use it in the remote-curl helper to bypass the
OS command line length limit.
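For illustration (the ref names and $url are placeholders), the refs
can now be fed on stdin rather than on the command line:

    printf '%s\n' refs/heads/master refs/tags/v1.0 |
    git fetch-pack --stdin "$url"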
Signed-off-by: Ivan Todoroski <grnch@gmx.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The syntax for the use of mark references in fast-import
demands either a SP (space) or LF (end-of-line) after
a mark reference. Fast-import does not complain when garbage
appears after a mark reference in some cases.
Factor out parsing of mark references and complain if
errant characters are found. Also be a little more careful
when parsing "inline" and SHA1s, complaining if extra
characters appear or if the form of the dataref is unrecognized.
Buggy input can cause fast-import to produce the wrong output,
silently, without error. This makes it difficult to track
down buggy generators of fast-import streams. An example is
seen in the last line of this commit command:
commit refs/heads/S2
committer Name <name@example.com> 1112912893 -0400
data <<COMMIT
commit message
COMMIT
from :1M 100644 :103 hello.c
It is missing a newline and should be:
[...]
from :1
M 100644 :103 hello.c
What fast-import does is to produce a commit with the same
contents for hello.c as in refs/heads/S2^. What the buggy
program was expecting was the contents of blob :103. While
the resulting commit graph looked correct, the contents in
some commits were wrong.
Signed-off-by: Pete Wyckoff <pw@padd.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Check if we even have a parameter before checking its value. Running
this command without any arguments may not make a lot of sense, but
reacting with a segmentation fault is unduly harsh.
While we're at it, avoid casting argv by declaring it const right away.
Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add void to make it match its definition in submodule.c.
Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Print out a trailing newline when --show-prefix is run with the cwd
at the top level of the tree, which results in an empty prefix.
The behavior now matches that of --show-cdup.
Fixes an expected failure in t1501.
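A quick way to see the change from the top level of a repository:

    git rev-parse --show-prefix | wc -l   # now 1, matching --show-cdup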
Signed-off-by: Ross Lagerwall <rosslagerwall@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
HTTP authentication is currently handled by get_refs and fetch_ref, but
not by fetch_object, fetch_pack or fetch_alternates. In the
single-threaded case, this is not an issue, since get_refs is always
called first. It recognizes the 401 and prompts the user for
credentials, which will then be used subsequently.
If the curl multi interface is used, however, only the multi handle used
by get_refs will have credentials configured. Requests made by other
handles fail with an authentication error.
Fix this by setting CURLOPT_USERPWD whenever a slot is requested.
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Create a repo with multiple loose objects in order to demonstrate http
authentication breakage.
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a Makefile to run subtree tests. This is largely copied
from the standard test suite with irrelevant targets removed
and some paths altered to account for where subtree tests live.
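With it in place the subtree tests can be run on their own, for
example (directory layout assumed from this series):

    cd contrib/subtree/t
    make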
Signed-off-by: David A. Greene <greened@obbligato.org>
Build git-subtree in its contrib directory and install from there.
The main Makefile no longer discovers subcommands built in the main
build area, so we cannot count on it to install git-subtree. The user
should run "make && make install" in contrib/subtree to install git-subtree.
Change the rule to install the git-subtree manpage. The main
Documentation area doesn't directly support installing documentation
from other directories so the user will have to do that from within
contrib/subtree for now.
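For example, the command is now built and installed from the contrib
area:

    cd contrib/subtree
    make
    make install

and the manpage is installed from the same directory in a similar way.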
Signed-off-by: David A. Greene <greened@obbligato.org>
Include config.mak.autogen in the git-subtree contrib area to pick up
settings for prefix and other such things.
Signed-off-by: David A. Greene <greened@obbligato.org>
Remove various files that simply duplicate functionality already
provided by the main project files.
Signed-off-by: David A. Greene <greened@obbligato.org>
Set TEST_DIRECTORY to the main git test area. This allows the
git-subtree out-of-tree tests to run correctly.
Signed-off-by: David A. Greene <greened@obbligato.org>
Redo the hashing loop in xdl_hash_record in a way that loads an entire
'long' at a time, using masking tricks to see when and where we found
the terminating '\n'.
I stole inspiration and code from the posts by Linus Torvalds around
https://lkml.org/lkml/2012/3/2/452
https://lkml.org/lkml/2012/3/5/6
His method reads the buffer in sizeof(long) increments, and may thus
overrun it by at most sizeof(long)-1 bytes before it sees the final
newline (or hits the buffer length check). I considered padding out
all buffers by a suitable amount to "catch" the overrun, but
* this does not work for mmap()'d buffers: if you map 4096+8 bytes
from a 4096 byte file, accessing the last 8 bytes results in a
SIGBUS on my machine; and
* it would also be extremely ugly because it intrudes deep into the
unpacking machinery.
So I adapted it to not read beyond the buffer at all. Instead, it
reads the final partial word byte-by-byte and strings it together.
Then it can use the same logic as before to finish the hashing.
So far we enable this only on x86_64, where it provides nice speedup
for diff-related work:
Test origin/next tr/xdiff-fast-hash
-----------------------------------------------------------------------------
4000.1: log -3000 (baseline) 0.07(0.05+0.02) 0.08(0.06+0.02) +14.3%
4000.2: log --raw -3000 (tree-only) 0.37(0.33+0.04) 0.37(0.32+0.04) +0.0%
4000.3: log -p -3000 (Myers) 1.75(1.65+0.09) 1.60(1.49+0.10) -8.6%
4000.4: log -p -3000 --histogram 1.73(1.62+0.09) 1.58(1.49+0.08) -8.7%
4000.5: log -p -3000 --patience 2.11(2.00+0.10) 1.94(1.80+0.11) -8.1%
Perhaps other platforms could also benefit. However, it does NOT work
on big-endian systems!
[jc: minimum style and compilation fixes]
Signed-off-by: Thomas Rast <trast@student.ethz.ch>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When git-rebase--interactive stops due to a conflict and the only change
to be committed is in a submodule, the test for whether there is
anything to be committed ignores the staged submodule change. This
leads rebase to skip creating the commit for the change.
While unstaged submodule changes should be ignored to avoid needing to
update submodules during a rebase, it is safe to remove the
--ignore-submodules option to diff-index because --cached ensures that
it is only checking the index. This was discussed in [1] and a test is
included to ensure that unstaged changes are still ignored correctly.
[1] http://thread.gmane.org/gmane.comp.version-control.git/188713
Signed-off-by: John Keeping <john@keeping.me.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add missing options "--tags|--no-tags" and "--push".
Signed-off-by: Michael Schubert <mschub@elegosoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>