Since I wanted to limit the graph box size, I was resetting the
window after an index of 5. This resulted in lines joining commit
nodes passing over unrelated nodes. This change fixes that.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@gmail.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Running "git-am --resolved" without doing anything can create an empty
commit. Prevent it.
Thanks to Eric W. Biederman for spotting this.
Signed-off-by: Junio C Hamano <junkio@cox.net>
This lowers the default merge threshold score to 75% from the
earlier 80%. The break threshold stays the same at 50% for now,
but we might want to revisit it (and the rename detection limit
as well).
* break score: at least this much editing (both insertion of new
material and deletion of old material) must be present in the file
before we consider that this _might_ be a rewrite and break the
filepair.
* merge score: after a filepair is broken by the above criterion
and goes through rename detection, if its pieces did not match
other files as renames or copies, we merge them back into one as
if nothing happened. If the filepair had at least this much
deletion of old material, however, we record it as completely
rewritten with dissimilarity index X% when we do so (a sketch
follows this list).
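As an illustrative sketch (hypothetical names; the real diffcore
code uses scaled integer scores rather than percentages), the two
thresholds act like this:

    static int should_break(unsigned edited, unsigned size,
                            unsigned break_score)
    {
        /* at least break_score% of the file has been edited */
        return edited * 100 >= break_score * size;
    }

    static int is_total_rewrite(unsigned deleted, unsigned size,
                                unsigned merge_score)
    {
        /* enough old material deleted to record the merged-back
           filepair as a complete rewrite with a dissimilarity
           index */
        return deleted * 100 >= merge_score * size;
    }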
The updated delta code by Nico is so good that what we earlier
thought to be a complete rewrite now reuses a lot more from the
source material (reducing the counted "delete"), so this
adjustment is needed to keep the perceived behaviour similar to
what we had earlier.
Signed-off-by: Junio C Hamano <junkio@cox.net>
The previous one wrongly coalesced a span with the next one even
though the span being added did not reach it.
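In code terms, the corrected condition looks roughly like this
(hypothetical types and helper):

    struct span { unsigned long begin, end; struct span *next; };

    /* hypothetical helper that links a fresh span in before "next" */
    static void insert_span_before(struct span *next,
                                   unsigned long begin,
                                   unsigned long end);

    static void add_span(struct span *next,
                         unsigned long begin, unsigned long end)
    {
        if (end >= next->begin) {      /* the new span reaches it */
            if (begin < next->begin)
                next->begin = begin;
            if (end > next->end)
                next->end = end;
        } else {
            /* does not reach: keep it as a separate span */
            insert_span_before(next, begin, end);
        }
    }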
Signed-off-by: Junio C Hamano <junkio@cox.net>
On Windows you cannot remove the current or an opened directory,
an open file, a running program, a loaded library, etc.
[jc: signoffs? With a minor quoting fix.]
Signed-off-by: Junio C Hamano <junkio@cox.net>
With the finer grained delta algorithm, count-delta algorithm
started overcounting copied source material, since the new delta
output tends to reuse the same source range more than once and
more aggressively. This broke an earlier assumption that the
number of bytes copied out from the source buffer is a good
approximation of how much source material actually remains in
the result.
This uses a fairly inefficient algorithm to keep track of ranges
of source material that are actually copied out to the
destination buffer. With this tweak, the obvious rename/break
detection tests in the testsuite start to work again.
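For illustration, the bookkeeping amounts to keeping a list of
copied source ranges and summing their coverage (hypothetical
types):

    struct range { unsigned long begin, end; struct range *next; };

    /* total number of source bytes covered by the recorded ranges,
       i.e. how much source material survives in the result */
    static unsigned long count_copied(const struct range *list)
    {
        unsigned long total = 0;
        for (; list; list = list->next)
            total += list->end - list->begin;
        return total;
    }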
Signed-off-by: Junio C Hamano <junkio@cox.net>
This applies the same hashing algorithm to the "preferred base
tree" objects and the incoming pathnames, to group the same
files from different revs together, while spreading files with
the same basename in different directories.
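A sketch of the idea (the mixing here is illustrative, not the
exact function used):

    #include <string.h>

    /* hash mostly on the basename so the same file from different
       revs lands in the same group, but fold in the directory part
       so same-named files in different directories spread out */
    static unsigned int name_hash(const char *name)
    {
        const char *slash = strrchr(name, '/');
        const char *base = slash ? slash + 1 : name;
        const char *p;
        unsigned int hash = 0;

        for (p = base; *p; p++)
            hash = hash * 31 + (unsigned char)*p;
        for (p = name; p < base; p++)
            hash = (hash << 1) ^ (unsigned char)*p;
        return hash;
    }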
Signed-off-by: Junio C Hamano <junkio@cox.net>
Since we sort objects by type, hash, preferredness and then
size, after we have a delta against preferred base, there is no
point trying a delta with a non-preferred base. This seems to
save expensive calls to diff-delta, and it also seems to save
output space.
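A sketch of the resulting pruning rule (hypothetical field names):

    struct object_entry {
        struct object_entry *delta;   /* currently chosen base */
        unsigned preferred : 1;       /* preferred-base object? */
    };

    /* with the sort order above, once we hold a delta against a
       preferred base there is no point trying later,
       non-preferred candidates */
    static int worth_trying(const struct object_entry *entry,
                            const struct object_entry *candidate)
    {
        if (entry->delta && entry->delta->preferred &&
            !candidate->preferred)
            return 0;
        return 1;
    }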
Signed-off-by: Junio C Hamano <junkio@cox.net>
This implements "eye candy" similar to that of pack-objects and
unpack-objects, to entertain users while a large tree is being
checked out after a clone or a pull.
Signed-off-by: Junio C Hamano <junkio@cox.net>
New tests are added to the git-rm test case to cover this as well.
Signed-off-by: Carl Worth <cworth@cworth.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This adds a git-rm command which provides convenience similar to
git-add, (and a bit more since it takes care of the rm as well if
given -f).
Like git-add, git-rm expands the given path names through
git-ls-files. This means it only acts on files listed in the
index. It also acts recursively on directories by default (no -r
needed, unlike rm itself). When it recurses, it does not
remove empty directories that are left behind.
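For example (illustrative paths):

    $ git-rm stale-file.c        # drop it from the index only
    $ git-rm -f stale-file.c     # remove the file itself as well
    $ git-rm old-subdir          # recurses by default; no -r needed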
Signed-off-by: Junio C Hamano <junkio@cox.net>
Unless the --no-tags flag was given, git-fetch always tried to
follow remote tags that point at the commits we picked up.
It is not very useful to pick up tags from the remote unless we
are storing the fetched branch head in a local tracking branch.
This is especially true if the fetch is done to merge the remote
branch into our current branch on a one-shot basis (i.e. "please
pull"), and it is even harmful if the remote repository has many
irrelevant tags.
This proposed update disables the automated tag following unless
we are storing a fetched branch head in a local tracking branch.
Signed-off-by: Junio C Hamano <junkio@cox.net>
This updates the progress output to match the "every second or
every percent, whichever comes first" policy used by
unpack-objects, as
discussed on the list.
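The policy can be sketched like this (hypothetical names):

    #include <stdio.h>
    #include <time.h>

    /* report when the percentage changes or when a second has
       passed, whichever comes first */
    static void display_progress(unsigned int n, unsigned int total)
    {
        static unsigned int last_percent = 101; /* force 1st print */
        static time_t last_time;
        unsigned int percent = n * 100 / total;
        time_t now = time(NULL);

        if (percent != last_percent || now != last_time) {
            last_percent = percent;
            last_time = now;
            fprintf(stderr, "%4u%% (%u/%u) done\r",
                    percent, n, total);
        }
    }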
Signed-off-by: Junio C Hamano <junkio@cox.net>
If that pack is big, it takes significant time to write and might
benefit from some more eye candy as well. This is however disabled
when the pack is written to stdout since in that case the output is
usually piped into unpack_objects which already does its own progress
reporting.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This provides a stable and simpler progress reporting mechanism
that updates progress as often as possible, but no more than once
a second. The deltification phase is also made more
interesting to watch (since repacking a big repository and only seeing a
dot appear once every many seconds is rather boring and doesn't provide
much food for anticipation).
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
"empty ident not allowed" error makes commit-tree fail, so we
are already safer in that we would not end up with commit
objects that have bogus names on the author or committer fields.
However, before commit-tree is called there are already changes
made to the index file and the working tree. The operation can
be resumed after fixing the environment problem, but when this
triggers for a newcomer with an unusable gecos field, the first
question becomes "what did I lose and how do I recover?".
This patch modifies some Porcelainish commands to verify
GIT_COMMITTER_IDENT as soon as we know we are going to make some
commits, before doing much damage, to prevent such confusion.
Signed-off-by: Junio C Hamano <junkio@cox.net>
The previous one warned people upfront to encourage fixing their
environment early, but some people just use repositories and git
tools read-only without making any changes, and in such a case
there is not much point insisting on them having a usable ident.
This round attempts to delay the error until either "git-var"
asks for the ident explicitly or "commit-tree" wants to use it.
Signed-off-by: Junio C Hamano <junkio@cox.net>
It appears that some people who did not care about having bogus
names in their own commit messages were bitten by the recent
change to require a sane environment [*1*].
While it was a good idea to prevent people from using bogus
names to create commits and sign-offs, the error message
is not very informative. This patch attempts to warn upfront and
hint at how people can fix their environments.
[Footnote]
*1* The thread is this one.
http://marc.theaimsgroup.com/?t=113868084800004
Especially this message.
http://marc.theaimsgroup.com/?m=113932830015032
Signed-off-by: Junio C Hamano <junkio@cox.net>
This tries to rework the solution for the excess delta chain
problem. An earlier commit worked around it ``cheaply'', but
repeated repacking risks unbounded growth of delta chains.
This version counts the length of the delta chain we are reusing
from the existing pack, and makes sure a base object that already
has a sufficiently long delta chain does not get deltified.
Signed-off-by: Junio C Hamano <junkio@cox.net>
A new flag -q makes the underlying pack-objects less chatty.
A new flag -f forces deltas to be recomputed from scratch.
Signed-off-by: Junio C Hamano <junkio@cox.net>
This introduces the --no-reuse-delta option to disable the reuse
of existing deltas, which is a large part of the optimization
introduced by this series. This may become necessary if repeated
repacking makes the delta chain too long. With this, the output
of the command becomes identical to that of the older
implementation, but the performance suffers greatly.
It still allows reusing non-deltified representations; there is
no point in uncompressing and recompressing the whole text.
It also adds a couple more statistics outputs, while squelching
them under the -q flag, which the last round forgot to do.
$ time old-git-pack-objects --stdout >/dev/null <RL
Generating pack...
Done counting 184141 objects.
Packing 184141 objects....................
real 12m8.530s user 11m1.450s sys 0m57.920s
$ time git-pack-objects --stdout >/dev/null <RL
Generating pack...
Done counting 184141 objects.
Packing 184141 objects.....................
Total 184141, written 184141 (delta 138297), reused 178833 (delta 134081)
real 0m59.549s user 0m56.670s sys 0m2.400s
$ time git-pack-objects --stdout --no-reuse-delta >/dev/null <RL
Generating pack...
Done counting 184141 objects.
Packing 184141 objects.....................
Total 184141, written 184141 (delta 134833), reused 47904 (delta 0)
real 11m13.830s user 9m45.240s sys 0m44.330s
There is one remaining issue when the --no-reuse-delta option is
not used: it can create delta chains that are deeper than specified.
A<--B<--C<--D E F G
Suppose we have a delta chain A to D (A is stored in full, either
in a pack or as a loose object; B is a depth-1 delta relative to
A, C is a depth-2 delta relative to B...) with loose objects E, F,
and G.
And we are going to pack all of them.
B, C and D are left as deltas against A, B and C respectively.
So A, E, F, and G are examined for deltification, and let's say
we decided to keep E expanded, and store the rest as deltas like
this:
E<--F<--G<--A
Oops. We ended up making D a bit too deep, didn't we? B, C and
D form a chain on top of A!
This is because we did not know what the final depth of A would
be, when we checked objects and decided to keep the existing
delta. Unfortunately, deferring the decision until just before
the deltification is not an option. To be able to make B, C,
and D candidates for deltification with the rest, we would need
to know their type and final expanded size; but the major part of
the optimization comes from the fact that we do not read the
delta data to find that out -- getting the final size is quite an
expensive operation.
To prevent this from happening, we should keep A from being
deltified. But how would we tell that, cheaply?
To do this most precisely, after check_object() runs, each
object that is used as the base object of some existing delta
needs to be marked with the maximum depth of the objects we
decided to keep deltified (in this case, D is depth 3 relative
to A, so if no other delta chain that is longer than 3 based on
A exists, mark A with 3). Then when attempting to deltify A, we
would take that number into account to see if the final delta
chain that leads to D becomes too deep.
However, this is a bit cumbersome to compute, so we would cheat
and reduce the maximum depth for A arbitrarily to depth/4 in
this implementation.
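In other words, the depth budget for such a base object is simply
tightened (hypothetical field name):

    struct object_entry {
        unsigned reused_delta_base : 1; /* base of a reused chain */
    };

    /* cheat: give bases of reused delta chains only a quarter of
       the normal depth budget so stacked chains stay shallow */
    static unsigned int max_depth_for(const struct object_entry *e,
                                      unsigned int depth)
    {
        return e->reused_delta_base ? depth / 4 : depth;
    }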
Signed-off-by: Junio C Hamano <junkio@cox.net>
When generating a new pack, notice if objects we need are already
present in existing packs. If an object is stored deltified,
and its base object is also what we are going to pack, then
reuse the existing deltified representation unconditionally,
bypassing all the expensive find_deltas() and try_deltas()
calls.
Also, notice if what we are going to write out exactly matches
what is already in an existing pack (either deltified or just
compressed). In such a case, we can just copy it instead of
going through the usual uncompressing & recompressing cycle.
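The reuse test can be sketched as (hypothetical field names):

    struct packed_git;   /* existing pack the object lives in */

    struct object_entry {
        struct packed_git *in_pack;       /* NULL if loose */
        struct object_entry *delta_base;  /* NULL if not a delta */
        unsigned to_be_packed : 1;
    };

    /* reuse the packed bytes when the object is already in a pack
       and either is not a delta, or its base is also being packed */
    static int can_reuse(const struct object_entry *e)
    {
        if (!e->in_pack)
            return 0;
        return !e->delta_base || e->delta_base->to_be_packed;
    }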
Without this patch, in the linux-2.6 repository with about 1500
loose objects and a single mega pack:
$ git-rev-list --objects v2.6.16-rc3 >RL
$ wc -l RL
184141 RL
$ time git-pack-objects p <RL
Generating pack...
Done counting 184141 objects.
Packing 184141 objects....................
a1fc7b3e537fcb9b3c46b7505df859f0a11e79d2
real 12m4.323s
user 11m2.560s
sys 0m55.950s
With this patch, the same input:
$ time ../git.junio/git-pack-objects q <RL
Generating pack...
Done counting 184141 objects.
Packing 184141 objects.....................
a1fc7b3e537fcb9b3c46b7505df859f0a11e79d2
Total 184141, written 184141, reused 182441
real 1m2.608s
user 0m55.090s
sys 0m1.830s
Signed-off-by: Junio C Hamano <junkio@cox.net>
The real problem that triggered an earlier fix was that an
alternate entry was pointing at a removed directory. Complaining
about an object/pack directory that cannot be opendir-ed produces
noise in an ancient repository that does not have an object/pack
directory and has never been packed.
Detect the real user error and report it. Also, if opendir fails
for other reasons (e.g. no read permission), report that as well.
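A sketch of the intended behaviour:

    #include <dirent.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* a missing pack directory is normal for an ancient repository
       that has never been packed; a missing alternate is a user
       error, and any other opendir failure is worth reporting */
    static void scan_pack_dir(const char *path, int is_alternate)
    {
        DIR *dir = opendir(path);

        if (!dir) {
            if (errno != ENOENT)
                fprintf(stderr, "unable to open %s: %s\n",
                        path, strerror(errno));
            else if (is_alternate)
                fprintf(stderr, "alternate points at a removed "
                        "directory: %s\n", path);
            return;
        }
        /* ... scan for pack files ... */
        closedir(dir);
    }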
Spotted by Andrew Vasquez <andrew.vasquez@qlogic.com>.
Signed-off-by: Junio C Hamano <junkio@cox.net>
git-cvsserver is highly functional. However, not all methods are implemented,
and for those methods that are implemented, not all switches are implemented.
All the common read operations are implemented, and add/remove/commit are
supported.
Testing has been done using both the command-line CVS client and
the Eclipse CVS plugin. Most functionality works fine with both
of these clients.
Currently git-cvsserver only works over SSH connections; see the
Documentation for more details on how to configure your client.
It does not support pserver for anonymous access, but that should
not be hard to implement; anonymous access will need tighter
input validation.
In our very informal tests, it seems to be significantly faster than a real
CVS server.
This utility depends on a version of git-cvsannotate that supports -S and on
DBD::SQLite.
Licensed under GPLv2. Copyright The Open University UK.
Authors: Martyn Smith <martyn@catalyst.net.nz>
Martin Langhoff <martin@catalyst.net.nz>
Signed-off-by: Martin Langhoff <martin@catalyst.net.nz>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Since Ryan's git-annotate is much faster and has support for
renames, it is likely to go into mainstream git soon. Adapt it a
little to work with gitcvs, and actually use it.
Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Stephen C. Tweedie noticed that we give up running rev-list when
we see too many refs on the remote side. Limit the number of
negative references we give to rev-list and continue.
Not sending any negative references to rev-list is very bad --
we may be pushing a ref that is new to the other end.
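The fix amounts to capping the count instead of refusing to run
rev-list at all (illustrative limit and names):

    #define MAX_NEGATIVES 64   /* illustrative cap */

    /* pass at most MAX_NEGATIVES "^<sha1>" arguments to rev-list;
       having some negatives is what matters, not having them all */
    static int add_negative_refs(const char **argv, int argc,
                                 const char **negatives, int nr)
    {
        int i;

        if (nr > MAX_NEGATIVES)
            nr = MAX_NEGATIVES;
        for (i = 0; i < nr; i++)
            argv[argc++] = negatives[i];
        return argc;
    }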
Signed-off-by: Junio C Hamano <junkio@cox.net>
Indexing based on adler32 has a match precision based on the block size
(currently 16). Lowering the block size would produce smaller deltas
but the indexing memory and computing cost increases significantly.
For optimal delta results the indexing block size should be 3
with an increment of 1 (instead of 16 and 16). With such low
parameters the adler32 becomes a clear overhead, increasing the
time for git-repack by a factor of 3. And with such small blocks
the adler32 is not very useful, as all of the block bits can be
used directly.
This patch replaces the adler32 with an open-coded index value
based directly on 3 characters. This provides sufficient bits
for hashing and allows optimal deltas at a reasonable CPU cost.
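A sketch of such an open-coded index (the exact shifts and
masking in the real code differ):

    /* derive a hash table index directly from 3 consecutive bytes;
       with blocks this small, the bytes themselves already provide
       enough bits, so no adler32 pass is needed */
    static unsigned int index3(const unsigned char *buf,
                               unsigned int mask)
    {
        return (((unsigned int)buf[0] << 8) ^
                ((unsigned int)buf[1] << 4) ^
                 (unsigned int)buf[2]) & mask;
    }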
The resulting packs are 6% smaller on average. The increase in
CPU time is about 25%, but this cost is now hidden by the delta
reuse patch, while the savings on data transfers are always there.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Testing for realloc and the size limit can be done with only one
test per loop iteration. Make it so, and fix a theoretical
off-by-one comparison error in the process.
The output buffer memory allocation is also bounded by max_size when
specified.
Finally, make some variables unsigned to allow handling files up
to 4GB in size instead of 2GB.
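A sketch of the combined check (hypothetical names): the hot path
pays for a single comparison, and the max_size test only runs on
the rare grow path.

    #include <stdlib.h>

    /* sizes are unsigned long, so inputs up to 4GB representable */
    static unsigned char *ensure_room(unsigned char *out,
                                      unsigned long pos,
                                      unsigned long *alloc,
                                      unsigned long max_size)
    {
        if (pos < *alloc)        /* the one test per iteration */
            return out;
        if (max_size && *alloc >= max_size)
            return NULL;         /* output would exceed max_size */
        *alloc *= 2;
        return realloc(out, *alloc);
    }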
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>