Commit Graph

9339 Commits

Author SHA1 Message Date
Nicolas Pitre
5c49c11686 pack-objects: better check_object() performances
With a large number of objects, check_object() is really thrashing the pack
sliding map and the filesystem cache.  It has a completely random access
pattern especially with old objects where delta replay jumps back and
forth all over the pack.

This patch improves things by:

 1) sorting objects by their offset in pack before calling check_object()
    so the pack access pattern is linear;

 2) recording the object type at add_object_entry() time since it is
    already known in most cases;

 3) recording the pack offset even for preferred_base objects;

 4) avoiding calls to sha1_object_info() if at all possible.

This limits pack accesses to the bare minimum and makes them perfectly
linear.

In the process check_object() was made clearer (to me at least).

Note: I thought about walking the sorted_by_offset list backward in
get_object_details() so if a pack happens to be larger than the available
file cache, then the cache would have been populated with useful data from
the beginning of the pack already when find_deltas() is called.  Strangely,
testing (on Linux) showed absolutely no performance difference.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-16 17:43:31 -07:00
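
A minimal sketch of point 1), with a hypothetical pared-down
object_entry (the real struct carries many more fields):

    #include <stdlib.h>

    struct object_entry {
        unsigned long in_pack_offset;
        /* ... */
    };

    /* Order entries by their offset in the pack so the subsequent
     * check_object() pass touches the pack strictly front to back. */
    static int offset_sort(const void *pa, const void *pb)
    {
        const struct object_entry *a = pa;
        const struct object_entry *b = pb;

        if (a->in_pack_offset != b->in_pack_offset)
            return a->in_pack_offset < b->in_pack_offset ? -1 : 1;
        return 0;
    }

A qsort(objects, nr_objects, sizeof(*objects), offset_sort) before the
check_object() pass then yields strictly increasing pack offsets.
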
Nicolas Pitre
54dab52ae8 add get_size_from_delta()
... which consists of existing code split out of packed_delta_info()
for other callers to use it as well.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-16 17:43:31 -07:00
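
For context, a sketch of the size headers such a helper parses: an
inflated git delta buffer starts with two little-endian base-128
varints, the base size followed by the result size (illustrative; the
real function also inflates enough of the packed delta to reach these
bytes):

    /* Parse one varint: 7 data bits per byte, low bytes first, high
     * bit set on all but the last byte. */
    static unsigned long get_delta_hdr_size(const unsigned char **datap)
    {
        const unsigned char *data = *datap;
        unsigned long size = 0;
        int shift = 0;
        unsigned char c;

        do {
            c = *data++;
            size |= (unsigned long)(c & 0x7f) << shift;
            shift += 7;
        } while (c & 0x80);
        *datap = data;
        return size;
    }

Calling it twice on the same cursor gives the base size, then the
result size.
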
Nicolas Pitre
a3fbf4dfe1 pack-objects: make in_pack_header_size a variable of its own
It currently aliases delta_size on the principle that reused deltas won't
go through the whole delta matching loop, hence delta_size was unused.
This is not true if a given delta doesn't find its base in the pack though.
But we need that information even for whole object data reuse.

Well, in short, the current state looks awful and is prone to bugs.  It just
works fine now because try_delta() tests trg_entry->delta before using
trg_entry->delta_size, but that is a bit subtle and I was wondering for a
while why things just worked fine... even if I'm guilty of having
introduced this abomination myself in the first place.

Let's do the sensible thing instead with no ambiguity, which is to have
a separate variable for in_pack_header_size.  This might even help future
optimizations.

While at it, let's reorder some struct object_entry members so they all
align well with their own width, regardless of the architecture or the
size of off_t.  Some memory saving is to be expected with this alone.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-16 17:43:31 -07:00
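
A sketch of the alignment point, with illustrative field names:
ordering members widest first keeps each one on its natural boundary,
so the compiler inserts no padding whether off_t is 4 or 8 bytes:

    #include <stdint.h>
    #include <sys/types.h>

    struct object_entry {
        off_t in_pack_offset;              /* widest members first */
        unsigned long size;
        unsigned long delta_size;
        unsigned long in_pack_header_size; /* now a field of its own */
        uint32_t hash;
        unsigned char in_pack_type;        /* narrow members last */
        unsigned char depth;
    };
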
Nicolas Pitre
81a216a5d6 pack-objects: get rid of create_final_object_list()
Because we don't have to know the SHA1 (hence the name) of the pack
up front anymore, let's get rid of yet another global sorted object list
and sort them only in write_index_file(), then compute the object list
SHA1 on the fly.

This has the advantage of saving another chunk of memory, and the sorted
list SHA1 won't be computed needlessly on servers during a fetch.

Of course the cunning plan is also to make write_index_file() much like
the function with the same name in index-pack.c, for eventual easy
sharing.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-16 17:43:31 -07:00
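
A sketch of hashing the object list on the fly, assuming a hypothetical
pared-down entry and the OpenSSL-style SHA1 API (illustrative, not the
committed code):

    #include <openssl/sha.h>

    struct object_entry {
        unsigned char sha1[20];
        /* ... */
    };

    /* Fold each object name into a running checksum while the index
     * entries are written out, instead of keeping a pre-sorted global
     * list around just to hash it at the end. */
    static void hash_object_names(struct object_entry **sorted, int nr,
                                  unsigned char out[20])
    {
        SHA_CTX ctx;
        int i;

        SHA1_Init(&ctx);
        for (i = 0; i < nr; i++)
            SHA1_Update(&ctx, sorted[i]->sha1, 20);
        SHA1_Final(out, &ctx);
    }
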
Nicolas Pitre
f7ae6a930a pack-objects: get rid of reuse_cached_pack
This capability is practically never useful, and therefore never tested,
because it is fairly unlikely that the requested pack will already be
available.  Furthermore it is of little gain over the ability to reuse
existing pack data.

In fact the ability to change delta type on the fly when reusing delta
data is a nice thing that has almost no cost and allows greater backward
compatibility with a client's capabilities than if the client is blindly
sent a whole pack without any discrimination.

And this "feature" is simply in the way of other cleanups.
Let's get rid of it.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-16 17:43:31 -07:00
Nicolas Pitre
9668cf59a8 pack-objects: clean up list sorting
Get rid of sort_comparator() as it imposes a run-time double-indirect
function call for little compile-time type-checking gain.

Also get rid of create_sorted_list() as it only has one user, which can
just as well do its sorting locally.  Eventually the list of
deltifiable objects might be shorter than the whole object list.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-16 17:43:31 -07:00
Nicolas Pitre
898b14cedc pack-objects: rework check_delta_limit usage
Objects that have delta "children" from pack data reuse must consider the
depth of their deepest child when they try to deltify themselves, so
that those children do not become too deep.

However, in the context of a "thin" pack, the delta children depth was
skipped entirely on the presumption that the pack was always going to be
exploded on the receiving end, hence the delta length wasn't an issue.

Now that we keep received packs as is and reuse pack data when repacking,
those packs do contain delta chains that are longer than expected. Worse,
those delta chains may even grow longer when the pack is further repacked
into another thin pack for a subsequent transmission.

So this patch restores strict delta length even for thin packs, and it
moves check_delta_limit() usage directly into the delta loop where it is
needed.  This way the delta_limit can be removed from struct object_entry
as well.  Oh and the initial value was wrong too.

The progress_interval() function was moved to a more logical location in
the process.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-16 17:43:31 -07:00
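
A sketch of such a depth check over reused delta children (structures
pared down, names illustrative):

    struct object_entry {
        struct object_entry *delta_child;   /* first child */
        struct object_entry *delta_sibling; /* next child of my base */
    };

    /* Depth of the deepest delta descendant below `me`: an object must
     * leave at least this much headroom under --depth when deltifying
     * itself, or its reused children would end up too deep. */
    static unsigned check_delta_limit(struct object_entry *me, unsigned n)
    {
        struct object_entry *child = me->delta_child;
        unsigned m = n;

        while (child) {
            unsigned c = check_delta_limit(child, n + 1);
            if (m < c)
                m = c;
            child = child->delta_sibling;
        }
        return m;
    }
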
Nicolas Pitre
adcc70950e pack-objects: equal objects in size should delta against newer objects
Before finding best delta combinations, we sort objects by name hash,
then by size, then by their position in memory.  Then we walk the list
backwards to test delta candidates.

We hope that a bigger size usually means a newer object.  But a bigger
address in memory does not mean a newer object.  So the last comparison
must be reversed.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-16 17:43:30 -07:00
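
An illustrative comparator over those three keys, assuming a
hypothetical entry that records a unique recency counter at add time
(the code being fixed tie-broke on memory addresses instead):

    struct entry {
        unsigned int hash;      /* name hash */
        unsigned long size;
        unsigned long recency;  /* grows as objects are added */
    };

    static int type_size_sort(const void *pa, const void *pb)
    {
        const struct entry *a = pa, *b = pb;

        if (a == b)
            return 0;
        if (a->hash != b->hash)
            return a->hash < b->hash ? -1 : 1;
        if (a->size != b->size)
            return a->size < b->size ? -1 : 1;
        /* The list is walked backwards, so newer objects must sort
         * later to be tried first. */
        return a->recency < b->recency ? -1 : 1;
    }
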
Nicolas Pitre
8a5a8d6c97 pack-objects: optimize preferred base handling a bit
Let's avoid some cycles when there is no base to test against, and avoid
unnecessary object lookups.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-16 17:43:30 -07:00
Nicolas Pitre
29b734e478 clean up add_object_entry()
This function used to call locate_object_entry_hash() _twice_ per added
object while only once should suffice. Let's reorganize that code a bit.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-11 19:35:25 -07:00
Nicolas Pitre
6e5417769c tests for various pack index features
This is a fairly complete list of tests for various aspects of pack
index versions 1 and 2.

Tests on index v2 include 32-bit and 64-bit offsets, as well as a nice
demonstration of the flawed repacking integrity checks that index
version 2 intends to solve over index version 1 with the per-object CRC.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-11 19:32:03 -07:00
Nicolas Pitre
39551b6926 use test-genrandom in tests instead of /dev/urandom
This way tests are completely deterministic and possibly more portable.

Signed-off-by: Nicolas Pitre <nico@cam.org>
2007-04-11 19:31:24 -07:00
Nicolas Pitre
2dca1af448 simple random data generator for tests
Reliance on /dev/urandom produces test vectors that are, well, random.
This can cause problems that are impossible to track down when the data
differs from one test invocation to another.

The goal is not to have random data to test, but rather to have a
convenient way to create sets of large files with non-compressible and
non-deltifiable data in a reproducible way.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-11 19:23:32 -07:00
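
A sketch of such a generator: a string-seeded linear congruential
generator (using the classic rand() constants; the actual tool may
differ) emits byte streams that look incompressible yet are identical
on every run and platform:

    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        unsigned long next = 0, count;
        const char *seed = argc > 1 ? argv[1] : "seed";

        while (*seed)
            next = next * 11 + *seed++;
        count = argc > 2 ? strtoul(argv[2], NULL, 10) : 1024;
        while (count--) {
            next = next * 1103515245 + 12345;
            putchar((next >> 16) & 0xff);
        }
        return 0;
    }
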
Nicolas Pitre
91ecbeca48 validate reused pack data with CRC when possible
This replaces the inflate validation with a CRC validation when reusing
data from a pack which uses index version 2.  That makes repacking much
safer against corruptions, and it should be a bit faster too.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-10 12:48:14 -07:00
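
A sketch of the check, using zlib's crc32() (function name and error
convention illustrative): the raw, still-deflated bytes about to be
copied are summed and compared against the CRC recorded in the v2
index, with no inflating required:

    #include <zlib.h>

    /* Returns 0 if the reused data matches the CRC from the index. */
    static int check_reused_data(const unsigned char *buf, unsigned int len,
                                 unsigned long index_crc)
    {
        unsigned long crc = crc32(0, Z_NULL, 0);

        crc = crc32(crc, buf, len);
        return crc == index_crc ? 0 : -1;
    }
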
Nicolas Pitre
4ba7d71153 allow forcing index v2 and 64-bit offset threshold
This is necessary for testing the new capabilities in some automated
way without having an actual 4GB+ pack.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-10 12:48:14 -07:00
Nicolas Pitre
8c681e07c9 pack-redundant.c: learn about index v2
Initially the conversion was made using nth_packed_object_sha1(), which
made this file completely index-version agnostic.  Unfortunately the
overhead was quite significant, so I went back to raw index walking, but
with selectable base and step values, which brought back performance
similar to the original.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-10 12:48:14 -07:00
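
A sketch of the base/step walk over the two raw layouts (descriptor
fields hypothetical): v1 packs 24-byte records (4-byte offset plus
20-byte SHA1) after the fan-out table, while v2 keeps bare 20-byte
SHA1s after an 8-byte header and the fan-out:

    #include <stddef.h>
    #include <stdint.h>

    struct packed_git {
        const unsigned char *index_data;
        int index_version;
    };

    static const unsigned char *nth_sha1(const struct packed_git *p,
                                         uint32_t n)
    {
        size_t base, step;

        if (p->index_version == 1) {
            base = 256 * 4 + 4; /* fan-out, then skip the offset */
            step = 24;
        } else {
            base = 8 + 256 * 4; /* header, then fan-out */
            step = 20;
        }
        return p->index_data + base + step * n;
    }
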
Nicolas Pitre
32637cdf4a show-index.c: learn about index v2
When index v2 is encountered, the CRC32 of each object is also displayed
in parentheses at the end of the line.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-10 12:48:14 -07:00
Nicolas Pitre
74e34e1fca sha1_file.c: learn about index version 2
With this patch, packs larger than 4GB are usable, even on a 32-bit machine
(at least on Linux).  If off_t is not large enough to deal with a large
pack then die() is called instead of attempting to use the pack and
producing garbage.

This was tested with an 8GB pack specially created for the occasion on
a 32-bit machine.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-10 12:48:14 -07:00
Nicolas Pitre
d1a46a9eab index-pack: learn about pack index version 2
Like previous patch but for index-pack.

[ There is quite some code duplication between pack-objects and index-pack
  for generating a pack index (and fast-import as well I suppose).  This
  should be reworked into a common function eventually. But not now. ]

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-10 12:48:14 -07:00
Nicolas Pitre
c553ca25bd pack-objects: learn about pack index version 2
Pack index version 2 goes as follows:

 - 8 bytes of header with signature and version.

 - 256 entries of 4-byte first-level fan-out table.

 - Table of sorted 20-byte SHA1 records for each object in pack.

 - Table of 4-byte CRC32 entries for raw pack object data.

 - Table of 4-byte offset entries for objects in the pack if offset is
   representable with 31 bits or less, otherwise it is an index in the next
   table with top bit set.

 - Table of 8-byte offset entries indexed from previous table for offsets
   which are 32 bits or more (optional).

 - 20-byte SHA1 checksum of sorted object names.

 - 20-byte SHA1 checksum of the above.

The object SHA1 table is all contiguous, so a future pack format that
contains this table directly won't require big changes to the code.  It
is also tighter, for slightly better cache locality when looking up
entries.

Support for large packs exceeding 31 bits in size won't impose any
index size bloat on packs within that range that don't need 64-bit
offsets.  And because newer objects, which are likely to be the most
frequently used, are located at the beginning of the pack, they won't
pay for the 64-bit offset lookup at run time either, even if the pack
is large.

Right now, index version 2 is created only when the biggest offset in a
pack reaches 31 bits.  It might be a good idea to always use index version
2 eventually to benefit from the CRC32 it contains when reusing pack data
while repacking.

[jc: with the "oops" fix to keep track of the last offset correctly]

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-10 12:48:14 -07:00
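
A sketch of resolving an object offset under this scheme (illustrative;
assumes pointers to the 4-byte and 8-byte offset tables, stored in
network byte order as laid out above):

    #include <arpa/inet.h>
    #include <stdint.h>

    static uint64_t nth_offset(const uint32_t *off32, const uint32_t *off64,
                               uint32_t n)
    {
        uint32_t v = ntohl(off32[n]);

        if (!(v & 0x80000000))
            return v;               /* fits in 31 bits */
        v &= 0x7fffffff;            /* index into the 64-bit table */
        return ((uint64_t)ntohl(off64[2 * v]) << 32) | ntohl(off64[2 * v + 1]);
    }
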
Nicolas Pitre
ee5743ce19 compute object CRC32 with index-pack
Same as previous patch but for index-pack.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-10 12:48:14 -07:00
Nicolas Pitre
78d1e84fe5 compute a CRC32 for each object as stored in a pack
The most important optimization for performance when repacking is the
ability to reuse data from a previous pack as is and bypass any delta
or even SHA1 computation by simply copying the raw data from one pack
to another directly.

The problem with this is that any data corruption within a copied object
would go unnoticed and the new (repacked) pack would be self-consistent
with its own checksum despite containing a corrupted object.  This is a
real issue that already happened at least once in the past.

In some attempt to prevent this, we validate the copied data by inflating
it and making sure no error is signaled by zlib.  But this is still not
perfect: a significant portion of a pack's content is made of object
headers and references to delta base objects, which are not deflated and
therefore not validated when repacking, leaving pack data reuse not as
safe as it could be.

Of course a full SHA1 validation could be performed, but that implies
full data inflating and delta replaying, which is extremely costly; that
cost is what the data reuse optimization was designed to avoid in the
first place.

So the best solution to this is simply to store a CRC32 of the raw pack
data for each object in the pack index.  This way any object in a pack can
be validated before being copied as is into another pack, including the
header and any other non-deflated data.

Why CRC32 instead of a faster checksum like Adler32?  Quoting Wikipedia:

   Jonathan Stone discovered in 2001 that Adler-32 has a weakness for very
   short messages. He wrote "Briefly, the problem is that, for very short
   packets, Adler32 is guaranteed to give poor coverage of the available
   bits. Don't take my word for it, ask Mark Adler. :-)" The problem is
   that sum A does not wrap for short messages. The maximum value of A for
   a 128-byte message is 32640, which is below the value 65521 used by the
   modulo operation. An extended explanation can be found in RFC 3309,
   which mandates the use of CRC32 instead of Adler-32 for SCTP, the
   Stream Control Transmission Protocol.

In the context of a GIT pack, we have lots of small objects, especially
deltas, which are likely to be quite small and in a size range for which
Adler32 is deemed not to be sufficient.  Another advantage of CRC32 is
the possibility of recovery from certain types of small corruptions like
single-bit errors, which are the most probable type of corruption.

OK, what this patch does is compute the CRC32 of each object written to
a pack within pack-objects.  It is not written to the index yet, and it
is obviously not validated when reusing pack data yet either.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-10 12:48:14 -07:00
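
A sketch of the accumulation, using zlib's crc32() (names illustrative):
every raw byte emitted for an object feeds the running CRC, so the
recorded value covers exactly what a later repack would copy verbatim:

    #include <stdio.h>
    #include <zlib.h>

    static unsigned long obj_crc;

    static void begin_object(void)
    {
        obj_crc = crc32(0, Z_NULL, 0);
    }

    /* Route the object's header, delta base reference and deflated
     * payload alike through this writer. */
    static size_t write_object_bytes(FILE *f, const unsigned char *buf,
                                     unsigned int n)
    {
        obj_crc = crc32(obj_crc, buf, n);
        return fwrite(buf, 1, n, f);
    }
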
Nicolas Pitre
d7dd02231f add overflow tests on pack offset variables
Change a few size and offset variables to more appropriate types, then
add overflow tests on those offsets.  This prevents any bad data from
being generated/processed if off_t happens to not be large enough to handle
some big packs.

Better safe than sorry.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-10 12:48:14 -07:00
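
One such guard might look like this sketch (illustrative):

    #include <stdint.h>
    #include <sys/types.h>

    /* After reading a 64-bit offset from pack data, make sure it
     * survives the round-trip through off_t before using it. */
    static int fits_in_off_t(uint64_t offset)
    {
        off_t o = (off_t)offset;

        return o >= 0 && (uint64_t)o == offset;
    }
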
Nicolas Pitre
8723f21626 make overflow test on delta base offset work regardless of variable size
This patch introduces the MSB() macro to obtain the desired number of
most significant bits from a given variable independently of the variable
type.

It is then used to better implement the overflow test on the OBJ_OFS_DELTA
base offset variable with the property of always working correctly
regardless of the type/size of that variable.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-10 12:48:14 -07:00
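
A sketch of such a macro (names as described; the body is illustrative,
not necessarily the committed one):

    #include <limits.h>

    #define bitsizeof(x)  (CHAR_BIT * sizeof(x))

    /* Mask off everything but the top `bits` bits of x, for any
     * unsigned integer type of x. */
    #define MSB(x, bits)  ((x) & (~0ULL << (bitsizeof(x) - (bits))))

    /* e.g. while shifting 7 more bits into an OBJ_OFS_DELTA base
     * offset varint:
     *
     *     if (MSB(base_offset, 7))
     *         die("offset overflow");  // the shift would lose bits
     *     base_offset = (base_offset << 7) | (c & 0x7f);
     */
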
Nicolas Pitre
57059091fa get rid of num_packed_objects()
The coming index format change doesn't allow for the number of objects
to be determined from the size of the index file directly.  Instead, let's
initialize a field in the packed_git structure with the object count when
the index is validated since the count is always known at that point.

While at it, let's reorder some struct packed_git fields to avoid padding
due to the 64-bit alignment needed by some of them.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-10 12:48:14 -07:00
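
A sketch of where that count comes from (field name illustrative): both
index versions carry a 256-entry cumulative fan-out table whose last
entry is the total object count, available as soon as the index is
mapped and validated:

    #include <arpa/inet.h>
    #include <stdint.h>

    static uint32_t object_count(const uint32_t *fanout /* 256 entries */)
    {
        return ntohl(fanout[255]);
    }
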
René Scharfe
8ff21b1a33 git-archive: make tar the default format
As noted by Junio, --format=tar should be assumed if no format
was specified.

Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-09 18:51:40 -07:00
Junio C Hamano
5bcbc7ff10 Merge branch 'jc/push'
* jc/push:
  git-push to multiple locations does not stop at the first failure
  git-push reports the URL after failing.
2007-04-08 23:54:47 -07:00
Junio C Hamano
58fe516bb5 Merge branch 'jc/merge-subtree'
* jc/merge-subtree:
  A new merge strategy 'subtree'.

It is safe to merge this early as this is a feature that the user
explicitly needs to ask for and would not trigger otherwise.  A
known issue with the current implementation is that the subtree
matching heuristic is very stupid.  It could run ls-tree twice
and try to count the intersection.

Giving it a wider audience would help it get improved by
motivated volunteers.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-08 23:54:17 -07:00
Junio C Hamano
27be481ffb Merge branch 'js/fetch-progress'
* js/fetch-progress:
  git-fetch: add --quiet
2007-04-08 23:27:22 -07:00
Junio C Hamano
d39d10d7fc Merge branch 'maint'
* maint:
  Add Documentation/cmd-list.made to .gitignore
  git-svn: fix log command to avoid infinite loop on long commit messages
  git-svn: dcommit/rebase confused by patches with git-svn-id: lines
  git-svn: bail out on incorrect command-line options
2007-04-08 23:20:43 -07:00
Junio C Hamano
24c64d6add Add Documentation/cmd-list.made to .gitignore
Noticed by Randal L. Schwartz.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-08 22:14:16 -07:00
Eric Wong
c16d08713e git-svn: fix log command to avoid infinite loop on long commit messages
This bug has been around since the conversion to use the
Git.pm library back in October or November.  Eventually I'd like
"git rev-list/log" to have the option to not truncate overly
long messages.

Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-08 19:54:07 -07:00
Eric Wong
13c823fb52 git-svn: dcommit/rebase confused by patches with git-svn-id: lines
When patches are merged from another git-svn managed branch,
they will have the git-svn-id: metadata line in them (generated
by git-format-patch).

When doing rebase or dcommit via git-svn, this would cause
git-svn to find the wrong upstream branch.  We now verify
that the commit is consistent with the value in the .rev_db
file.

Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-08 19:53:54 -07:00
Eric Wong
512b620bd9 git-svn: bail out on incorrect command-line options
"git svn log" is the only command that needs the pass-through
option in Getopt::Long; otherwise we will bail out and let the
user know something is wrong.

Also, avoid printing out unaccepted mixed-case options (that
are reserved for the command-line) such as --useSvmProps
in the usage() function.

Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-08 19:53:42 -07:00
Junio C Hamano
8d1608b8bf Start 1.5.2 cycle by preparing RelNotes for it.
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-07 23:59:32 -07:00
Junio C Hamano
640ee0d1cd Merge branch 'jc/read-tree-df' (early part)
* 'jc/read-tree-df' (early part):
  Fix switching to a branch with D/F when current branch has file D.
  Fix twoway_merge that passed d/f conflict marker to merged_entry().
  Fix read-tree --prefix=dir/.
  unpack-trees: get rid of *indpos parameter.
  unpack_trees.c: pass unpack_trees_options structure to keep_entry() as well.
  add_cache_entry(): removal of file foo does not conflict with foo/bar
2007-04-07 23:52:40 -07:00
Junio C Hamano
5838dffdcb Merge branch 'maint'
* maint:
  Prepare for 1.5.1.1
  cvsserver: small corrections to asciidoc documentation
2007-04-07 23:36:22 -07:00
Junio C Hamano
732bcf942c Prepare for 1.5.1.1
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-07 23:33:38 -07:00
Frank Lichtenheld
e94a4f6eed cvsserver: small corrections to asciidoc documentation
Fix a typo: s/Not/Note/

Some formatting fixes: Use ` ` syntax for all filenames and
' ' syntax for all command-line switches.

Signed-off-by: Frank Lichtenheld <frank@lichtenheld.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-07 23:01:54 -07:00
Junio C Hamano
68faf68938 A new merge strategy 'subtree'.
This merge strategy largely piggy-backs on git-merge-recursive.
When merging trees A and B, if B corresponds to a subtree of A,
B is first adjusted to match the tree structure of A, instead of
reading the trees at the same level.  This adjustment is also
done to the common ancestor tree.

If you are pulling updates from the git-gui repository into the git.git
repository, the root level of the former corresponds to the git-gui/
subdirectory of the latter.  The tree object of git-gui's toplevel
is wrapped in a fake tree object, whose sole entry has the name 'git-gui'
and records the object name of the true tree, before being used by
the 3-way merge code.

If you are merging the other way, only the git-gui/ subtree of
git.git is extracted and merged into git-gui's toplevel.

The detection of corresponding subtree is done by comparing the
pathnames and types in the toplevel of the tree.

Heuristics galore!  That's the git way ;-).

Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-07 02:29:40 -07:00
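
A sketch of the fake-tree wrapping (illustrative, not the actual
implementation): a raw tree entry is "<octal mode> <name>\0" followed
by the 20-byte object name, so a one-entry tree pointing at git-gui's
toplevel takes 34 bytes:

    #include <stdio.h>
    #include <string.h>

    /* `buf` must hold at least 34 bytes; returns the tree size. */
    static unsigned long fake_tree(unsigned char *buf,
                                   const unsigned char *tree_sha1)
    {
        unsigned long n = sprintf((char *)buf, "40000 git-gui") + 1;

        memcpy(buf + n, tree_sha1, 20);
        return n + 20;
    }
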
Junio C Hamano
fd1d1b05e9 git-push to multiple locations does not stop at the first failure
When pushing into multiple repositories with git push, via
multiple URL in .git/remotes/$shorthand or multiple url
variables in [remote "$shorthand"] section, we used to stop upon
the first failure.  Continue the operation and report the
failure at the end.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-07 02:27:31 -07:00
Junio C Hamano
39878b0cb7 git-push reports the URL after failing.
This came up on #git when somebody was getting 'unable to create
./objects/tmp_oXXXX' but swore he had write permission to that
directory.  It turned out that the repository URL had been changed
and he was accessing a repository he no longer had write
permission to.

I am not sure how much this would have helped somebody who
believed he was accessing a location whose permission was changed
while he was looking the other way, though.  But giving more
information on the error path would be better, and the next change
is helped by this as well.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-07 02:27:31 -07:00
Junio C Hamano
ee9693e246 Merge branch 'jc/index-output'
* jc/index-output:
  git-read-tree --index-output=<file>
  _GIT_INDEX_OUTPUT: allow plumbing to output to an alternative index file.

Conflicts:

	builtin-apply.c
2007-04-07 02:26:24 -07:00
Junio C Hamano
39415449ee Merge branch 'fp/make-j'
* fp/make-j:
  Makefile: Add '+' to QUIET_SUBDIR0 to fix parallel make.
2007-04-07 02:20:47 -07:00
Junio C Hamano
5bba1b355e Merge branch 'cc/bisect'
* cc/bisect:
  git-bisect: allow bisecting with only one bad commit.
  t6030: add a bit more tests to git-bisect
  git-bisect: modernization
  Documentation: bisect: "start" accepts one bad and many good commits
  Bisect: teach "bisect start" to optionally use one bad and many good revs.
2007-04-07 02:20:39 -07:00
Junio C Hamano
b7108a16a6 Merge branch 'jc/checkout' (early part)
* 'jc/checkout' (early part):
  checkout: allow detaching to HEAD even when switching to the tip of a branch
2007-04-07 02:19:54 -07:00
Junio C Hamano
ced38ea252 Merge branch 'maint'
* maint:
  Documentation: tighten dependency for git.{html,txt}
  Makefile: iconv() on Darwin has the old interface
  t5300-pack-object.sh: portability issue using /usr/bin/stat
  t3200-branch.sh: small language nit
  usermanual.txt: some capitalization nits
  Make builtin-branch.c handle the git config file
  rename_ref(): only print a warning when config-file update fails
  Distinguish branches by more than case in tests.
  Avoid composing too long "References" header.
  cvsimport: Improve formating consistency
  cvsimport: Reorder options in documentation for better understanding
  cvsimport: Improve usage error reporting
  cvsimport: Improve documentation of CVSROOT and CVS module determination
  cvsimport: sync usage lines with existing options

Conflicts:

	Documentation/Makefile
2007-04-07 01:30:43 -07:00
Junio C Hamano
d79073922f Documentation: tighten dependency for git.{html,txt}
Every time _any_ documentation page changed, cmds-*.txt files
were regenerated, which caused git.{html,txt} to be remade.  Try
not to update cmds-*.txt files if their new contents match the
old ones.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-06 21:29:16 -07:00
Arjen Laarhoven
63b4b7a7ed Makefile: iconv() on Darwin has the old interface
The libiconv on Darwin uses the old iconv() interface (2nd argument is a
const char **, instead of a char **).  Add OLD_ICONV to the Darwin
variable definitions to handle this.

Signed-off-by: Arjen Laarhoven <arjen@yaph.org>
Acked-by: Brian Gernhardt <benji@silverinsanity.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-06 21:29:15 -07:00
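
A sketch of the usual compat shim such a knob enables (illustrative):

    #include <iconv.h>
    #include <stddef.h>

    #ifdef OLD_ICONV
    typedef const char *iconv_ibp;  /* old interface: const input */
    #else
    typedef char *iconv_ibp;
    #endif

    /* Compiles against either prototype of iconv(). */
    static size_t convert(iconv_t cd, char *in, size_t insz,
                          char *out, size_t outsz)
    {
        iconv_ibp inp = in;

        return iconv(cd, &inp, &insz, &out, &outsz);
    }
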
Arjen Laarhoven
d93f7c1817 t5300-pack-object.sh: portability issue using /usr/bin/stat
In the test 'compare delta flavors', /usr/bin/stat is used to get the file
size.  This isn't portable.  There already is a dependency on Perl; use its
'-s' operator to get the file size.

Signed-off-by: Arjen Laarhoven <arjen@yaph.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-06 21:19:28 -07:00