On ARM I have the following compilation errors:
CC fast-import.o
In file included from cache.h:8,
from builtin.h:6,
from fast-import.c:142:
arm/sha1.h:14: error: conflicting types for 'SHA_CTX'
/usr/include/openssl/sha.h:105: error: previous declaration of 'SHA_CTX' was here
arm/sha1.h:16: error: conflicting types for 'SHA1_Init'
/usr/include/openssl/sha.h:115: error: previous declaration of 'SHA1_Init' was here
arm/sha1.h:17: error: conflicting types for 'SHA1_Update'
/usr/include/openssl/sha.h:116: error: previous declaration of 'SHA1_Update' was here
arm/sha1.h:18: error: conflicting types for 'SHA1_Final'
/usr/include/openssl/sha.h:117: error: previous declaration of 'SHA1_Final' was here
make: *** [fast-import.o] Error 1
This is because the OpenSSL header files are always included from
git-compat-util.h (since commit 684ec6c63c) whenever NO_OPENSSL is not
set, which indirectly brings in <openssl/sha.h>, clashing with the
custom ARM version. Compilation of git is probably broken on PPC too
for the same reason.
It turns out that the only file requiring openssl/ssl.h and openssl/err.h
is imap-send.c. But moving those problematic includes there alone
doesn't solve the issue, as imap-send.c also includes cache.h, which
brings in the conflicting local SHA1 header file.
As suggested by Jeff King, the best solution is to rename our references
to SHA1 functions and structure to something git specific, and define those
according to the implementation used.
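The shape of that change is roughly the following (a sketch of the idea;
the exact header layout in the patch may differ):

    #ifdef SHA1_HEADER                  /* a custom ARM/PPC/Mozilla implementation was selected */
    #include SHA1_HEADER                /* declares git_SHA_CTX, git_SHA1_Init, ... itself */
    #else                               /* fall back to OpenSSL */
    #include <openssl/sha.h>
    #define git_SHA_CTX     SHA_CTX
    #define git_SHA1_Init   SHA1_Init
    #define git_SHA1_Update SHA1_Update
    #define git_SHA1_Final  SHA1_Final
    #endif

Git code then refers only to the git_SHA1_* names, and the mapping above
decides which implementation they resolve to at build time.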
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
* jc/alternate-push:
push: receiver end advertises refs from alternate repositories
push: prepare sender to receive extended ref information from the receiver
receive-pack: make it a builtin
is_directory(): a generic helper function
* maint:
sha1_file: link() returns -1 on failure, not errno
Make git archive respect core.autocrlf when creating zip format archives
Add new test to demonstrate git archive core.autocrlf inconsistency
gitweb: avoid warnings for commits without body
Clarified gitattributes documentation regarding custom hunk header.
git-svn: fix handling of even funkier branch names
git-svn: Always create a new RA when calling do_switch for svn://
git-svn: factor out svnserve test code for later use
diff/diff-files: do not use --cc too aggressively
5723fe7 (Avoid cross-directory renames and linking on object creation,
2008-06-14) changed the call to use link() directly instead of through a
custom wrapper, but forgot that it returns 0 or -1, not 0 or errno.
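For illustration, the correct pattern reads errno after a negative return;
this little wrapper is made up for the example, not taken from the patch:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* link() returns 0 or -1; the reason for a failure lives in errno */
    static int link_or_report(const char *tmpfile, const char *filename)
    {
            if (link(tmpfile, filename) < 0 && errno != EEXIST) {
                    fprintf(stderr, "unable to link %s: %s\n",
                            filename, strerror(errno));
                    return -1;
            }
            return 0;
    }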
Signed-off-by: Thomas Rast <trast@student.ethz.ch>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Earlier, when pushing into a repository that borrows from alternate object
stores, we followed the longstanding design decision not to trust refs in
the alternate repository that houses the object store we are borrowing
from. If your public repository is borrowing from Linus's public
repository, you pushed into it a long time ago, and now when you try to push
your updated history that is in sync with more recent history from Linus,
you will end up sending not just your own development, but also the
changes you acquired through Linus's tree, even though the objects needed
for the latter already exist at the receiving end. This is because the
receiving end does not advertise that the objects only reachable from the
borrowed repository (i.e. Linus's) are already available there.
This solves the issue by making the receiving end advertise refs from
borrowed repositories. They are not sent with their true names but with a
phoney name ".have" to make sure that the old senders will safely ignore
them (otherwise, the old senders would misbehave: a matching push would
try to update these refs, and a mirror push would delete refs that only
exist at the receiving end).
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A simple "grep -e stat --and -e S_ISDIR" revealed there are many
open-coded implementations of this function.
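A minimal sketch of such a helper (the obvious shape; treat it as an
illustration rather than the exact patch):

    #include <sys/stat.h>

    int is_directory(const char *path)
    {
            struct stat st;
            return !stat(path, &st) && S_ISDIR(st.st_mode);
    }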
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We used to allow callers to pass "foo/bar/" to make sure both "foo" and
"foo/bar" exist and have good permissions, but this interface is too error
prone. If a caller mistakenly passes a path with trailing slashes
(perhaps it forgot to verify the user input) even when it wants to later
mkdir "bar" itself, it will find that it cannot mkdir "bar". If such a
caller does not bother to check the error for EEXIST, it may even
erroneously die().
Because no existing callers use that obscure feature, this patch
removes it to avoid confusion.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is needed to fix verify-pack -v with multiple pack arguments.
Also, in theory, revindex data (if any) must be discarded whenever
reprepare_packed_git() is called. In practice this is hard to trigger
though.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* dp/hash-literally:
add --no-filters option to git hash-object
add --path option to git hash-object
use parse_options() in git hash-object
correct usage help string for git-hash-object
correct argument checking test for git hash-object
teach index_fd to work with pipes
When dealing with a repository with lots of loose objects, sha1_object_info
would rescan the packs directory every time an unpacked object was referenced
before finally giving up and looking for the loose object. This caused a lot
of extra unnecessary system calls during git pack-objects; the code was
rereading the entire pack directory once for each loose object file.
This patch looks for a loose object before falling back to rescanning the
pack directory, rather than the other way around.
Signed-off-by: Steven Grimm <koreth@midwinter.com>
Acked-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
index_fd can now work with file descriptors that are not normal files
but any readable file. If the given file descriptor is a regular file
then mmap() is used; for other files, strbuf_read is used.
The path parameter, which has been used as hint for filters, can be
NULL now to indicate that the file should be hashed literally without
any filter.
The index_pipe function is removed as redundant.
Signed-off-by: Dmitry Potapov <dpotapov@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since commit 8eca0b47ff, it is possible
for read_sha1_file() to return NULL even with existing objects when they
are corrupted. Previously a corrupted object would have terminated the
program immediately, effectively making read_sha1_file() return NULL
only when the specified object is not found.
Let's restore this behavior for all users of read_sha1_file() and
provide a separate function with the ability to not terminate when
bad objects are encountered.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When recursing to unpack a delta base we must unuse_pack() so that
the pack window for the current object does not remain pinned in
memory while the delta base is itself being unpacked and materialized
for our use.
On a long delta chain of 50 objects we may need to access 6 different
windows from a very large (>3G) pack file in order to obtain all
of the delta base content. If the process ulimit permits us to
map/allocate only 1.5G we must release windows during this recursion
to ensure we stay within the ulimit and transition memory from pack
cache to standard malloc, or other mmap needs.
Inserting an unuse_pack() call prior to the recursion allows us to
avoid pinning the current window, making it available for garbage
collection if memory runs low.
This has been broken since at least before 1.5.1-rc1, and very
likely earlier than that. It's fixed now. :)
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When printing values of type uint32_t, we should use PRIu32, and should
not assume that it is unsigned int. On 32-bit platforms, it could be
defined as unsigned long. The same caution applies to ntohl().
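For illustration, the portable way to print such a value (the variable and
function names here are made up):

    #include <inttypes.h>
    #include <stdio.h>

    void print_count(uint32_t nr)
    {
            /* PRIu32 expands to the right conversion for uint32_t everywhere,
             * whether the platform defines it as unsigned int or unsigned long */
            printf("%"PRIu32" objects\n", nr);
    }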
Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* j6t/mingw: (38 commits)
compat/pread.c: Add a forward declaration to fix a warning
Windows: Fix ntohl() related warnings about printf formatting
Windows: TMP and TEMP environment variables specify a temporary directory.
Windows: Make 'git help -a' work.
Windows: Work around an oddity when a pipe with no reader is written to.
Windows: Make the pager work.
When installing, be prepared that template_dir may be relative.
Windows: Use a relative default template_dir and ETC_GITCONFIG
Windows: Compute the fallback for exec_path from the program invocation.
Turn builtin_exec_path into a function.
Windows: Use a customized struct stat that also has the st_blocks member.
Windows: Add a custom implementation for utime().
Windows: Add a new lstat and fstat implementation based on Win32 API.
Windows: Implement a custom spawnve().
Windows: Implement wrappers for gethostbyname(), socket(), and connect().
Windows: Work around incompatible sort and find.
Windows: Implement asynchronous functions as threads.
Windows: Disambiguate DOS style paths from SSH URLs.
Windows: A rudimentary poll() emulation.
Windows: Implement start_command().
...
* lt/config-fsync:
Add config option to enable 'fsync()' of object files
Split up default "i18n" and "branch" config parsing into helper routines
Split up default "user" config parsing into helper routine
Split up default "core" config parsing into helper routine
The shell version used to use "mkdir -p" to create the repo
path, but the C version just calls "mkdir". Let's replicate
the old behavior. We have to create the git and worktree
leading dirs separately; while most of the time, the
worktree dir contains the git dir (as .git), the user can
override this using GIT_WORK_TREE.
We can reuse safe_create_leading_directories, but we need to
make a copy of our const buffer to do so. Since
merge-recursive uses the same pattern, we can factor this
out into a global function. This has two other cleanup
advantages for merge-recursive:
1. mkdir_p wasn't a very good name. "mkdir -p foo/bar" actually
creates bar, but this function just creates the leading
directories.
2. mkdir_p took a mode argument, but it was completely
ignored.
Acked-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Using find_pack_entry_one() to get object offsets is rather suboptimal
when nth_packed_object_offset() can be used directly.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
New pack structures are currently allocated in 2 different places
and all members have to be initialized explicitly. This is prone
to errors leading to segmentation faults as found by Teemu Likonen.
Let's have a common place where this structure is allocated, and have
all members explicitly initialized to zero.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We should be able to fall back to loose objects or alternative packs when
a pack becomes corrupted. This is especially true when an object exists
in one pack only as a delta but its base object is corrupted. Currently
there is no way to retrieve the former object even if the latter is
available in another pack or loose.
This patch allows for a delta to be resolved (with a performance cost)
using a base object from a source other than the pack where that delta
is located. The same goes for non-delta objects: rather than failing
outright, a search is made in other packs or among loose objects when
the copy in the currently active pack is corrupted.
Of course git will become extremely noisy with error messages when that
happens. However, if the operation succeeds nevertheless, a simple
'git repack -a -f -d' will "fix" the corrupted repository given that all
corrupted objects have a good duplicate somewhere in the object store,
possibly manually copied from another source.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The AIX mkstemp will modify its template parameter to an empty string if
the call fails. This caused a subsequent mkdir to fail.
Signed-off-by: Patrick Higgins <patrick.higgins@cexp.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In this function we must be careful to handle drive-local paths else there
is a danger that it runs into an infinite loop.
Signed-off-by: Johannes Sixt <johannes.sixt@telecom.at>
As explained in the documentation[*] this is totally useless on
filesystems that do ordered/journalled data writes, but it can be a
useful safety feature on filesystems like HFS+ that only journal the
metadata, not the actual file contents.
It defaults to off, although we could presumably in theory some day
auto-enable it on a per-filesystem basis.
[*] Yes, I updated the docs for the thing. Hell really _has_ frozen
over, and the four horsemen are probably just beyond the horizon.
EVERYBODY PANIC!
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It was implemented as a thin wrapper around an otherwise unused
helper function parse_pack_index_file(). The code becomes simpler
and easier to read by consolidating the two.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In a shared repository, we should make sure adjust_shared_perm() is called
after creating the initial fan-out directories under objects/ directory.
Earlier, a logic error caused the function to be called only when mkdir()
failed; we should call it when mkdir() succeeds.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Before even calling this, all callers have done a "has_sha1_file(sha1)"
or "has_loose_object(sha1)" check, so there is no point in doing a
second check.
If something races with us on object creation, we handle that in the
final link() that moves it to the right place.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Now that we've made the loose SHA1 file reading more careful and
streamlined, we only use the old find_sha1_file() function for checking
whether a loose object file exists at all.
As such, the whole 'return stat information' part of it was just
pointless (nobody cares any more), and the naming of the function is not
really all that relevant either.
So simplify it to not do a 'stat()', but just an existence check (which
is what the callers want), and rename it to 'has_loose_object()' which
matches the use.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We used to do 'stat()+open()+mmap()+close()' to read the loose object
file data, which does work fine, but has a couple of problems:
- it unnecessarily walks the filename twice (at 'stat()' time and then
again to open it)
- NFS generally has open-close consistency guarantees, which means that
the initial 'stat()' was technically done outside of the normal
consistency rules.
So change it to do 'open()+fstat()+mmap()+close()' instead, which avoids
both these issues.
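A hedged sketch of the new pattern (the function name and error handling
are illustrative, not the actual git code):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static void *map_object_file(const char *path, size_t *size)
    {
            struct stat st;
            void *map;
            int fd = open(path, O_RDONLY);

            if (fd < 0)
                    return NULL;
            if (fstat(fd, &st) < 0) {       /* stat via the open fd, inside open-close consistency */
                    close(fd);
                    return NULL;
            }
            *size = st.st_size;
            map = mmap(NULL, *size, PROT_READ, MAP_PRIVATE, fd, 0);
            close(fd);                      /* the mapping survives the close */
            return map == MAP_FAILED ? NULL : map;
    }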
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Instead of creating new temporary objects in the top-level git object
directory, create them in the same directory they will finally end up in
anyway. This avoids making the final atomic "rename to stable name"
operation be a cross-directory event, which makes it a lot easier for
various filesystems.
Several filesystems do things like change the inode number when moving
files across directories (or refuse to do it entirely).
In particular, it can also cause problems for NFS implementations that
change the filehandle of a file when it moves to a different directory,
like the old user-space NFS server did, and like the Linux knfsd still
does if you don't export your filesystems with 'no_subtree_check' (or if
you export a filesystem that doesn't have stable inode numbers across
renames).
This change also obviously implies creating the object fan-out
subdirectory at tempfile creation time, rather than at the final
move_temp_to_file() time. Which actually accounts for most of the size
of the patch.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
write_sha1_from_fd() and write_sha1_to_fd() were dead code that nobody
called, and neither was the latter's helper repack_object().
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This consolidates the common operations for closing the new temporary file
that we have written, before we move it into place with the final name.
There's some common code there (make it read-only and check for errors on
close), but more importantly, this also gives a single place to add an
fsync_or_die() call if we want to add a safe mode.
This was triggered due to Denis Bueno apparently twice being able to
corrupt his git repository on OS X due to an unlucky combination of kernel
crashes and a not-very-robust filesystem.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
An earlier commit 633f43e (Remove redundant code, eliminate one static
variable, 2008-05-24) had a thinko (perhaps an eyeno) that broke
sha1_pack_index_name() function. One symptom of this was that the http
walker is now completely broken.
This should fix it.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* db/clone-in-c:
Add test for cloning with "--reference" repo being a subset of source repo
Add a test for another combination of --reference
Test that --reference actually suppresses fetching referenced objects
clone: fall back to copying if hardlinking fails
builtin-clone.c: Need to closedir() in copy_or_link_directory()
builtin-clone: fix initial checkout
Build in clone
Provide API access to init_db()
Add a function to set a non-default work tree
Allow for having for_each_ref() list extra refs
Have a constant extern refspec for "--tags"
Add a library function to add an alternate to the alternates file
Add a lockfile function to append to a file
Mark the list of refs to fetch as const
Conflicts:
cache.h
t/t5700-clone-reference.sh
This is meant to force the creation of a loose object even if it
already exists packed. Needed for the next commit.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is in the core so that, if the alternates file has already been
read, the addition can be parsed and put into effect for the current
process.
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Currently, when looking for a packed object from the pack idx, a
simple binary search is used.
A conventional binary search loop looks like this:
    unsigned lo, hi;

    do {
            unsigned mi = (lo + hi) / 2;
            int cmp = "entry pointed at by mi" minus "target";

            if (!cmp)
                    return mi;      /* mi is the wanted one */
            if (cmp > 0)
                    hi = mi;        /* mi is larger than target */
            else
                    lo = mi + 1;    /* mi is smaller than target */
    } while (lo < hi);
    /* did not find what we wanted */
The invariants are:

 - When entering the loop, 'lo' points at a slot that is never
   above the target (it could be at the target), 'hi' points at
   a slot that is guaranteed to be above the target (it can
   never be at the target).

 - We find a point 'mi' between 'lo' and 'hi' ('mi' could be
   the same as 'lo', but never can be as high as 'hi'), and
   check if 'mi' hits the target. There are three cases:

    - if it is a hit, we have found what we are looking for;

    - if it is strictly higher than the target, we set it to
      'hi', and repeat the search.

    - if it is strictly lower than the target, we update 'lo'
      to one slot after it, because we allow 'lo' to be at the
      target and 'mi' is known to be below the target.

If the loop exits, there is no matching entry.
When choosing 'mi', we do not have to take the "middle" but can
pick anywhere in between 'lo' and 'hi', as long as lo <= mi < hi is
satisfied. When we somehow know that the distance between the
target and 'lo' is much shorter than the distance between the target
and 'hi', we could pick 'mi' much closer to 'lo' than (hi+lo)/2,
which is what a conventional binary search would pick.
This patch takes advantage of the fact that SHA-1 is a good
hash function, and as long as there are enough entries in the
table, we can expect uniform distribution. An entry that begins
with, for example, "deadbeef..." is much more likely to appear much
later than the midway point of a reasonably populated table. In
fact, it can be expected to be found about 87% (222/256) of the way
from the top of the table.
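Hypothetically, the first probe could therefore be seeded from the leading
byte instead of the midpoint, along these lines (an illustration of the
idea only, not the actual lookup code):

    /* nr is the number of entries in the sorted index table */
    unsigned lo = 0, hi = nr;
    unsigned mi = (unsigned)(((uint64_t)sha1[0] * nr) / 256);
    /* then refine with the same compare-and-narrow loop as above */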
This is a work-in-progress and has switches to allow easier
experiments and debugging. Exporting the GIT_USE_LOOKUP environment
variable enables this code.
On my admittedly memory starved machine, with a partial KDE
repository (3.0G pack with 95M idx):
$ GIT_USE_LOOKUP=t git log -800 --stat HEAD >/dev/null
3.93user 0.16system 0:04.09elapsed 100%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+55588minor)pagefaults 0swaps
Without the patch, the numbers are:
$ git log -800 --stat HEAD >/dev/null
4.00user 0.15system 0:04.17elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+60258minor)pagefaults 0swaps
In the same repository:
$ GIT_USE_LOOKUP=t git log -2000 HEAD >/dev/null
0.12user 0.00system 0:00.12elapsed 97%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+4241minor)pagefaults 0swaps
Without the patch, the numbers are:
$ git log -2000 HEAD >/dev/null
0.05user 0.01system 0:00.07elapsed 100%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+8506minor)pagefaults 0swaps
There isn't much time difference, but the number of minor faults
seems to show that we are touching a much smaller number of pages,
which is expected.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since commit eb32d236df, there was a TODO
comment in packed_object_info_detail() about the SHA1 of the base object
for OBJ_OFS_DELTA objects. So here it is at last.
While at it, providing the actual storage size information as well is now
trivial.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* mk/maint-parse-careful:
peel_onion: handle NULL
check return value from parse_commit() in various functions
parse_commit: don't fail, if object is NULL
revision.c: handle tag->tagged == NULL
reachable.c::process_tree/blob: check for NULL
process_tag: handle tag->tagged == NULL
check results of parse_commit in merge_bases
list-objects.c::process_tree/blob: check for NULL
reachable.c::add_one_tree: handle NULL from lookup_tree
mark_blob/tree_uninteresting: check for NULL
get_sha1_oneline: check return value of parse_object
read_object_with_reference: don't read beyond the buffer
Now any command may reference the empty tree object by its
sha1 (4b825dc642). This is
useful for showing some diffs, especially for initial
commits.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
CRLF conversion bears a slight chance of corrupting data.
autocrlf=true will convert CRLF to LF during commit and LF to
CRLF during checkout. A file that contains a mixture of LF and
CRLF before the commit cannot be recreated by git. For text
files this is the right thing to do: it corrects line endings
such that we have only LF line endings in the repository.
But for binary files that are accidentally classified as text the
conversion can corrupt data.
If you recognize such corruption early you can easily fix it by
setting the conversion type explicitly in .gitattributes. Right
after committing you still have the original file in your work
tree and this file is not yet corrupted. You can explicitly tell
git that this file is binary and git will handle the file
appropriately.
Unfortunately, the desired effect of cleaning up text files with
mixed line endings and the undesired effect of corrupting binary
files cannot be distinguished. In both cases CRLFs are removed
in an irreversible way. For text files this is the right thing
to do because CRLFs are line endings, while for binary files
converting CRLFs corrupts data.
This patch adds a mechanism that can either warn the user about
an irreversible conversion or can even refuse to convert. The
mechanism is controlled by the variable core.safecrlf, with the
following values:
- false: disable safecrlf mechanism
- warn: warn about irreversible conversions
- true: refuse irreversible conversions
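For example, asking git to refuse such conversions outright is a single
config setting away (shown only as an illustration of the new variable):

    $ git config core.safecrlf true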
The default is to warn. Users are only affected by this default
if core.autocrlf is set. But the current default of git is to
leave core.autocrlf unset, so users will not see warnings unless
they deliberately choose to activate the autocrlf mechanism.
The safecrlf mechanism's details depend on the git command. The
general principles when safecrlf is active (not false) are:
- we warn/error out if files in the work tree can be modified in an
irreversible way without giving the user a chance to back up the
original file.
- for read-only operations that do not modify files in the work tree
we do not print annoying warnings.
There are exceptions. Even though...
- "git add" itself does not touch the files in the work tree, the
next checkout would, so the safety triggers;
- "git apply" to update a text file with a patch does touch the files
in the work tree, but the operation is about text files and CRLF
conversion is about fixing the line ending inconsistencies, so the
safety does not trigger;
- "git diff" itself does not touch the files in the work tree, it is
often run to inspect the changes you intend to next "git add". To
catch potential problems early, safety triggers.
The concept of a safety check was originally proposed in a similar
way by Linus Torvalds. Thanks to Dimitry Potapov for insisting
on getting the naked LF/autocrlf=true case right.
Signed-off-by: Steffen Prohaska <prohaska@zib.de>
fast-import was relying on the fact that on most systems mmap() and
write() are synchronized by the filesystem's buffer cache. We were
relying on the ability to mmap() 20 bytes beyond the current end
of the file, then later fill in those bytes with a future write()
call, then read them through the previously obtained mmap() address.
This isn't always true with some implementations of NFS, but it is
especially not true with our NO_MMAP=YesPlease build time option used
on some platforms. If fast-import was built with NO_MMAP=YesPlease
we used the malloc()+pread() emulation and the subsequent write()
call does not update the trailing 20 bytes of a previously obtained
"mmap()" (aka malloc'd) address.
Under NO_MMAP that behavior causes unpack_entry() in sha1_file.c to
be unable to read an object header (or data) that has been unlucky
enough to be written to the packfile at a location such that it
is in the trailing 20 bytes of a window previously opened on that
same packfile.
This bug has gone unnoticed for a very long time as it is highly data
dependent. Not only does the object have to be placed at the right
position, but it also needs to be positioned behind some other object
that has been accessed due to a branch cache invalidation. In other
words the stars had to align just right, and if you did run into
this bug you probably should also have purchased a lottery ticket.
Fortunately the workaround is a lot easier than the bug explanation.
Before we allow unpack_entry() to read data from a pack window
that has also (possibly) been modified through write() we force
all existing windows on that packfile to be closed. By closing
the windows we ensure that any new access via the emulated mmap()
will reread the packfile, updating to the current file content.
This comes at a slight performance degradation as we cannot reuse
previously cached windows when we update the packfile. But it
is a fairly minor difference as the window closes happen at only
two points:
- When the packfile is finalized and its .idx is generated:
At this stage we are getting ready to update the refs and any
data access into the packfile is going to be random, and is
going after only the branch tips (to ensure they are valid).
Our existing windows (if any) are not likely to be positioned
at useful locations to access those final tip commits so we
probably were closing them before anyway.
- When the branch cache missed and we need to reload:
At this point fast-import is getting change commands for the next
commit and it needs to go re-read a tree object it previously
had written out to the packfile. What windows we had (if any)
are not likely to cover the tree in question so we probably were
closing them before anyway.
We do try to avoid unnecessarily closing windows in the second case
by checking to see if the packfile size has increased since the
last time we called unpack_entry() on that packfile. If the size
has not changed then we have not written additional data, and any
existing window is still valid. This nicely handles the cases where
fast-import is going through a branch cache reload and needs to read
many trees at once. During such an event we are not likely to be
updating the packfile so we do not cycle the windows between reads.
With this change in place t9301-fast-export.sh (which was broken
by c3b0dec509) finally works again.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The old way of fixing warnings did not succeed on MinGW. MinGW
does not support C99 printf format strings for size_t [1]. But
gcc on MinGW issues warnings if C99 printf format is not used.
Hence, the old strategy to avoid warnings fails.
[1] http://www.mingw.org/MinGWiki/index.php/C99
This commit passes arguments of type size_t through a tiny
helper function that casts to the type expected by the format
string.
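A sketch of the idea (the helper and caller names here are illustrative):

    #include <stdio.h>

    /* funnel size_t through a type that has a portable printf conversion */
    static unsigned long sz_fmt(size_t s) { return s; }

    static void report_mapped(size_t len)
    {
            fprintf(stderr, "mapped %lu bytes\n", sz_fmt(len));
    }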
Signed-off-by: Steffen Prohaska <prohaska@zib.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
There are some places that test for an absolute path. Use the helper
function to ease porting.
Signed-off-by: Johannes Sixt <johannes.sixt@telecom.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* maint:
RelNotes-1.5.3.5: describe recent fixes
merge-recursive.c: mrtree in merge() is not used before set
sha1_file.c: avoid gcc signed overflow warnings
Fix a small memory leak in builtin-add
honor the http.sslVerify option in shell scripts
With the recent gcc, we get:
sha1_file.c: In check_packed_git_:
sha1_file.c:527: warning: assuming signed overflow does not
occur when assuming that (X + c) < X is always false
sha1_file.c:527: warning: assuming signed overflow does not
occur when assuming that (X + c) < X is always false
for a piece of code that tries to make sure that off_t is large
enough to hold offsets beyond 2^32. The test tried to make
sure these do not wrap around:
/* make sure we can deal with large pack offsets */
off_t x = 0x7fffffffUL, y = 0xffffffffUL;
if (x > (x + 1) || y > (y + 1)) {
but gcc assumes it can do whatever optimization it wants for a
signed overflow (undefined behaviour) and warns about this
construct.
Follow Linus's suggestion to check sizeof(off_t) instead to work
around the problem.
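For illustration, the suggestion amounts to asking about the declared width
of the type rather than doing arithmetic that can overflow (the function
name below is made up):

    #include <sys/types.h>

    static int can_handle_large_pack_offsets(void)
    {
            /* no signed arithmetic here, so nothing for gcc to "optimize" away */
            return sizeof(off_t) > 4;
    }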
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* ph/strbuf: (44 commits)
Make read_patch_file work on a strbuf.
strbuf_read_file enhancement, and use it.
strbuf change: be sure ->buf is never ever NULL.
double free in builtin-update-index.c
Clean up stripspace a bit, use strbuf even more.
Add strbuf_read_file().
rerere: Fix use of an empty strbuf.buf
Small cache_tree_write refactor.
Make builtin-rerere use of strbuf nicer and more efficient.
Add strbuf_cmp.
strbuf_setlen(): do not barf on setting length of an empty buffer to 0
sq_quote_argv and add_to_string rework with strbuf's.
Full rework of quote_c_style and write_name_quoted.
Rework unquote_c_style to work on a strbuf.
strbuf API additions and enhancements.
nfv?asprintf are broken without va_copy, workaround them.
Fix the expansion pattern of the pseudo-static path buffer.
builtin-for-each-ref.c::copy_name() - do not overstep the buffer.
builtin-apply.c: fix a tiny leak introduced during xmemdupz() conversion.
Use xmemdupz() in many places.
...
For that purpose, the ->buf is always initialized with a char * buf living
in the strbuf module. It is made a char * so that we can sloppily accept
things that perform: sb->buf[0] = '\0', and because you can't pass "" as an
initializer for ->buf without making gcc unhappy for very good reasons.
strbuf_init/_detach/_grow have been fixed to trust ->alloc and not ->buf
anymore.
As a consequence, strbuf_detach is _mandatory_ to detach a buffer;
copying ->buf isn't an option anymore if ->buf is going to escape from
the scope and eventually be free'd.
API changes:
* strbuf_setlen now always works, so just make strbuf_reset a convenience
macro.
* strbuf_detach takes an optional size_t* argument (meaning it can be
NULL) into which the buffer's len is copied, as it was needed for this
refactor to make the code more readable and to work the way the callers
expect.
Signed-off-by: Pierre Habouzit <madcoder@debian.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Pierre Habouzit <madcoder@debian.org>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* Now, those functions take an "out" strbuf argument, where they store
their result, if any. In that case, they also return 1; otherwise they
return 0.
* those functions support "in place" editing, in the sense that it's OK to
call them this way:
convert_to_git(path, sb->buf, sb->len, sb);
When doable, conversions are done in place for real; otherwise the strbuf
content is just replaced with the new one, transparently for the caller.
If you want to create a new filter working this way, being the accumulation
of filter1, filter2, ... filtern, then your meta_filter would be:
    int meta_filter(..., const char *src, size_t len, struct strbuf *sb)
    {
            int ret = 0;

            ret |= filter1(...., src, len, sb);
            if (ret) {
                    src = sb->buf;
                    len = sb->len;
            }
            ret |= filter2(...., src, len, sb);
            if (ret) {
                    src = sb->buf;
                    len = sb->len;
            }
            ....
            return ret | filtern(..., src, len, sb);
    }
That's why the subfilters that the convert_to_* functions call were also
rewritten to work this way.
Signed-off-by: Pierre Habouzit <madcoder@debian.org>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This brings builtin-stripspace, builtin-tag and mktag to use strbufs.
Signed-off-by: Pierre Habouzit <madcoder@debian.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Under some types of packfile corruption the zlib stream holding the
data for a delta within a packfile may fail to inflate, due to say
a CRC failure within the compressed data itself. When this occurs
the unpack_compressed_entry function will return NULL as a signal to
the caller that the data is not available. Unfortunately we then
tried to use that NULL as though it referenced a memory location
where a delta was stored and tried to apply it to the delta base.
Loading a byte from the NULL address typically causes a SIGSEGV.
cate on #git noticed this failure in `git fsck --full` where the
call to verify_pack() first noticed that the packfile was corrupt
by finding that the packfile's SHA-1 did not match the raw data of
the file. After finding this fsck went ahead and tried to verify
every object within the packfile, even though the packfile was
already known to be bad. If we are going to shovel bad data at
the delta unpacking code, we better handle it correctly.
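The shape of the fix is simply to test the pointer before using it, roughly
like this (a sketch, not the literal hunk):

    delta_data = unpack_compressed_entry(p, w_curs, curpos, delta_size);
    if (!delta_data)
            die("failed to unpack compressed delta at offset %lu from %s",
                (unsigned long)curpos, p->pack_name);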
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Print the index version when an error occurs so the user
knows what type of header (and size) we thought the index
should have had.
Signed-off-by: Luiz Fernando N. Capitulino <lcapitulino@mandriva.com.br>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The new name is closer to the purpose of the function.
A NUL-terminated buffer makes things easier when callers need that.
Since the function returns only the memory written with data,
almost always allocating more space than needed because final
size is unknown, an extra NUL terminating the buffer is harmless.
It is not included in the returned size, so the function
remains working as before.
Also, the function now allows the buffer passed in to be NULL at first,
and alloc_nr() is now used for growing the buffer, instead of doubling
the size.
Signed-off-by: Carlos Rica <jasampler@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* maint:
Document -<n> for git-format-patch
glossary: add 'reflog'
diff --no-index: fix --name-status with added files
Don't smash stack when $GIT_ALTERNATE_OBJECT_DIRECTORIES is too long
There is no restriction on the length of the name returned by
get_object_directory, other than the fact that it must be a stat'able
git object directory. That means its name may have length up to
PATH_MAX-1 (i.e., often 4095) not counting the trailing NUL.
Combine that with the assumption that the concatenation of that name and
suffixes like "/info/alternates" and "/pack/---long-name---.idx" will fit
in a buffer of length PATH_MAX, and you see the problem. Here's a fix:
sha1_file.c (prepare_packed_git_one): Lengthen "path" buffer
so we are guaranteed to be able to append "/pack/" without checking.
Skip any directory entry that is too long to be appended.
(read_info_alternates): Protect against a similar buffer overrun.
Before this change, using the following admittedly contrived environment
setting would cause many git commands to clobber their stack and segfault
on a system with PATH_MAX == 4096:
t=$(perl -e '$s=".git/objects";$n=(4096-6-length($s))/2;print "./"x$n . $s')
export GIT_ALTERNATE_OBJECT_DIRECTORIES=$t
touch g
./git-update-index --add g
If you run the above commands, you'll soon notice that many
git commands now segfault, so you'll want to do this:
unset GIT_ALTERNATE_OBJECT_DIRECTORIES
Signed-off-by: Jim Meyering <jim@meyering.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A pack-file can get created without any objects in it (to transfer "no
data" - which can happen if you use a reference git repo, for example,
or otherwise just end up transferring only branch head information
and already have all the objects themselves).
And while we probably should never create an index for such a pack, if we
do (and we do), the index file size sanity checking was incorrect.
This fixes it.
Reported-and-tested-by: Jocke Tjernlund <tjernlund@tjernlund.se>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This uses "git-apply --whitespace=strip" to fix whitespace errors that have
crept into our source files over time. There are a few files that need
to have trailing whitespaces (most notably, test vectors). The results
still pass the tests, and the build result in the Documentation/ area is
unchanged.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* sp/pack:
Style nit - don't put space after function names
Ensure the pack index is opened before access
Simplify index access condition in count-objects, pack-redundant
Test for recent rev-parse $abbrev_sha1 regression
rev-parse: Identify short sha1 sums correctly.
Attempt to delay prepare_alt_odb during get_sha1
Micro-optimize prepare_alt_odb
Lazily open pack index files on demand
* maint:
git-config: Improve documentation of git-config file handling
git-config: Various small fixes to asciidoc documentation
decode_85(): fix missing return.
fix signed range problems with hex conversions
* maint-1.5.1:
git-config: Improve documentation of git-config file handling
git-config: Various small fixes to asciidoc documentation
decode_85(): fix missing return.
fix signed range problems with hex conversions
Jon Smirl said:
| Once an object reference hits a pack file it is very likely that
| following references will hit the same pack file. So first place to
| look for an object is the same place the previous object was found.
This is indeed a good heuristic so here it is. The search always starts
with the pack where the last object lookup succeeded. If the wanted
object is not available there then the search continues with the normal
pack ordering.
To test this I split the Linux repository into 66 packs and performed a
"time git-rev-list --objects --all > /dev/null". Best results are as
follows:
    Pack Sort               w/o this patch    w/ this patch
    -------------------------------------------------------
    recent objects last          26.4s            20.9s
    recent objects first         24.9s            18.4s
This shows that the pack order based on object age has some influence,
but that the last-used-pack heuristic is even more significant in
reducing object lookup.
Signed-off-by: Nicolas Pitre <nico@cam.org>
---
Note: the
--max-pack-size to git-repack currently produces packs with old objects
after those containing recent objects. The pack sort based on
filesystem timestamp is therefore backward for those. This needs to be
fixed of course, but at least it made me think about this variable for
the test.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Make hexval_table[] "const". Also make sure that the accessor
function hexval() does not access the table with out-of-range
values by declaring its parameter "unsigned char", instead of
"unsigned int".
With this, gcc can just generate:
movzbl (%rdi), %eax
movsbl hexval_table(%rax),%edx
movzbl 1(%rdi), %eax
movsbl hexval_table(%rax),%eax
sall $4, %edx
orl %eax, %edx
for the code to generate a byte from two hex characters.
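The C side that produces that assembly looks roughly like this (a sketch;
the table contents are elided and the hex2byte wrapper name is made up):

    /* -1 for invalid bytes, 0..15 for hex digits (full contents elided here) */
    static const signed char hexval_table[256] = { -1, -1, /* ... */ };

    static inline unsigned int hexval(unsigned char c)
    {
            return hexval_table[c];
    }

    /* two hex characters -> one byte, as in the assembly above */
    static unsigned int hex2byte(const char *s)
    {
            return (hexval(s[0]) << 4) | hexval(s[1]);
    }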
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Our style is to not put a space after a function name. I did here,
and Junio applied the patch with the incorrect formatting. So I'm
cleaning up after myself since I noticed it upon review.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Calling getenv() is not that expensive, but it's also not free,
and it's certainly not cheaper than testing to see if alt_odb_tail
is not null.
Because we are calling prepare_alt_odb() from within find_sha1_file
every time we cannot find an object file locally we want to skip out
of prepare_alt_odb() as early as possible once we have initialized
our alternate list.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
In some repository configurations the user may have many packfiles,
but all of the recent commits/trees/tags/blobs are likely to
be in the most recent packfile (the one with the newest mtime).
It is therefore common to be able to complete an entire operation
by accessing only one packfile, even if there are 25 packfiles
available to the repository.
Rather than opening and mmaping the corresponding .idx file for
every pack found, we now only open and map the .idx when we suspect
there might be an object of interest in there.
Of course we cannot know in advance which packfile contains an
object, so we still need to scan the entire packed_git list to
locate anything. But odds are users want to access objects in the
most recently created packfiles first, and that may be all they
ever need for the current operation.
Junio observed in b867092f that placing recent packfiles before
older ones can slightly improve access times for recent objects,
without degrading it for historical object access.
This change improves upon Junio's observations by trying even harder
to avoid the .idx files that we won't need.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
* np/pack:
deprecate the new loose object header format
make "repack -f" imply "pack-objects --no-reuse-object"
allow for undeltified objects not to be reused
This patch fixes all calls to xread() where the return value is not
stored into an ssize_t. The patch should not have any effect whatsoever,
other than putting better/more appropriate type names on variables.
Signed-off-by: Johan Herland <johan@herland.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Now that we encourage and actively preserve objects in a packed form
more aggressively than we did at the time the new loose object format and
core.legacyheaders were introduced, that extra loose object format
doesn't appear to be worth it anymore.
Because the packing of loose objects has to go through the delta match
loop anyway, and since most of them should end up being deltified in
most cases, there is really little advantage to having this parallel loose
object format, as the CPU savings it might provide is rather lost in the
noise in the end.
This patch gets rid of core.legacyheaders, preserves the legacy format as
the only writable loose object format, and deprecates the other one to
keep things simpler.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
* maint:
Start preparing for 1.5.1.3
Sanitize @to recipients.
git-svn: Ignore usernames in URLs in find_by_url
Document --dry-run and envelope-sender for git-send-email.
Allow users to optionally specify their envelope sender.
Ensure clean addresses are always used with Net::SMTP
Validate @recipients before using it for sendmail and Net::SMTP.
Perform correct quoting of recipient names.
Change the scope of the $cc variable as it is not needed outside of send_message.
Debugging cleanup improvements
Prefix Dry- to the message status to denote dry-runs.
Document --dry-run parameter to send-email.
git-svn: Don't rely on $_ after making a function call
Fix handle leak in write_tree
Actually handle some-low memory conditions
Conflicts:
RelNotes
git-send-email.perl
Tim Ansell discovered his Debian server didn't permit git-daemon to
use as much memory as it needed to handle cloning a project with
a 128 MiB packfile. Filtering the strace provided by Tim of the
rev-list child showed this gem of a sequence:
open("./objects/pack/pack-*.pack", O_RDONLY|O_LARGEFILE <unfinished ...>
<... open resumed> ) = 5
OK, so the packfile is fd 5...
mmap2(NULL, 33554432, PROT_READ, MAP_PRIVATE, 5, 0 <unfinished ...>
<... mmap2 resumed> ) = 0xb5e2d000
and we mapped one 32 MiB window from it at position 0...
mmap2(NULL, 31020635, PROT_READ, MAP_PRIVATE, 5, 0x6000 <unfinished ...>
<... mmap2 resumed> ) = -1 ENOMEM (Cannot allocate memory)
And we asked for another window further into the file. But got
denied. In Tim's case this was due to a resource limit on the
git-daemon process, and its children.
Now where are we in the code? We're down inside use_pack(),
after we have called unuse_one_window() enough times to make sure
we stay within our allowed maximum window size. However since we
didn't unmap the prior window at 0xb5e2d000 we aren't exceeding
the current limit (which probably was just the defaults).
But we're actually down inside xmmap()...
So we release the window we do have (by calling release_pack_memory),
assuming there is some memory pressure...
munmap(0xb5e2d000, 33554432 <unfinished ...>
<... munmap resumed> ) = 0
close(5 <unfinished ...>
<... close resumed> ) = 0
And that was the last window in this packfile. So we closed it.
Way to go us. Our xmmap did not expect release_pack_memory to
close the fd it's about to map...
mmap2(NULL, 31020635, PROT_READ, MAP_PRIVATE, 5, 0x6000 <unfinished ...>
<... mmap2 resumed> ) = -1 EBADF (Bad file descriptor)
And so the Linux kernel happily tells us f' off.
write(2, "fatal: ", 7 <unfinished ...>
<... write resumed> ) = 7
write(2, "Out of memory? mmap failed: Bad "..., 47 <unfinished ...>
<... write resumed> ) = 47
And we report the bad file descriptor error, and not the ENOMEM,
and die, claiming we are out of memory. But actually that mmap
should have succeeded, as we had enough memory for that window,
seeing as how we released the prior one.
Originally when I developed the sliding window mmap feature I had
this exact same bug in fast-import, and I dealt with it by handing
in the struct packed_git* we want to open the new window for, as the
caller wasn't prepared to reopen the packfile if unuse_one_window
closed it. The same is true here from xmmap, but the caller doesn't
have the struct packed_git* handy. So I'm using the file descriptor
instead to perform the same test.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
* 'jc/attr': (28 commits)
lockfile: record the primary process.
convert.c: restructure the attribute checking part.
Fix bogus linked-list management for user defined merge drivers.
Simplify calling of CR/LF conversion routines
Document gitattributes(5)
Update 'crlf' attribute semantics.
Documentation: support manual section (5) - file formats.
Simplify code to find recursive merge driver.
Counto-fix in merge-recursive
Fix funny types used in attribute value representation
Allow low-level driver to specify different behaviour during internal merge.
Custom low-level merge driver: change the configuration scheme.
Allow the default low-level merge driver to be configured.
Custom low-level merge driver support.
Add a demonstration/test of customized merge.
Allow specifying specialized merge-backend per path.
merge-recursive: separate out xdl_merge() interface.
Allow more than true/false to attributes.
Document git-check-attr
Change attribute negation marker from '!' to '-'.
...
* lt/gitlink:
Tests for core subproject support
Expose subprojects as special files to "git diff" machinery
Fix some "git ls-files -o" fallout from gitlinks
Teach "git-read-tree -u" to check out submodules as a directory
Teach git list-objects logic to not follow gitlinks
Fix gitlink index entry filesystem matching
Teach "git-read-tree -u" to check out submodules as a directory
Teach git list-objects logic not to follow gitlinks
Don't show gitlink directories when we want "other" files
Teach git-update-index about gitlinks
Teach directory traversal about subprojects
Fix thinko in subproject entry sorting
Teach core object handling functions about gitlinks
Teach "fsck" not to follow subproject links
Add "S_IFDIRLNK" file mode infrastructure for git links
Add 'resolve_gitlink_ref()' helper function
Avoid overflowing name buffer in deep directory structures
diff-lib: use ce_mode_from_stat() rather than messing with modes manually
... which consists of existing code split out of packed_delta_info()
for other callers to use as well.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This teaches the really fundamental core SHA1 object handling routines
about gitlinks. We can compare trees with gitlinks in them (although we
can not actually generate patches for them yet - just raw git diffs),
and they show up as commits in "git ls-tree".
We also know to compare gitlinks as if they were directories (ie the
normal "sort as trees" rules apply).
[jc: amended a cut&paste error]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
With this patch, packs larger than 4GB are usable, even on a 32-bit machine
(at least on Linux). If off_t is not large enough to deal with a large
pack then die() is called instead of attempting to use the pack and
producing garbage.
This was tested with a 8GB pack specially created for the occasion on
a 32-bit machine.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This patch introduces the MSB() macro to obtain the desired number of
most significant bits from a given variable independently of the variable
type.
It is then used to better implement the overflow test on the OBJ_OFS_DELTA
base offset variable with the property of always working correctly
regardless of the type/size of that variable.
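For illustration, a macro along these lines would do it (this sketch assumes
a gcc-style typeof; the exact definition in the patch may differ):

    /* the 'bits' most significant bits of x, whatever the width of x is */
    #define MSB(x, bits) ((x) & (typeof(x))(~0ULL << (sizeof(x) * 8 - (bits))))

    /* e.g. before shifting a base offset left by 7, make sure nothing is lost: */
    if (MSB(base_offset, 7))
            die("offset overflow");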
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
The coming index format change doesn't allow for the number of objects
to be determined from the size of the index file directly. Instead, let's
initialize a field in the packed_git structure with the object count when
the index is validated since the count is always known at that point.
While at it let's reorder some struct packed_git fields to avoid padding
due to needed 64-bit alignment for some of them.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Let's avoid the open coded pack index reference in pack-object and use
nth_packed_object_sha1() instead. This will help encapsulating index
format differences in one place.
And while at it, there is no reason to copy SHA1s over and over when a
direct pointer into the index will do just fine.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Acked-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
I stumbled across this in the context of the fchmod 0444 patch.
At first, I was going to unlink and call error like the two subsequent
tests do, but a failed write (above) provokes a "die", so I made
this do the same. This is testing for a write failure, after all.
Signed-off-by: Jim Meyering <jim@meyering.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
When some operations are interrupted (or "die()'d" or crashed) then the
partial object/pack/index file may remain around. Make it more obvious
in their name that those files are temporary stuff and can be cleaned up
if no operation is in progress.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
When creating a new object, we use "deflate(stream, Z_FINISH)" in a loop
until it no longer returns Z_OK, and then we do "deflateEnd()" to finish
up business.
That should all work, but the fact is, it's not how you're _supposed_ to
use the zlib return values properly:
- deflate() should never return Z_OK in the first place, except if we
need to increase the output buffer size (which we're not doing, and
should never need to do, since we pre-allocated a buffer that is
supposed to be able to hold the output in full). So the "while()" loop
was incorrect: Z_OK doesn't actually mean "ok, continue", it means "ok,
allocate more memory for me and continue"!
- if we got an error return, we would consider it to be end-of-stream,
but it could be some internal zlib error. In short, we should check
for Z_STREAM_END explicitly, since that's the only valid return value
anyway for the Z_FINISH case.
- we never checked deflateEnd() return codes at all.
Now, admittedly, none of these issues should ever happen, unless there is
some internal bug in zlib. So this patch should make zero difference, but
it seems to be the right thing to do.
We should probably be anal and check the return value of "deflateInit()"
too!
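For illustration, the stricter usage described above looks roughly like
this (a sketch; the surrounding buffer setup and messages are illustrative):

    ret = deflate(&stream, Z_FINISH);
    if (ret != Z_STREAM_END)        /* the only acceptable answer for Z_FINISH here */
            die("unable to deflate new object (%d)", ret);

    ret = deflateEnd(&stream);
    if (ret != Z_OK)
            die("deflateEnd on object failed (%d)", ret);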
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Use hash_sha1_file() instead of duplicating code to compute object SHA1.
While at it make it accept a const pointer.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
The thing is, if the output buffer is empty, we should *still* actually
use the zlib routines to *unpack* that empty output buffer.
But we had a test that said "only unpack if we still expect more output".
So we wouldn't use up all the zlib stream, because we felt that we didn't
need it, because we already had all the bytes we wanted. And it was
"true": we did have all the output data. We just needed to also eat all
the input data!
We've had this bug before - thinking that we don't need to inflate()
anything because we already had it all..
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This provides a smoother degradation in performance when the cache
gets trashed due to the delta_base_cache_limit being reached. Limited
testing with really small delta_base_cache_limit values appears to confirm
this.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Currently there are 3 different ways to deal with the cache size.
Let's stick to only one. The compiler is smart enough to produce the exact
same code in those cases anyway.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
The new configuration variable core.deltaBaseCacheLimit allows the
user to control how much memory they are willing to give to Git for
caching base objects of deltas. This is not normally meant to be
a user tweakable knob; the "out of the box" settings are meant to
be suitable for almost all workloads.
We default to 16 MiB under the assumption that the cache is not
meant to consume all of the user's available memory, and that the
cache's main purpose was to cache trees, for faster path limiters
during revision traversal. Since trees tend to be relatively small
objects, this relatively small limit should still allow a large
number of objects.
On the other hand we don't want the cache to start storing 200
different versions of a 200 MiB blob, as this could easily blow
the entire address space of a 32 bit process.
We evict OBJ_BLOB from the cache first (credit goes to Junio) as
we want to favor OBJ_TREE within the cache. These are the objects
that have the highest inflate() startup penalty, as they tend to
be small and thus don't have that much of a chance to amortize
that penalty over the entire data.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
A malloc() + memcpy() will always be faster than mmap() +
malloc() + inflate(). If the data is already there it is
certainly better to copy it straight away.
With this patch below I can do 'git log drivers/scsi/ >
/dev/null' about 7% faster. I bet it might be even more on
those platforms with bad mmap() support.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This trivial 256-entry delta_base cache improves performance for some
loads by a factor of 2.5 or so.
Instead of always re-generating the delta bases (possibly over and over
and over again), just cache the last few ones. They often can get re-used.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
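A rough standalone sketch of the idea; the slot hash, the copy-on-lookup,
and the overwrite-on-collision policy here are illustrative assumptions,
not the exact code:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>

#define CACHE_SLOTS 256

struct delta_base_cache_entry {
        const void *pack;       /* which packfile the base object lives in */
        off_t offset;           /* offset of that base object in the pack */
        void *data;             /* un-deltified base object data */
        unsigned long size;
};

static struct delta_base_cache_entry cache[CACHE_SLOTS];

static unsigned int cache_slot(const void *pack, off_t offset)
{
        uintptr_t h = (uintptr_t)pack + (uintptr_t)offset;
        return (unsigned int)((h >> 3) % CACHE_SLOTS);
}

/* Return a malloc()ed copy of the cached base, or NULL on a miss. */
static void *cache_lookup(const void *pack, off_t offset, unsigned long *size)
{
        struct delta_base_cache_entry *e = &cache[cache_slot(pack, offset)];
        void *copy;

        if (!e->data || e->pack != pack || e->offset != offset)
                return NULL;
        copy = malloc(e->size);
        if (!copy)
                return NULL;
        memcpy(copy, e->data, e->size);
        *size = e->size;
        return copy;
}

/* Remember a freshly generated base; whatever occupied the slot is evicted. */
static void cache_store(const void *pack, off_t offset,
                        void *data, unsigned long size)
{
        struct delta_base_cache_entry *e = &cache[cache_slot(pack, offset)];

        free(e->data);
        e->pack = pack;
        e->offset = offset;
        e->data = data;
        e->size = size;
}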
This doesn't change any code, it just creates a point for where we'd
actually do the caching of delta bases that have been generated.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Especially with the new index format to come, it is more appropriate
to encapsulate more into check_packed_git_idx() and assume less of the
index format in struct packed_git.
To that effect, the index_base is renamed to index_data with void * type
so it is not used directly but other pointers initialized with it. This
allows for the removal of a couple of pointer casts, as well as providing a better
generic name to grep for when adding support for new index versions or
formats.
And index_data is declared const too while at it.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
When accessing objects, we first look for them in packs that
are linked together in the reverse order of discovery.
Since younger packs tend to contain more recent objects, which
are more likely to be accessed often, and local packs tend to
contain objects more relevant to our specific projects, sort the
list of packs before starting to access them. In addition,
favoring local packs over the ones borrowed from alternates can
be a win when alternates are mounted on network file systems.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Some systems have sizeof(off_t) == 8 while sizeof(size_t) == 4.
This implies that we are able to access and work on files whose
maximum length is around 2^63-1 bytes, but we can only malloc or
mmap somewhat less than 2^32-1 bytes of memory.
On such a system an implicit conversion of off_t to size_t can cause
the size_t to wrap, resulting in unexpected and exciting behavior.
Right now we are working around all gcc warnings generated by the
-Wshorten-64-to-32 option by passing the off_t through xsize_t().
In the future we should make xsize_t on such problematic platforms
detect the wrapping and die if such a file is accessed.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
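One way such a wrap-detecting xsize_t could look, as a sketch (the
error message is made up):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

/* Refuse to narrow an off_t that does not survive the round trip
 * through size_t, instead of silently wrapping. */
static size_t xsize_t(off_t len)
{
        if (len < 0 || (off_t)(size_t)len != len) {
                fprintf(stderr, "fatal: file too large for this platform\n");
                exit(128);
        }
        return (size_t)len;
}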
Not all platforms have declared 'unsigned long' to be a 64 bit value,
but we want to support a 64 bit packfile (or close enough anyway)
in the near future as some projects are getting large enough that
their packed size exceeds 4 GiB.
By using off_t, the POSIX type that is declared to mean an offset
within a file, we support whatever maximum file size the underlying
operating system will handle. For most modern systems this is up
around 2^60 or higher.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
As we permit up to 2^32-1 objects in a single packfile we cannot
use a signed int to represent the object offset within a packfile,
after 2^31-1 objects we will start seeing negative indexes and
error out or compute bad addresses within the mmap'd index.
This is a minor cleanup that does not introduce any significant
logic changes. It is roach free.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
We shouldn't attempt to assign constant strings into char*, as the
string is not writable at runtime. Likewise we should always be
treating unsigned values as unsigned values, not as signed values.
Most of these are very straightforward. The only exception is the
(unnecessary) xstrdup/free in builtin-branch.c for the detached
head case. Since this is a user-level interactive type program
and that particular code path is executed no more than once, I feel
that the extra xstrdup call is well worth the easy elimination of
this warning.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
If an index is corrupt, or is simply too new for us to understand,
we were leaking the mmap that held the entire content of the index.
This could be a considerable size on large projects, given that
the index is at least 24 bytes * nr_objects.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Because we are currently cheating and never supplying the delta base
for an OBJ_OFS_DELTA we get a random SHA-1 in the delta base field.
Instead let's clear the hash out so it's at least all 0's. This makes it
somewhat more obvious that something fishy is going on, like we
don't actually have the SHA-1 of the base handy. :)
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
We did not detect broken loose object files, neither when the
underlying inflate() signalled the breakage, nor when inflate()
finished and we had garbage trailing at the end. We do better
now.
We also make unpack_sha1_file() a static function to
sha1_file.c, since it is not used by anybody outside.
Signed-off-by: Junio C Hamano <junkio@cox.net>
We currently have two parallel notations for dealing with object types
in the code: a string and a numerical value. One of them is obviously
redundant, and the most used one requires more stack space and a bunch
of strcmp() calls all over the place.
This is an initial step for the removal of the version using a char array
found in object reading code paths. The patch is unfortunately large but
there is no sane way to split it in smaller parts without breaking the
system.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Sometimes typename() is used, sometimes type_names[] is accessed directly.
Let's enforce typename() all the time which allows for validating the
type.
Also let's add a function to go from a name to a type and use it instead
of manual memcpy() when appropriate.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
First there are too many offsets there and it is getting confusing.
So 'offset' is now 'curpos' to distinguish from other offsets like
'obj_offset'.
Then structures like x = foo(x, &y) are now done as y = foo(&x).
It looks more natural that the result y be returned directly and
x be passed as reference to be updated in place. This has the effect
of reducing some line lengths and removing a few lines, needing a bit less
stack space, and it even reduces the compiled code size.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Let's have hdr be a simple char pointer/array when possible, and let's
reduce its storage to 32 bytes. Especially for sha1_loose_object_info()
where 128 bytes is way excessive and wastes extra CPU cycles inflating.
The object type is already restricted to 10 bytes in parse_sha1_header()
and the size, even if it is 64 bits, will fit in 20 decimal digits. So
32 bytes is plenty.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
* lt/crlf:
Teach core.autocrlf to 'git apply'
t0020: add test for auto-crlf
Make AutoCRLF ternary variable.
Lazy man's auto-CRLF
* jc/apply-config:
t4119: test autocomputing -p<n> for traditional diff input.
git-apply: guess correct -p<n> value for non-git patches.
git-apply: notice "diff --git" patch again
Fix botched "leak fix"
t4119: add test for traditional patch and different p_value
apply: fix memory leak in prefix_one()
git-apply: require -p<n> when working in a subdirectory.
git-apply: do not lose cwd when run from a subdirectory.
Teach 'git apply' to look at $HOME/.gitconfig even outside of a repository
Teach 'git apply' to look at $GIT_DIR/config
This ensures that a given area is mapped at most twice, and greatly
reduces the virtual address space usage.
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
Acked-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
It currently does NOT know about file attributes, so it does its
conversion purely based on content. Maybe that is more in the "git
philosophy" anyway, since content is king, but I think we should try to do
the file attributes to turn it off on demand.
Anyway, BY DEFAULT it is off regardless, because it requires a
[core]
AutoCRLF = true
in your config file to be enabled. We could make that the default for
Windows, of course, the same way we do some other things (filemode etc).
But you can actually enable it on UNIX, and it will cause:
- "git update-index" will write blobs without CRLF
- "git diff" will diff working tree files without CRLF
- "git checkout" will write files to the working tree _with_ CRLF
and things work fine.
Funnily, it actually shows an odd file in git itself:
git clone -n git test-crlf
cd test-crlf
git config core.autocrlf true
git checkout
git diff
shows a diff for "Documentation/docbook-xsl.css". Why? Because we have
actually checked in that file *with* CRLF! So when "core.autocrlf" is
true, we'll always generate a *different* hash for it in the index,
because the index hash will be for the content _without_ CRLF.
Is this complete? I dunno. It seems to work for me. It doesn't use the
filename at all right now, and that's probably a deficiency (we could
certainly make the "is_binary()" heuristics also take standard filename
heuristics into account).
I don't pass in the filename at all for the "index_fd()" case
(git-update-index), so that would need to be passed around, but this
actually works fine.
NOTE NOTE NOTE! The "is_binary()" heuristics are totally made-up by yours
truly. I will not guarantee that they work at all reasonably. Caveat
emptor. But it _is_ simple, and it _is_ safe, since it's all off by
default.
The patch is pretty simple - the biggest part is the new "convert.c" file,
but even that is really just basic stuff that anybody can write in
"Teaching C 101" as a final project for their first class in programming.
Not to say that it's bug-free, of course - but at least we're not talking
about rocket surgery here.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
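For reference, a content-only check of the kind being talked about can
be as simple as the following sketch; the exact cutoffs here are made
up and are not what git uses:

#include <stddef.h>

static int is_binary(const unsigned char *data, size_t size)
{
        size_t i, suspicious = 0;

        for (i = 0; i < size; i++) {
                unsigned char c = data[i];
                if (c == '\0')
                        return 1;       /* a NUL byte almost certainly means binary */
                if (c < 32 && c != '\t' && c != '\n' && c != '\r' && c != '\f')
                        suspicious++;
        }
        /* call it binary when more than ~1% of the bytes look like junk */
        return size > 0 && suspicious * 100 > size;
}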
Here's a patch that I think we can merge right now. There may be
other places that need this, but this at least points out the
three places that read/write working tree files for git
update-index, checkout and diff respectively. That should cover
a lot of it [jc: git-apply uses an entirely different codepath
both for reading and writing].
Some day we can actually implement it. In the meantime, this
points out a place for people to start. We *can* even start with
a really simple "we do CRLF conversion automatically, regardless
of filename" kind of approach, that just look at the data (all
three cases have the _full_ file data already in memory) and
says "ok, this is text, so let's convert to/from DOS format
directly".
THAT somebody can write in ten minutes, and it would already
make git much nicer on a DOS/Windows platform, I suspect.
And it would be totally zero-cost if you just make it a config
option (but please make it dynamic with the _default_ just being
0/1 depending on whether it's UNIX/Windows, just so that UNIX
people can _test_ it easily).
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
The new interface allows an application to temporarily hash a
small number of objects and pretend that they are available in
the object store without actually writing them.
Signed-off-by: Junio C Hamano <junkio@cox.net>
If open_packed_git failed it may have been because the packfile
actually exists and is readable, but some sort of verification
did not pass. In this case open_packed_git left pack_fd filled
in, as the file descriptor is valid. We don't want to leak the
file descriptor, nor do we want to allow someone in the future
to use this packed_git.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Matthias Lederhofer identified a race condition where a Git reader
process was able to locate an object in a packed_git index, but
was then preempted while a `git repack -a -d` ran and completed.
By the time the reader was able to seek in the packfile to get the
object data, the packfile no longer existed on disk.
In this particular case the reader process did not attempt to
open the packfile before it was deleted, so it did not already
have the pack_fd field populated. With the packfile itself gone,
there was no way for the reader to open it and fetch the data.
I'm fixing the race condition by teaching find_pack_entry to ignore
a packed_git whose packfile is not currently open and which cannot
be opened. If none of the currently known packs can supply the
object, we will return 0 and the caller will decide the object is
not available. If this is the first attempt at finding an object,
the caller will reprepare_packed_git and try again. If it was
the second attempt, the caller will typically return NULL back,
and an error message about a missing object will be reported.
This patch does not address the situation of a reader which is
being starved out by a tight sequence of `git repack -a -d` runs.
In this particular case the reader will try twice, probably fail
both times, and declare the object in question cannot be found.
As it is highly unlikely that a real-world `git repack -a -d` can
complete faster than a reader can open a packfile, I don't think
this is a huge concern.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Because I want to reuse open_packed_git in a context where I don't
want the process to die if the packfile in question is bogus, I'm
changing its behavior to return error("...") rather than die("...")
when it detects something is wrong with the packfile it was given.
Right now we still must die out of use_pack should open_packed_git
fail, as none of use_pack's callers are prepared to handle a failure
from that function.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
After staring at the comment and the associated for loop, I
realized the comment was completely bogus. The section of
code it's talking about is trying to avoid duplicate mapping
of the same packfile.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
There is little point in having the linked list insertion code
appearing in install_packed_git, and then again just 30 lines
further down in the same file.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
We used to call find_pack_entry() twice from read_sha1_file() in order
to avoid printing an error message, when the object did not exist. This
is fixed by moving the call to error() to the only place it really
could be called.
Signed-off-by: Peter Eriksen <s022018@student.dtu.dk>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Way back when Junio developed the 64 bit index topic he came up
with a means of changing the .idx file format so that older Git
clients would recognize that they don't understand the file and
refuse to read it, while newer clients could tell the difference
between the old-style and new-style .idx files. Unfortunately
this wasn't recorded anywhere.
This change documents how we might go about changing the .idx
file format by using a special signature in the first four bytes.
Credit (and possible blame) goes completely to Junio for thinking
up this technique.
The change also modifies the error message of the current Git code
so that users get a recommendation to upgrade their Git software
should this version or later encounter a new-style .idx which it
cannot process. We already do this for the .pack files, but since
we usually process the .idx files first, it's important that these
files are recognized and encourage an upgrade.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Originally I introduced read_or_die for the purpose of reading
the pack header and trailer, and I was too lazy to print proper
error messages.
Linus Torvalds <torvalds@osdl.org>:
> For a read error, at the very least you have to say WHICH FILE
> couldn't be read, because it's usually a matter of some file just
> being too short, not some system-wide problem.
and of course Linus is right. Make it so.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
With the new-and-improved write_in_full() semantics, where a partial write
simply always returns a real error (and always sets 'errno' when that
happens, including for the disk full case), a lot of the callers of
write_in_full() were just unnecessarily complex.
In particular, there's no reason to ever check for a zero length or
return: if the length was zero, we'll return zero, otherwise, if a disk
full resulted in the actual write() system call returning zero the
write_in_full() logic would have correctly turned that into a negative
return value, with 'errno' set to ENOSPC.
I really wish every "write_in_full()" user would just check against "<0"
now, but this fixes the nasty and stupid ones.
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Unfortunately, while {read,write}_in_full do take into account
zero-sized reads/writes, their die and whine variants do not.
I have a repository where there are zero-sized files in
the history that was triggering these things.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
We have a number of badly checked write() calls. Often we are
expecting write() to write exactly the size we requested or fail,
but this fails to handle interrupts or short writes. Switch to using
the new write_in_full(). Otherwise we at a minimum need to check
for EINTR and EAGAIN; where this is appropriate, use xwrite().
Note, the changes to config handling are much larger and handled
in the next patch in the sequence.
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
We have a number of badly checked read() calls. Often we are
expecting read() to read exactly the size we requested or fail, but this
fails to handle interrupts or short reads. Add a read_in_full()
providing those semantics. Otherwise we at a minimum need to check
for EINTR and EAGAIN; where this is appropriate, use xread().
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
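A sketch of the intended semantics (loosely; the real helper is built
on top of xread(), which hides the EINTR/EAGAIN retry):

#include <errno.h>
#include <unistd.h>

/* Read exactly `count` bytes unless EOF or a real error intervenes;
 * a short return means early EOF, -1 means a read error. */
static ssize_t read_in_full(int fd, void *buf, size_t count)
{
        char *p = buf;
        ssize_t total = 0;

        while (count > 0) {
                ssize_t loaded = read(fd, p, count);
                if (loaded < 0) {
                        if (errno == EINTR || errno == EAGAIN)
                                continue;       /* retry interrupted reads */
                        return -1;
                }
                if (loaded == 0)
                        break;                  /* EOF */
                count -= loaded;
                p += loaded;
                total += loaded;
        }
        return total;
}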
I do not have any proof that this matters to any existing
problems I am seeing, but I do not think of any reason not to do
this.
Signed-off-by: Junio C Hamano <junkio@cox.net>
In some cases we did not even bother to check the return value of
mmap() and just assumed it worked. This is bad, because if we are
out of virtual address space the kernel returned MAP_FAILED and we
would attempt to dereference that address, segfaulting without any
real error output to the user.
We are replacing all calls to mmap() with xmmap() and moving all
MAP_FAILED checking into that single location. If a mmap call
fails we try to release enough least-recently-used pack windows
to possibly succeed, then retry the mmap() attempt. If we cannot
mmap even after releasing pack memory then we die() as none of our
callers have any reasonable recovery strategy for a failed mmap.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
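Schematically, as a sketch only; release_pack_windows_lru() stands in
for the real window-eviction code and is stubbed out here so the
example is self-contained:

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/types.h>

/* Stand-in for the real eviction code: unmap least-recently-used pack
 * windows and return nonzero if any memory was actually released. */
static int release_pack_windows_lru(void)
{
        return 0;
}

static void *xmmap(void *start, size_t length, int prot, int flags,
                   int fd, off_t offset)
{
        void *ret = mmap(start, length, prot, flags, fd, offset);

        if (ret == MAP_FAILED) {
                /* give back some address space, then retry exactly once */
                if (release_pack_windows_lru())
                        ret = mmap(start, length, prot, flags, fd, offset);
                if (ret == MAP_FAILED) {
                        fprintf(stderr, "fatal: out of memory? mmap failed\n");
                        exit(128);
                }
        }
        return ret;
}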
If we are about to fail because this process has run out of memory we
should first try to automatically control our appetite for address
space by releasing enough least-recently-used pack windows to gain
back enough memory such that we might actually be able to meet the
current allocation request.
This should help users who have fairly large repositories but are
working on systems with relatively small virtual address space.
Many times we see reports on the mailing list of these users running
out of memory during various Git operations. Dynamically decreasing
the amount of pack memory used when the demand for heap memory is
increasing is an intelligent solution to this problem.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Much like the alloc_report() function can be useful to report on
object allocation statistics while debugging, the new pack_report()
function can be useful to report on the behavior of the mmap window
code used for packfile access.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
If a command opens a packfile for only temporary access and does not
install the struct packed_git* into the global packed_git list then
we are unable to unmap any inactive windows within that packed_git,
causing the overall process to exceed core.packedGitLimit.
We cannot force the callers to install their temporary packfile
into the packed_git chain as doing so would allow that (possibly
corrupt but currently being verified) temporary packfile to become
part of the local ODB, which may allow it to be considered for
object resolution when it may not actually be a valid packfile.
So to support unmapping the windows of these temporary packfiles we
also scan the windows of the struct packed_git which was supplied
to use_pack(). Since commands only work with one temporary packfile
at a time scanning the one supplied to use_pack() and all packs
installed into packed_git should cover everything available in
memory.
We also have to be careful to not close the file descriptor of
the packed_git which was handed to use_pack() when all of that
packfile's windows have been unmapped, as we are already past the
open call that would open the packfile and need the file descriptor
to be ready for mmap() after unuse_one_window returns.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
If we are unable to mmap a region of the packfile with the mmap()
system call there may be a good reason why, such as a closed file
descriptor or out of address space. Reporting the system level
error message can help to debug such problems.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This finally turns on the sliding window behavior for packfile data
access by mapping limited size windows and chaining them under the
packed_git->windows list.
We consider a given byte offset to be within the window only if there
would be at least 20 bytes (one hash worth of data) accessible after
the requested offset. This range selection relates to the contract
that use_pack() makes with its callers, allowing them to access
one hash or one object header without needing to call use_pack()
for every byte of data obtained.
In the worst case scenario we will map the same page of data twice
into memory: once at the end of one window and once again at the
start of the next window. This duplicate page mapping will happen
only when an object header or a delta base reference is spanned
over the end of a window and is always limited to just one page of
duplication, as no sane operating system will ever have a page size
smaller than a hash.
I am assuming that the possible wasted page of virtual address
space is going to perform faster than the alternatives, which
would be to copy the object header or ref delta into a temporary
buffer prior to parsing, or to check the window range on every byte
during header parsing. We may decide to revisit this decision in
the future since this is just a gut instinct decision and has not
actually been proven out by experimental testing.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
To support multiple windows per packfile we need to unmap only one
window at a time from that packfile, leaving any other windows in
place and available for reference.
We treat all windows from all packfiles equally; the least recently
used, not-in-use window across all packfiles will always be closed
first.
If we have unmapped all windows in a packfile then we can also close
the packfile's file descriptor as it's possible we won't need to map
any window from that file in the near future. This decision about
when to close the pack file descriptor may need to be revisited in
the future after additional testing on several different platforms
can be performed.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
When we parse the object header or the delta base reference we
don't bother to loop over use_pack() calls. The reason we don't
need to bother with calling use_pack for each byte accessed is that
use_pack will always promise us at least 20 bytes (really the hash
size) after the offset. This promise from use_pack simplifies a
lot of code in the header parsing logic, as well as helping out the
zlib library by ensuring there's always some data for it to consume
during an inflate call.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
When multiple mmaps start getting used for all pack file access it
is not possible to get all data associated with a specific object
in one contiguous memory region. This limitation prevents simply
passing a single address and length to SHA1_Update or to inflate.
Instead we need to loop until we have processed all data of interest.
As we loop over the data we are always interested in reusing the same
window 'cursor', as the prior window will no longer be of any use
to us. This allows the use_pack() call to automatically decrement
the use count of the prior window before setting up access for us
to the next window.
Within each loop we need to make use of the available length output
parameter of use_pack() to tell us how many bytes are available in
the current memory region, as we cannot tell otherwise.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Part of the implementation concept of the sliding mmap window for
pack access is to permit multiple windows per pack to be mapped
independently. Since the inuse_cnt is associated with the mmap and
not with the file, this value is in struct pack_window and needs to
be incremented/decremented for each pack_window accessed by any code.
To facilitate that implementation we need to replace all uses of
use_packed_git() and unuse_packed_git() with a different API that
follows struct pack_window objects rather than struct packed_git.
The way this works is when we need to start accessing a pack for
the first time we should setup a new window 'cursor' by declaring
a local and setting it to NULL:
struct pack_window *w_curs = NULL;
To obtain the memory region which contains a specific section of
the pack file we invoke use_pack(), supplying the address of our
current window cursor:
unsigned int len;
unsigned char *addr = use_pack(p, &w_curs, offset, &len);
the returned address `addr` will be the first byte at `offset`
within the pack file. The optional variable len will also be
updated with the number of bytes remaining following the address.
Multiple calls to use_pack() with the same window cursor will
update the window cursor, moving it from one window to another
when necessary. In this way each window cursor variable maintains
only one struct pack_window inuse at a time.
Finally before exiting the scope which originally declared the window
cursor we must invoke unuse_pack() to unuse the current window (which
may be different from the one that was first obtained from use_pack):
unuse_pack(&w_curs);
This implementation is still not complete with regards to multiple
windows, as only one window per pack file is supported right now.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
To efficiently support mmaping of multiple regions of the same pack
file we want to keep the pack's file descriptor open while we are
actively working with that pack. So we are now keeping that file
descriptor in packed_git.pack_fd and closing it only after we unmap
the last window.
This is going to increase the number of file descriptors that are
in use at once, however that will be bounded by the total number of
pack files present and therefore should not be very high. It is
a small tradeoff which we may need to revisit after some testing
can be done on various repositories and systems.
For code clarity we also want to separate out the implementation
of how we open a pack file from the implementation which locates
a suitable window (or makes a new one) from the given pack file.
Since this is a rather large delta I'm taking advantage of doing
it now, in a fairly isolated change.
When we open a pack file we need to examine the header and trailer
without having a mmap in place, as we may only need to mmap
the middle section of this particular pack. Consequently the
verification code has been refactored to make use of the new
read_or_die function.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
The idea behind the sliding mmap window pack reader implementation
is to have multiple mmap regions active against the same pack file,
thereby allowing the process to mmap in only the active/hot sections
of the pack and reduce overall virtual address space usage.
To implement this we need to refactor the mmap related data
(pack_base, pack_use_cnt) out of struct packed_git and move them
into a new struct pack_window.
We are refactoring the code to support a single struct pack_window
per packfile, thereby emulating the prior behavior of mmap'ing the
entire pack file.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Rather than hardcoding the maximum number of bytes which can be
mmapped from pack files we should make this value configurable,
allowing the end user to increase or decrease this limit on a
per-repository basis depending on the size of the repository
and the capabilities of their operating system.
In general users should not need to manually tune such a low-level
setting within the core code, but being able to artificially limit
the number of bytes which we can mmap at once from pack files will
make it easier to craft test cases for the new mmap sliding window
implementation.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
The unpack_entry_gently function currently has only two callers:
the delta base resolution in sha1_file.c and the main loop of
pack-check.c. Both of these must change to using unpack_entry
directly when we implement sliding window mmap logic, so I'm doing
it earlier to help break down the change set.
This may cause a slight performance decrease for delta base
resolution as well as for pack-check.c's verify_packfile(), as
the pack use counter will be incremented and decremented for every
object that is unpacked.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
If ever new object types are added for future extensions then it is
better to have the current git version report them as "unknown" instead of
"corrupted".
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
We used to try loose objects first with sha1_object_info(), but packed
objects first with read_sha1_file(). Now, prefer packed objects over loose
ones with sha1_object_info(), too.
Usually the old behaviour would pose no problem, but when you tried to fix
a fscked up repository by inserting a known-good pack,
git cat-file $(git cat-file -t <sha1>) <sha1>
could fail, even when
git cat-file blob <sha1>
would _not_ fail. Worse, a repack would fail, too.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Currently the error reported e.g. when pushing to a read-only repository
is quite confusing; this attempts to clean it up, unifies error reporting
between the various object writers, and uses error() in a couple more
places.
Signed-off-by: Petr Baudis <pasky@suse.cz>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Since keeping a pushed pack or exploding it into loose objects
should be a local repository decision this teaches receive-pack
to decide if it should call unpack-objects or index-pack --stdin
--fix-thin based on the setting of receive.unpackLimit and the
number of objects contained in the received pack.
If the number of objects (hdr_entries) in the received pack is
below the value of receive.unpackLimit (which is 5000 by default)
then we unpack-objects as we have in the past.
If the hdr_entries >= receive.unpackLimit then we call index-pack and
ask it to include our pid and hostname in the .keep file to make it
easier to identify why a given pack has been kept in the repository.
Currently this leaves every received pack as a kept pack. We really
don't want that as received packs will tend to be small. Instead we
want to delete the .keep file automatically after all refs have
been updated. That is being left as room for future improvement.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
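The decision itself is tiny; schematically, with the command strings
standing in for the actual invocation of unpack-objects or index-pack
(a sketch, not the real code):

/* hdr_entries comes from the incoming pack's header; unpack_limit comes
 * from the receive.unpackLimit configuration (5000 by default). */
static const char *choose_pack_handler(unsigned long hdr_entries,
                                       unsigned long unpack_limit)
{
        if (hdr_entries < unpack_limit)
                return "unpack-objects";                /* explode into loose objects */
        return "index-pack --stdin --fix-thin";         /* keep the pack, write a .keep */
}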
* master: (90 commits)
gitweb: Better support for non-CSS aware web browsers
gitweb: Output also empty patches in "commitdiff" view
gitweb: Use git-for-each-ref to generate list of heads and/or tags
for-each-ref: "creator" and "creatordate" fields
Add --global option to git-repo-config.
pack-refs: Store the full name of the ref even when packing only tags.
git-clone documentation didn't mention --origin as equivalent of -o
Minor grammar fixes for git-diff-index.txt
link_temp_to_file: call adjust_shared_perm() only when we created the directory
Remove uneccessarily similar printf() from print_ref_list() in builtin-branch
pack-objects doesn't create random pack names
branch: work in subdirectories.
gitweb: Use 's' regexp modifier to secure against filenames with LF
gitweb: Secure against commit-ish/tree-ish with the same name as path
gitweb: esc_html() author in blame
git-svnimport: support for partial imports
link_temp_to_file: don't leave the path truncated on adjust_shared_perm failure
Move deny_non_fast_forwards handling completely into receive-pack.
revision traversal: --unpacked does not limit commit list anymore.
Continue traversal when rev-list --unpacked finds a packed commit.
...
* maint:
git-clone documentation didn't mention --origin as equivalent of -o
Minor grammar fixes for git-diff-index.txt
link_temp_to_file: call adjust_shared_perm() only when we created the directory
This allows us to pass just the file name of a pack rather than
the complete path when we want pack-objects to consider its
contents as though they were loose objects. This can be helpful
if $GIT_OBJECT_DIRECTORY contains shell metacharacters which make
it cumbersome to pass complete paths safely in a shell script.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
* np/pack:
add the capability for index-pack to read from a stream
index-pack: compare only the first 20-bytes of the key.
git-repack: repo.usedeltabaseoffset
pack-objects: document --delta-base-offset option
allow delta data reuse even if base object is a preferred base
zap a debug remnant
let the GIT native protocol use offsets to delta base when possible
make pack data reuse compatible with both delta types
make git-pack-objects able to create deltas with offset to base
teach git-index-pack about deltas with offset to base
teach git-unpack-objects about deltas with offset to base
introduce delta objects with offset to base
Supposing that the base and result sizes were both full-size 64-bit
values, their encoding would occupy only 9.2 bytes each. Therefore
inflating 64 bytes is way overkill. Limit it to 20 bytes instead, which
should be plenty for a couple of years to come.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Move file name generation from write_sha1_file_prepare() to the one
caller that cares and make it a void function.
Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Most callers of write_sha1_file_prepare() are only interested in the
resulting hash but don't care about the returned file name or the header.
This patch adds a simple wrapper named hash_sha1_file() which does just
that, and converts potential callers.
Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This is the missing part for git-pack-objects, allowing it to reuse delta
data to/from either of the two delta types. It can reuse delta data from either
type, and it outputs base offsets when --allow-delta-base-offset is
provided and the base is also included in the pack. Otherwise it
outputs base sha1 references just like it always did.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This adds a new object, namely OBJ_OFS_DELTA, renames OBJ_DELTA to
OBJ_REF_DELTA to better make the distinction between those two delta
objects, and adds support for the handling of those new delta objects
in sha1_file.c only.
The OBJ_OFS_DELTA contains a relative offset from the delta object's
position in a pack instead of the 20-byte SHA1 reference to identify
the base object. Since the base is likely to be not so far away, the
relative offset is more likely to have a smaller encoding on average
than an absolute offset. And for those delta objects the base must
always be stored first because there is no way to know the distance of
later objects when streaming a pack. Hence this relative offset is
always meant to be negative.
The offset encoding is slightly denser than the one used for object
size -- credits to <linux@horizon.com> (whoever this is) for bringing
it to my attention.
This allows for pack size reduction between 3.2% (Linux-2.6) to over 5%
(linux-historic). Runtime pack access should be faster too since delta
replay does skip a search in the pack index for each delta in a chain.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Those cleanups are mainly to set the table for the support of deltas
with base objects referenced by offsets instead of sha1. This means
that many pack lookup functions are converted to take a pack/offset
tuple instead of a sha1.
This eliminates many struct pack_entry usages since this structure
carried redundant information in many cases, and it increased stack
footprint needlessly for a couple recursively called functions that used
to declare a local copy of it for every recursion loop.
In the process, packed_object_info_detail() has been reorganized as well
so as to look much saner and more amenable to deltas with offset support.
Finally the appropriate adjustments have been made to functions that
depend on the above changes. But there are no functionality changes yet,
simply some code refactoring at this point.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
builtin-mailinfo.c has its own hexval implementation but it can
share the table-lookup one recently implemented in sha1_file.c.
Signed-off-by: Junio C Hamano <junkio@cox.net>
* jc/pack:
pack-objects: document --revs, --unpacked and --all.
pack-objects --unpacked=<existing pack> option.
pack-objects: further work on internal rev-list logic.
pack-objects: run rev-list equivalent internally.
Separate object listing routines out of rev-list
The function appeared high on a gprof output for a rev-list run of
a non-trivial size, and it was an obvious low-hanging fruit.
The code is from Linus.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Incremental repack without -a essentially boils down to:
rev-list --objects --unpacked --all |
pack-objects $new_pack
which picks up all loose objects that are still live and creates
a new pack.
This implements --unpacked=<existing pack> option to tell the
revision walking machinery to pretend as if objects in such a
pack are unpacked for the purpose of object listing. With this,
we could say:
rev-list --objects --unpacked=$active_pack --all |
pack-objects $new_pack
instead, to mean "all live loose objects but pretend as if
objects that are in this pack are also unpacked". The newly
created pack would be perfect for updating $active_pack by
replacing it.
Since pack-objects now knows how to do the rev-list's work
itself internally, you can also write the above example by:
pack-objects --unpacked=$active_pack --all $new_pack </dev/null
Signed-off-by: Junio C Hamano <junkio@cox.net>
When copying from an existing pack and when copying from a loose
object with new style header, the code makes sure that the piece
we are going to copy out inflates well and inflate() consumes
the data in full while doing so.
The check to see if the xdelta really apply is quite expensive
as you described, because you would need to have the image of
the base object which can be represented as a delta against
something else.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Like xmalloc and xrealloc xstrdup dies with a useful message if
the native strdup() implementation returns NULL rather than a
valid pointer.
I just tried to use xstrdup in new code and found it to be missing.
However I expected it to be present as xmalloc and xrealloc are
already commonly used throughout the code.
[jc: removed the part that deals with last_XXX, which I am
finding more and more dubious these days.]
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
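For completeness, the helper amounts to little more than this sketch
(the real one reports the error through the same die() used by the
other x*() wrappers):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char *xstrdup(const char *str)
{
        char *ret = strdup(str);
        if (!ret) {
                fprintf(stderr, "fatal: out of memory, strdup failed\n");
                exit(128);
        }
        return ret;
}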
Also while we are at it, remove redundant typename[] array from
unpack_sha1_header. The only reason it is different from the
type_names[] array in object.c module is that this code cares
about the subset of object types that are valid in a loose
object, so prepare a separate array of boolean that tells us
which types are valid, and share the name translation with the
others.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Change places that use realloc, without a proper error path, to instead use
xrealloc. Drop an erroneous error path in the daemon code that used errno
in the die message in favour of the simpler xrealloc.
Signed-off-by: Jonas Fonseca <fonseca@diku.dk>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Change unpack_entry_gently and its helper functions to use offsets
rather than addresses and left counts to supply pack position
information. In most cases this makes the code easier to follow,
and it reduces the number of local variables in a few functions.
It also better prepares this code for mapping partial segments of
packs and altering what regions of a pack are mapped while unpacking
an entry.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
If we're always incrementing both the offset and the pointer we
aren't gaining anything by keeping both. Instead just use the
offset since that's what we were given and what we are expected
to return. Also using offset is likely to make it easier to remap
the pack in the future should partial mapping of very large packs
get implemented.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
[PATCH 3/5] Cleanup unpack_entry_gently and friends to use type_name array.
This change allows combining all of the non-delta entries into a
single case, as well as to remove an unnecessary local variable
in unpack_entry_gently.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
[PATCH 2/5] Reuse compression code in unpack_compressed_entry.
This cleans up the code by reusing a perfectly good decompression
implementation at the expense of 1 extra byte of memory allocated in
temporary memory while the delta is being decompressed and applied
to the base.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This function was moved above unpack_delta_entry so we can call it
from within unpack_delta_entry without a forward declaration.
This change looks worse than it is. It's really just a relocation
of unpack_non_delta_entry to earlier in the file and renaming the
function to unpack_compressed_entry. No other changes were made.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This abstracts away the size of the hash values when copying them
from memory location to memory location, much as the introduction
of hashcmp abstracted away hash value comparison.
A few call sites were using char* rather than unsigned char*, so
I added the cast rather than opening hashcpy up to take void*. This is a
reasonable tradeoff as most call sites already use unsigned char*
and the existing hashcmp is also declared to be unsigned char*.
[jc: Split the patch into the "master" part, to be followed by a
patch for merge-recursive.c which is not in "master" yet.
Fixed the cast in the latter hunk to combine-diff.c which was
wrong in the original.
Also converted ones left-over in combine-diff.c, diff-lib.c and
upload-pack.c ]
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
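The helper itself is trivial; roughly, as a sketch with the SHA-1
length written as a literal 20:

#include <string.h>

static inline void hashcpy(unsigned char *sha_dst, const unsigned char *sha_src)
{
        memcpy(sha_dst, sha_src, 20);
}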
This declaration probably used to be necessary but the code has
been refactored since to use unpack_entry_gently instead.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
If the pack format were to ever change or be extended in the future
there is no assurance that just because the pack file lives in
objects/pack and doesn't end in .idx that we can read and decompress
its contents properly.
If we encounter what we think is a pack file and it isn't or we don't
recognize its version then die and suggest to the user that they
upgrade to a newer version of GIT which can handle that pack file.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Introduces global inline:
hashcmp(const unsigned char *sha1, const unsigned char *sha2)
Uses memcmp for comparison and returns the result based on the length of
the hash name (a future runtime decision).
Acked-by: Alex Riesen <raa.lkml@gmail.com>
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
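In other words, roughly the following sketch; the hash length is
hard-coded to 20 here, the point being that it lives in exactly one
place:

#include <string.h>

static inline int hashcmp(const unsigned char *sha1, const unsigned char *sha2)
{
        return memcmp(sha1, sha2, 20);
}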
[jc: I needed to hand merge the changes to the updated codebase,
so the result needs to be checked.]
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
As Fredrik points out the current interface of has_extension() is
potentially confusing. Its parameters include both a nul-terminated
string and a length-limited string.
This patch drops the length argument, requiring two nul-terminated
strings; all callsites are updated. I checked that all of them indeed
provide nul-terminated strings. Filenames need to be nul-terminated
anyway if they are to be passed to open() etc. The performance penalty
of the additional strlen() is negligible compared to the system calls
which inevitably surround has_extension() calls.
Additionally, change has_extension() to use size_t inside instead of
int, as that is the exact type strlen() returns and memcmp() expects.
Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <junkio@cox.net>
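After the change the helper looks roughly like this sketch, following
the description above:

#include <string.h>

static int has_extension(const char *filename, const char *ext)
{
        size_t len = strlen(filename);
        size_t extlen = strlen(ext);

        /* the length comparison doubles as the underrun guard */
        return len > extlen && !memcmp(filename + len - extlen, ext, extlen);
}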
The little helper has_extension() documents through its name what we are
trying to do and makes sure we don't forget the underrun check.
Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This exposes map_sha1_file() interface to mmap a loose object file,
and legacy_loose_object() function, split from unpack_sha1_header().
They will be used in the next patch to reuse the deflated data from
new-style loose object files when generating packs.
Signed-off-by: Junio C Hamano <junkio@cox.net>
The pack-file format is slightly different from the traditional git
object format, in that it has a much denser binary header encoding.
The traditional format uses an ASCII string with type and length
information, which is somewhat wasteful.
A new object format starts with uncompressed binary header
followed by compressed payload -- this will allow us later to
copy the payload straight to packfiles.
Obviously they cannot be read by older versions of git, so for
now new object files are created with the traditional format.
core.legacyheaders configuration item, when set to false makes
the code write in new format for people to experiment with.
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Linus Torvalds <torvalds@osdl.org> wrote:
It's entirely possible that we should just make that whole
if (ret == ENOENT)
go away. Yes, it's the right error code if a subdirectory is missing, and
yes, POSIX requires it, and yes, WXP is probably just a horrible piece of
sh*t, but on the other hand, I don't think git really has any serious
reason to even care.
Nobody else uses them, and I'm going to start changing them.
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This doesn't make the code uglier or harder to read, yet it makes the
code more portable. This also simplifies checking for other potential
incompatibilities. "gcc -std=c89 -pedantic" can flag many incompatible
constructs as warnings, but C99 comments will cause it to emit an error.
Signed-off-by: Pavel Roskin <proski@gnu.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
The only visible change is that git-blame doesn't understand
"--compability" anymore, but it does accept "--compatibility" instead,
which is already documented.
Signed-off-by: Pavel Roskin <proski@gnu.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
With the change in default, "git add ." on kernel dir is about
twice as fast as before, with only minimal (0.5%) change in
object size. The speed difference is even more noticeable
when committing large files, which is now up to 8 times faster.
The configurability is through setting core.compression = [-1..9]
which maps to the zlib constants; -1 is the default, 0 is no
compression, and 1..9 are various speed/size tradeoffs, 9
being slowest.
Signed-off-by: Joachim B Haga (cjhaga@fys.uio.no)
Acked-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
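The mapping to zlib is direct; schematically (a sketch; clamping
out-of-range values here is an assumption, not necessarily what git
does):

#include <zlib.h>

/* core.compression = [-1..9] maps straight onto zlib's levels:
 * -1 is Z_DEFAULT_COMPRESSION, 0 is no compression, 9 is slowest. */
static int compression_level(int core_compression)
{
        if (core_compression == -1)
                return Z_DEFAULT_COMPRESSION;
        if (core_compression < 0)
                return Z_NO_COMPRESSION;
        if (core_compression > Z_BEST_COMPRESSION)
                return Z_BEST_COMPRESSION;
        return core_compression;
}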
ANSI C99 doesn't allow void-pointer arithmetic. This patch fixes this in
various ways. Usually the strategy that required the least changes was used.
Signed-off-by: Florian Forster <octo@verplant.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
There were a few calls to adjust_shared_perm() that were
missing:
- init-db creates refs, refs/heads, and refs/tags before
reading from templates that could specify sharedrepository in
the config file;
- updating config file created it under user's umask without
adjusting;
- updating refs created it under user's umask without
adjusting;
- switching branches created .git/HEAD under user's umask
without adjusting.
This moves adjust_shared_perm() from sha1_file.c to path.c,
since a few SIMPLE_PROGRAM need to call repository configuration
functions which in turn need to call adjust_shared_perm().
sha1_file.c needs to link with SHA1 computation library which
is usually not linked to SIMPLE_PROGRAM.
Signed-off-by: Junio C Hamano <junkio@cox.net>
When adding packs, skip the pack if we already have it in the packed_git
list. This might happen if we are re-preparing our packs because of a
missing object.
Signed-off-by: Junio C Hamano <junkio@cox.net>
This patch causes read_sha1_file and sha1_object_info to re-examine the
list of packs if an object cannot be found. It works by re-running
prepare_packed_git() after an object fails to be found.
It does not attempt to clean up the old pack list. Old packs which are in
use can continue to be used (until unused by lru selection). New packs
are placed at the front of the list and will thus be examined before old
packs.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This cleans up and future-proofs the sha1 file writing in sha1_file.c.
In particular, instead of doing a simple "write()" call and just verifying
that it succeeds (or - as in one place - just assuming it does), it uses
"write_buffer()" to write data to the file descriptor while correctly
checking for partial writes, EINTR etc.
It also splits up write_sha1_to_fd() to be a lot more readable: if we need
to re-create the compressed object, we do so in a separate helper
function, making the logic a whole lot more modular and obvious.
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
* fix:
Fix pack-index issue on 64-bit platforms a bit more portably.
Install git-send-email by default
Fix compilation on newer NetBSD systems
git config syntax updates
Another config file parsing fix.
checkout: use --aggressive when running a 3-way merge (-m).
Apparently <stdint.h> is not enough for uint32_t on OpenBSD; use
"unsigned int" -- hopefully that would stay 32-bit on every
platform we care about, at least until we update the pack-index
file format.
Our sha1 routines optimized for architectures use uint32_t and
expects '#include <stdint.h>' to be enough, so OpenBSD on arm or
ppc might have similar issues down the road, I dunno.
Signed-off-by: Junio C Hamano <junkio@cox.net>
The offset of an object in the pack is recorded as a 4-byte integer
in the index file. When reading the offset from the mmap'ed index
in prepare_pack_revindex(), the address is dereferenced as a long*.
This works fine as long as the long type is four bytes wide. On
NetBSD/sparc64, however, a long is 8 bytes wide and so dereferencing
the offset produces garbage.
[jc: taking suggestion by Linus to use uint32_t]
Signed-off-by: Dennis Stosberg <dennis@stosberg.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
When adding an alternate object store then add entries from its
info/alternates files, too.
Relative entries are only allowed in the current repository.
Loops and duplicate alternates through multiple repositories are ignored.
Just to be sure that nothing breaks, it is not allowed to build deep
nesting levels using info/alternates.
Signed-off-by: Martin Waitz <tali@admingilde.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Somebody on the #git channel complained that the sha1_to_hex() thing uses
a static buffer which caused an error message to show the same hex output
twice instead of showing two different ones.
That's pretty easily rectified by making it use a simple LRU of a few
buffers, which also allows some other users (that were aware of the buffer
re-use) to be written in a more straightforward manner.
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
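The rotation is as simple as it sounds; roughly, as a sketch where four
slots is an arbitrary choice and the point is that consecutive calls
hand out different buffers:

static char *sha1_to_hex(const unsigned char *sha1)
{
        static char hexbuffer[4][41];
        static int bufno;
        static const char hex[] = "0123456789abcdef";
        char *buffer = hexbuffer[bufno = (bufno + 1) % 4];
        char *buf = buffer;
        int i;

        for (i = 0; i < 20; i++) {
                unsigned int val = *sha1++;
                *buf++ = hex[val >> 4];
                *buf++ = hex[val & 0xf];
        }
        *buf = '\0';
        return buffer;
}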
Serge E. Hallyn noticed that we compute how many input bytes are
still left, but did not use it for sanity checking.
Signed-off-by: Junio C Hamano <junkio@cox.net>
This replaces occurrences of "blob", "commit", "tag", and "tree",
where they're really used as type specifiers, which we already
have defined global constants for.
Signed-off-by: Peter Eriksen <s022018@student.dtu.dk>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Currently we unpack the delta data from the pack and then unpack
the base object to apply that delta data to it. When getting an
object that is deeply deltified, we can reduce memory footprint
by unpacking the base object first and then unpacking the delta
data, because we will need to keep at most one delta data in
memory that way.
Signed-off-by: Junio C Hamano <junkio@cox.net>
When generating a new pack, notice if the objects we need are
already in existing packs. If an object is stored deltified,
and its base object is also what we are going to pack, then
reuse the existing deltified representation unconditionally,
bypassing all the expensive find_deltas() and try_deltas()
calls.
Also, notice if what we are going to write out exactly matches
what is already in an existing pack (either deltified or just
compressed). In such a case, we can just copy it instead of
going through the usual uncompressing & recompressing cycle.
Without this patch, in linux-2.6 repository with about 1500
loose objects and a single mega pack:
$ git-rev-list --objects v2.6.16-rc3 >RL
$ wc -l RL
184141 RL
$ time git-pack-objects p <RL
Generating pack...
Done counting 184141 objects.
Packing 184141 objects....................
a1fc7b3e537fcb9b3c46b7505df859f0a11e79d2
real 12m4.323s
user 11m2.560s
sys 0m55.950s
With this patch, the same input:
$ time ../git.junio/git-pack-objects q <RL
Generating pack...
Done counting 184141 objects.
Packing 184141 objects.....................
a1fc7b3e537fcb9b3c46b7505df859f0a11e79d2
Total 184141, written 184141, reused 182441
real 1m2.608s
user 0m55.090s
sys 0m1.830s
Signed-off-by: Junio C Hamano <junkio@cox.net>
The real problem that triggered an earlier fix was that an alternate
entry was pointing at a removed directory. Complaining about an
objects/pack directory that cannot be opendir-ed produces noise
in an ancient repository that does not have an objects/pack
directory and has never been packed.
Detect the real user error and report it. Also if opendir
failed for other reasons (e.g. no read permissions), report that
as well.
Spotted by Andrew Vasquez <andrew.vasquez@qlogic.com>.
Signed-off-by: Junio C Hamano <junkio@cox.net>
* jc/pack-reuse:
pack-objects: avoid delta chains that are too long.
git-repack: allow passing a couple of flags to pack-objects.
pack-objects: finishing touches.
pack-objects: reuse data from existing packs.
When generating a new pack, notice if the objects we need are
already in existing packs. If an object is stored deltified,
and its base object is also what we are going to pack, then
reuse the existing deltified representation unconditionally,
bypassing all the expensive find_deltas() and try_deltas()
calls.
Also, notice if what we are going to write out exactly matches
what is already in an existing pack (either deltified or just
compressed). In such a case, we can just copy it instead of
going through the usual uncompressing & recompressing cycle.
Without this patch, in linux-2.6 repository with about 1500
loose objects and a single mega pack:
$ git-rev-list --objects v2.6.16-rc3 >RL
$ wc -l RL
184141 RL
$ time git-pack-objects p <RL
Generating pack...
Done counting 184141 objects.
Packing 184141 objects....................
a1fc7b3e537fcb9b3c46b7505df859f0a11e79d2
real 12m4.323s
user 11m2.560s
sys 0m55.950s
With this patch, the same input:
$ time ../git.junio/git-pack-objects q <RL
Generating pack...
Done counting 184141 objects.
Packing 184141 objects.....................
a1fc7b3e537fcb9b3c46b7505df859f0a11e79d2
Total 184141, written 184141, reused 182441
real 1m2.608s
user 0m55.090s
sys 0m1.830s
Signed-off-by: Junio C Hamano <junkio@cox.net>
Use stat() to explicitly check for existence rather than
relying on the non-portable EEXIST error in sha1_file.c's
safe_create_leading_directories(). There certainly are
optimizations possible, but then the code becomes almost
the same as that in coreutils' lib/mkdir-p.c.
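A simplified sketch of the approach (not the exact function from the
patch): probe each leading component with stat() and only mkdir() the
missing ones.
    static int create_leading_directories(char *path)
    {
        char *pos = path;
        struct stat st;

        while ((pos = strchr(pos + 1, '/')) != NULL) {
            *pos = '\0';                /* temporarily cut the path here */
            if (stat(path, &st)) {
                if (mkdir(path, 0777)) {
                    *pos = '/';
                    return -1;          /* real failure, not "exists" */
                }
            } else if (!S_ISDIR(st.st_mode)) {
                *pos = '/';
                return -1;              /* component exists but is not a dir */
            }
            *pos = '/';
        }
        return 0;
    }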
Other uses of EEXIST seem ok. Tested on Solaris 8, AIX 5.2L,
and a few Linux versions. AIX has some unrelated (I think)
failures right now; I haven't tried many recent gits there.
Anyone have an old Ultrix box to break everything? ;)
Also remove extraneous #includes. Everything's already in
git-compat-util.h, included through cache.h.
Signed-off-by: Jason Riedy <ejr@cs.berkeley.edu>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This reverts c5ced64578 commit.
It turns out that doing this check every time we map the idx file
is quite expensive. A corrupt idx file is caught by git-fsck-objects,
so this check is not strictly necessary.
In one unscientific test, 0.99.9m spent 10 seconds of user time on a
task for which 1.1.3 takes 37 seconds of user time. Reverting this gives
us the performance of 0.99.9 back.
If the config variable 'core.sharedrepository' is set, the directories
$GIT_DIR/objects/
$GIT_DIR/objects/??
$GIT_DIR/objects/pack
$GIT_DIR/refs
$GIT_DIR/refs/heads
$GIT_DIR/refs/tags
are set group writable (and g+s, since the git group may be not the primary
group of all users).
Since all files are written as lock files first, and then moved to
their destination, they do not have to be group writable. Indeed, if
this leads to problems, you have found a bug.
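A sketch of what "group writable and g+s" amounts to for one directory
(the helper name is made up for illustration):
    static int make_dir_group_shared(const char *path)
    {
        struct stat st;

        if (stat(path, &st))
            return -1;
        /* keep the existing permission bits, add group write and setgid */
        return chmod(path, (st.st_mode & 07777) | S_IWGRP | S_ISGID);
    }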
Note that -- as in my first attempt -- the config variable is set in the
function which checks the repository format. If this were done in
git_default_config instead, a lot of programs would need to be modified
to call git_config(git_default_config) first.
[jc: git variables should be in environment.c unless there is a
compelling reason to do otherwise.]
Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Although pack-check.c had a routine to verify the checksum for the
pack index file itself, the core did not check it before using
it.
This is stolen from the patch to tighten packname requirements.
Signed-off-by: Junio C Hamano <junkio@cox.net>
(cherry picked from 797bd6f490 commit)
Although pack-check.c had a routine to verify the checksum for the
pack index file itself, the core did not check it before using
it.
This is stolen from the patch to tighten packname requirements.
Signed-off-by: Junio C Hamano <junkio@cox.net>
sha1_to_hex() returns a pointer to a static buffer. Some of its users
modify that buffer by appending a newline character. Other users rely
on the fact that you can call
printf("%s", sha1_to_hex(sha1));
Just to be on the safe side, terminate the SHA1 in sha1_to_hex().
Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
fprintf and die sometimes have missing/excessive "\n" in their arguments,
correct the strings where I think it would be appropriate.
Signed-off-by: Alex Riesen <raa.lkml@gmail.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
add_packed_git() tries to get the pack SHA1 by parsing its name. It may
access uninitialized memory for packs with short names.
Signed-off-by: Pavel Roskin <proski@gnu.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
In order to support getting data into git with scripts, this adds a
--stdin option to git-hash-object, which will make it read from stdin.
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
We somehow ended up registering packs in alternate object
directories as "dir/object//pack/pack-*", which confused the
update-server-info code very badly. Also we did not attempt to
detect the mistake of listing the object directory itself as one
of the alternates. This does not lead to incorrect behaviour,
but is simply wasteful, so try to detect it when we are trivially
able to.
Signed-off-by: Junio C Hamano <junkio@cox.net>
This patch adds the program git-pack-intersect. It is
used to find redundant packs in git repositories.
Signed-off-by: Lukas Sandström <lukass@etek.chalmers.se>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This fixes a problem in safe_create_leading_directories() when the
argument starts with a '/' (i.e. the path is absolute).
Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Recent FAT workaround caused compilation trouble on OpenBSD;
different platforms use different error codes when we try to
hardlink the temporary file to its final location. Existing
Coda hack also checks its own error code, but the thing is, the
case we care about is whether link failed for a reason other
than that the final file already exists (which would be
normal, or could mean a collision). So just check the error
code against EEXIST.
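A sketch of that check (error() is git's "print a message and return -1"
helper; the variable names are illustrative):
    if (link(tmpfile, filename) < 0) {
        if (errno != EEXIST)
            return error("unable to link %s to %s: %s",
                         tmpfile, filename, strerror(errno));
        /* EEXIST: somebody else created the object first; that is fine. */
    }
    unlink(tmpfile);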
Signed-off-by: Junio C Hamano <junkio@cox.net>
FAT -- like Coda -- does not like cross-directory hard links. To be
precise, FAT does not like links at all. But links are not needed either.
So get rid of them.
Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
If we want to re-pack just local packfiles, we need to know whether a
particular object is local or not.
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This starts using the "user.name" and "user.email" config variables if
they exist as the default name and email when committing. This means
that you don't have to use the GIT_COMMITTER_EMAIL environment variable
to override your email - you can just edit the config file instead.
The patch looks bigger than it is because it makes the default name and
email information non-static and renames it appropriately. And it moves
the common git environment variables into a new library file, so that
you can link against libgit.a and get the git environment without having
to link in zlib and libcrypt.
In short, most of it is renaming and moving; the real core change is
just a few new lines in "git_default_config()" that copy the user
config values into those new defaults.
It also changes "git-var -l" to list the config variables.
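A sketch of the kind of addition described (the buffer and function
names here are illustrative, not the exact ones from the patch):
    static char default_name[256];
    static char default_email[256];

    static int default_config(const char *var, const char *value)
    {
        if (!strcmp(var, "user.name")) {
            snprintf(default_name, sizeof(default_name), "%s", value);
            return 0;
        }
        if (!strcmp(var, "user.email")) {
            snprintf(default_email, sizeof(default_email), "%s", value);
            return 0;
        }
        return 0;   /* variables we do not handle here fall through */
    }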
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
The http commit walker cannot use the same temporary file
creation code because it needs to use predictable temporary
filename for partial fetch continuation purposes, but the code
to move the temporary file to the final location should be
usable from the ordinary object creation codepath.
Export move_temp_to_file from sha1_file.c and use it, while
losing the custom relink_or_rename function from http-fetch.c.
Also the temporary object file creation part needs to make sure
the leading path exists, in preparation for the really lazy
fan-out directory creation.
Signed-off-by: Junio C Hamano <junkio@cox.net>
This makes it possible to have a "sparse" git object subdirectory
structure, something that has become much more attractive now that people
use pack-files all the time.
As a result of pack-files, a git object directory doesn't necessarily have
any individual objects lying around, and in that case it's just wasting
space to keep the empty first-level object directories around: on many
filesystems the 256 empty directories will use about 1MB of disk space.
Even more importantly, after you re-pack a project that _used_ to be
unpacked, you could be left with huge directories that no longer contain
anything, but that waste space and take time to look through.
With this change, "git prune-packed" can just do an rmdir() on the
directories, and they'll get removed if empty, and re-created on demand.
This patch also tries to fix up "write_sha1_from_fd()" to use the new
common infrastructure for creating the object files, closing a hole where
we might otherwise leave half-written objects in the object database.
[jc: I unoptimized the part that really removes the fan-out directories
to ease transition. init-db still wastes 1MB of diskspace to hold 256
empty fan-outs, and prune-packed rmdir()'s the grown but empty directories,
but runs mkdir() immediately after that -- reducing the saving from 150KB
to 146KB. These parts will be re-introduced when everybody has the
on-demand capability.]
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This adds more cruft to diff --git header to record the blob SHA1 and
the mode the patch/diff is intended to be applied against, to help the
receiving end fall back on a three-way merge. The new header looks
like this:
diff --git a/apply.c b/apply.c
index 7be5041..8366082 100644
--- a/apply.c
+++ b/apply.c
@@ -14,6 +14,7 @@
// files that are being modified, but doesn't apply the patch
// --stat does just a diffstat, and doesn't actually apply
+// --show-index-info shows the old and new index info for...
...
Upon receiving such a patch, if the patch did not apply cleanly to the
target tree, the recipient can try to find the matching old objects in
her object database and create a temporary tree, apply the patch to
that temporary tree, and attempt a 3-way merge between the patched
temporary tree and the target tree using the original temporary tree
as the common ancestor.
The patch lifts the code to compute the hash for an on-filesystem
object from update-index.c and makes it available to the diff output
routine.
Signed-off-by: Junio C Hamano <junkio@cox.net>
The core part detected and died upon seeing a corrupted packfile, but
did not help the user by telling which packfile is corrupt and how.
Signed-off-by: Junio C Hamano <junkio@cox.net>
An entry in the alternates file can name a directory relative to
the object store it describes. A typical linux-2.6 maintainer
repository would have "../../../torvalds/linux-2.6.git/objects" there,
because the subsystem maintainer object store would live in
/pub/scm/linux/kernel/git/$u/$system.git/objects/
and the object store of Linus tree is in
/pub/scm/linux/kernel/git/torvalds/linux-2.6.git/objects/
This unfortunately is different from GIT_ALTERNATE_OBJECT_DIRECTORIES
which is relative to the cwd of the running process, but there is no
way to make it consistent with the behaviour of the environment
variable. The process typically is run in $system.git/ directory for
a naked repository, or one level up for a repository with a working
tree, so we just define it to be relative to the objects/ directory
to be different from either ;-).
Later, the dumb transport could be updated to read from info/alternates
and make requests for the repository the repository borrows from.
Signed-off-by: Junio C Hamano <junkio@cox.net>
We have deprecated the old environment variable names for quite a
while and now it's time to remove them. Gone are:
SHA1_FILE_DIRECTORIES AUTHOR_DATE AUTHOR_EMAIL AUTHOR_NAME
COMMIT_AUTHOR_EMAIL COMMIT_AUTHOR_NAME SHA1_FILE_DIRECTORY
Signed-off-by: Junio C Hamano <junkio@cox.net>
Hi. This patch contains the following possible cleanups:
* Make some needlessly global functions in local-pull.c static
* Change 'char *' to 'const char *' where appropriate
Signed-off-by: Peter Hagervall <hager@cs.umu.se>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Omitting the first branch in ?: is a GNU extension. Cute,
but not supported by other compilers. Replaced mostly
by explicit tests. Calls to getenv() simply are repeated
on non-GNU compilers.
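For illustration (DEFAULT_DIR is only a placeholder):
    dir = getenv("GIT_DIR") ? : DEFAULT_DIR;                    /* GNU extension */
    dir = getenv("GIT_DIR") ? getenv("GIT_DIR") : DEFAULT_DIR;  /* portable; getenv() repeated */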
Signed-off-by: Jason Riedy <ejr@cs.berkeley.edu>
Yes, using the same format for the file and the environment variable
was a big mistake. This uses LF as the path separator, and allows
lines that begin with '#' to be comments. ':' is no longer a separator
in objects/info/alternates file.
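For example, an objects/info/alternates file in the new format could
look like this (the paths are only illustrative):
    # objects borrowed from the maintainer's repository
    /pub/scm/linux/kernel/git/torvalds/linux-2.6.git/objects
    /pub/scm/linux/kernel/git/junio/git.git/objects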
Signed-off-by: Junio C Hamano <junkio@cox.net>
Note that the pack file has to be in the usual location if it gets
installed later.
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
It was a mistake to use GIT_ALTERNATE_OBJECT_DIRECTORIES
environment variable to specify what alternate object pools to
look for missing objects when working with an object database.
It is not a property of the process running the git commands,
but a property of the object database that is partial and needs
other object pools to complete the set of objects it lacks.
This patch allows you to have $GIT_OBJECT_DIRECTORY/info/alternates
whose contents are in exactly the same format as the environment
variable, to let an object database name alternate object pools
it depends on.
Signed-off-by: Junio C Hamano <junkio@cox.net>
This patch fixes the only warning reported by gcc 4.0.1 on Fedora Core 4
for x86_64:
sha1_file.c:1391: warning: pointer targets in assignment differ in
signedness
Signed-off-by: Pavel Roskin <proski@gnu.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
If the object to write was packed, both its uncompressed and compressed
data were leaked. If the object was not packed, its file was not unmapped.
[jc: I think it still leaks on the write error path of
write_sha1_to_fd(), but that should be fixable in a small separate
patch.]
Signed-off-by: Sergey Vlasov <vsu@altlinux.ru>
Signed-off-by: Junio C Hamano <junkio@cox.net>
When following a reference, read_object_with_reference() did not free the
intermediate object data.
Signed-off-by: Sergey Vlasov <vsu@altlinux.ru>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Everybody envies rev-parse, who is the only one that can grok
the extended sha1 format. Move the get_extended_sha1() out of
rev-parse, rename it to get_sha1() and make it available to
everybody else.
The one I posted earlier to the list had one bug where it did
not handle a name that ends with a digit correctly (it
incorrectly tried the "Nth parent" path). This commit fixes it.
Signed-off-by: Junio C Hamano <junkio@cox.net>
This causes ssh-pull to request objects in prefetch() and read them in
fetch(), such that it reduces the unpipelined round-trip time.
This also makes sha1_write_from_fd() support having a buffer of data
which it accidentally read from the fd after the object; this was
formerly not a problem, because it would always get a short read at
the end of an object, because the next object had not been
requested. This is no longer true.
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This adds support for reading an uninstalled index, and installing a
pack file that was added while the program was running, as well as
functions for determining where to put the file.
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Introduce a new file $GIT_DIR/info/grafts (or $GIT_GRAFT_FILE)
which is a list of "fake commit parent records". Each line of
this file is a commit ID, followed by parent commit IDs, all
40-byte hex SHA1 separated by a single SP in between. The
records override the parent information we would normally read
from the commit objects, allowing both adding "fake" parents
(i.e. grafting), and pretending as if a commit is not a child of
some of its real parents (i.e. cauterizing).
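For illustration (the object names below are obviously fake
placeholders, not real objects), a grafts file with one grafted merge
and one cauterized commit could look like:
    1111111111111111111111111111111111111111 2222222222222222222222222222222222222222 3333333333333333333333333333333333333333
    4444444444444444444444444444444444444444
The first line makes commit 111... pretend to have parents 222... and
333...; the second line makes commit 444... pretend to have no parents
at all.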
Signed-off-by: Junio C Hamano <junkio@cox.net>
I have reviewed all occurrences of mmap() in git and fixed three types
of errors/defects:
1) The result is not checked.
2) The file descriptor is closed if mmap() succeeds, but not when it
fails.
3) Various casts applied to -1 are used instead of MAP_FAILED, which is
specifically defined to check mmap() return value.
[jc: This is a second round of Pavel's patch. He fixed up the problem
that close() potentially clobbering the errno from mmap, which
the first round had.]
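A sketch of the checked pattern being fixed toward (the names are
illustrative):
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    void *map = mmap(NULL, size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (map == MAP_FAILED) {
        int saved_errno = errno;   /* do not let close() clobber mmap's errno */
        close(fd);
        errno = saved_errno;
        return -1;
    }
    close(fd);                     /* the mapping stays valid without the fd */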
Signed-off-by: Pavel Roskin <proski@gnu.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This reverses the order of object lookup, to check pack index first and
then go to the filesystem to find .git/objects/??/ hierarchy.
When most of the objects are packed, this saves quite a few stat()
calls and negative dcache entries, while the price this approach has
to pay is negligible even when most of the objects are outside packs,
because checking the pack index file is quite cheap.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Add write_sha1_to_fd(), which writes an object to a file descriptor. This
includes support for unpacking it and recompressing it.
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This patch makes the first half of write_sha1_file() and
index_fd() externally visible, to allow callers to compute the
object ID without actually storing it in the object database.
[JC demangled the whitespaces himself because he liked the patch
so much, and reworked the interface to index_fd() slightly,
taking suggestions from Linus and adding his own.]
Signed-off-by: Bryan Larsen <bryan.larsen@gmail.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The function write_one_ref() is passed the list of refs received
from the other end, which was obtained by directory traversal
under $GIT_DIR/refs; this can contain paths other than what
git-init-db prepares, and cloning would fail when such a path is
present.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The function calls opendir() without a matching closedir().
Signed-off-by: Junio C Hamano <junkio@cox.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
GIT_DIR=. ends up being what some of the pack senders use, and we
sometimes messed up when cleaning up the path, ie a ".//HEAD" was
cleaned up into "/HEAD", not "HEAD" like it should be.
We should do some other cleanup, and probably also verify that symlinks
don't point to outside the git area.
"git_path()" returns a static pathname pointer into the git directory
using a printf-like format specifier.
"head_ref()" works like "for_each_ref()", except for just the HEAD.
This implements show_pack_info() function used in verify-pack
command when -v flag is used to obtain something like
unpack-objects used to give when it was first written.
It shows the following for each non-deltified object found in
the pack:
SHA1 type size offset
For deltified objects, it shows this instead:
SHA1 type size offset depth base_sha1
In order to get the output in the order the entries appear in the pack
file for debugging purposes, you can do this:
$ git-verify-pack -v packfile | sort -n -k 4,4
Signed-off-by: Junio C Hamano <junkio@cox.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Nico pointed out that having verify_pack.c and verify-pack.c was
confusing. Rename verify_pack.c to pack-check.c as suggested,
and enhance the verification done quite a bit.
- Built-in sha1_file unpacking knows that a base object of a
deltified object _must_ be in the same pack, and takes
advantage of that fact.
- Earlier verify-pack command only checked the SHA1 sum for the
entire pack file and did not look into its contents. It now
checks that everything the idx file claims to have unpacks correctly.
- It now has a hook to give more detailed information for
objects contained in the pack under -v flag.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This teaches packed_delta_info() that it only needs to look at
the type of the base object to figure out both type and size of
a deltified object. This saves quite a few calls to inflate()
when dealing with a deep delta chain.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Given a list of <pack>.idx files, this command validates the
index file and the corresponding .pack file for consistency.
This patch also uses the same validation mechanism in fsck-cache
when the --full flag is used.
During normal operation, sha1_file.c verifies that a given .idx
file matches the .pack file by comparing the SHA1 checksum
stored in .idx file and .pack file as a minimum sanity check.
We may further want to check the pack signature and version when
we map the pack, but that would be a separate patch.
Earlier, failure to map a pack file was not flagged as fatal but led
to a random fatal error later. This version explicitly die()s
when such an error is detected.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
If we prefer 0 as maxsize for diff_delta() to say "unlimited", let's be
consistent about it.
This patch also fixes type mismatch in a call to get_delta_hdr_size()
from packed_delta_info().
Signed-off-by: Junio C Hamano <junkio@cox.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This is a wrap-up patch including all the cleanups I've done to the
delta code and its usage. The most important change is the
factorization of the delta header handling code.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This makes it match the new delta encoding, and admittedly makes the
code easier to follow.
This also updates the PACK file version to 2, since this (and the delta
encoding change in the previous commit) are incompatible with the old
format.
The commands git-fsck-cache and probably git-*-pull need to have a way
to enumerate objects contained in packed GIT archives and alternate
object pools. This commit exposes the data structure used to keep track
of them from sha1_file.c, and adds a couple of accessor interface
functions for use by the enhanced git-fsck-cache command.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This was causing random segfaults, because use_packed_git() got
confused by random garbage there.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This also adds a header with a signature, version info, and the number
of objects to the pack file. It also encodes the file length and type
more efficiently.
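A sketch of a header of the shape described (the field names are
illustrative; the actual struct in git may differ):
    struct pack_header {
        uint32_t hdr_signature;   /* "PACK" */
        uint32_t hdr_version;     /* 2 */
        uint32_t hdr_entries;     /* number of objects in the pack */
    };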
The initial one was not doing enough to figure things out
without uncompressing too much. It also fixes a potential
segfault resulting from missing use_packed_git() call.
We would need to introduce an unuse_packed_git() call and do proper
use counting to figure out when it is safe to unmap, but
currently we do not unmap packed file yet.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Now, there's still a misfeature there, which is that when you
create a new object, it doesn't check whether that object
already exists in the pack-file, so you'll end up with a few
recent objects that you really don't need (notably tree
objects), and this patch fixes it.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
GIT_OBJECT_DIRECTORY and GIT_ALTERNATE_OBJECT_DIRECTORIES can
have the "pack" subdirectory that houses "packed GIT" files
produced by git-pack-objects (e.g. .git/objects/pack/foo.pack
and .git/objects/pack/foo.idx; always store them as pairs). The
following functions in sha1_file.c can then read object contents
from such packed file:
- sha1_object_info()
- has_sha1_file()
- read_sha1_file()
Signed-off-by: Junio C Hamano <junkio@cox.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This lets us eliminate one use of map_sha1_file() outside
sha1_file.c, to bring us one step closer to the packed GIT.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Packed delta files created by git-pack-objects seems to be the
way to go, and existing "delta" object handling code has exposed
the object representation details to too many places. Remove it
while we refactor code to come up with a proper interface in
sha1_file.c.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Here is a patch that fixes several gcc4 warnings about different signedness,
all between char and unsigned char. I tried to keep the patch minimal,
so I resorted to casts in three places.
Signed-off-by: Mika Kukkonen <mikukkon@iki.fi>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Make 'sha1' parameters const where possible
Signed-off-by: Jason McMullan <jason.mcmullan@timesys.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This patch adds code to read a hash out of a specified file under
{GIT_DIR}/refs/, and to write such files atomically and optionally with a
compare and lock.
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This adds sha1_file_size() helper function and uses it in the
rename/copy similarity estimator. The helper function handles
deltified object as well.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
When a remote repository is deltified, we need to get the
objects that a deltified object we want to obtain is based upon.
The initial part of each retrieved SHA1 file is inflated and
inspected to see if it is deltified, and its base object is
asked from the remote side when it is. Since this partial
inflation and inspection has a small performance hit, it can
optionally be skipped by giving -d flag to git-*-pull commands.
This flag should be used only when the remote repository is
known to have no deltified objects.
Rsync transport does not have this problem since it fetches
everything the remote side has.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Make a separate helper for parsing the header of an object file
(really carefully) and for unpacking the rest. This means that
anybody who uses the "unpack_sha1_header()" interface can easily
look at the header and decide to unpack the rest too, without
doing any extra work.
It's for people who aren't necessarily interested in the whole
unpacked file, but do want to know the header information (size,
type, etc..)
For example, the delta code can use this to figure out whether
an object is already a delta object, and what it is a delta
against, without actually bothering to unpack all of the actual
data in the delta.
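A hedged sketch of that two-step use; the helper names other than
unpack_sha1_header() (parse_sha1_header(), unpack_sha1_rest()) and the
exact signatures are illustrative rather than the precise sha1_file.c
interface:
    z_stream stream;
    char hdr[128];
    char type[20];
    unsigned long size;
    void *buf;

    if (unpack_sha1_header(&stream, map, mapsize, hdr, sizeof(hdr)) < 0)
        return NULL;
    if (parse_sha1_header(hdr, type, &size) < 0)    /* header is "type size\0" */
        return NULL;
    if (!strcmp(type, "delta")) {
        /* a delta: peek at the base object name that follows the header,
         * without inflating the whole payload */
    } else {
        buf = unpack_sha1_rest(&stream, hdr, size); /* we do want the data */
    }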
Add <limits.h> to the include files handled by "cache.h", and remove
extraneous #include directives from various .c files. The rule is that
"cache.h" gets all the basic stuff, so that we'll have as few system
dependencies as possible.
This makes the core code aware of delta objects and undeltafy them as
needed. The convention is to use read_sha1_file() to have
undeltafication done automatically (most users do that already so this
is transparent).
If the delta object itself has to be accessed then it must be done
through map_sha1_file() and unpack_sha1_file().
In that context mktag.c has been switched to read_sha1_file() as there
is no reason to do the full map+unpack manually.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Fix various things that sparse complains about:
- use NULL instead of 0
- make sure we declare everything properly, or mark it static
- use proper function declarations ("fn(void)" instead of "fn()")
Sparse is always right.
- Raw hashes should be unsigned char.
- String functions want signed char.
- Hash and compress functions want unsigned char.
Signed-off By: Brian Gerst <bgerst@didntduck.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
During the mailing list discussion on renaming GIT_ environment
variables, people felt that having one environment that lets the
user (or Porcelain) specify both SHA1_FILE_DIRECTORY (now
GIT_OBJECT_DIRECTORY) and GIT_INDEX_FILE for the default layout
would be handy. This change introduces GIT_DIR environment
variable, from which the defaults for GIT_INDEX_FILE and
GIT_OBJECT_DIRECTORY are derived. When GIT_DIR is not defined,
it defaults to ".git". GIT_INDEX_FILE defaults to
"$GIT_DIR/index" and GIT_OBJECT_DIRECTORY defaults to
"$GIT_DIR/objects".
Special thanks for ideas and discussions go to Petr Baudis and
Daniel Barkalow. Bugs are mine ;-)
Signed-off-by: Junio C Hamano <junkio@cox.net>
H. Peter Anvin mentioned that using SHA1_whatever as an
environment variable name is not nice and we should instead use
names starting with "GIT_" prefix to avoid conflicts. Here is
what this patch does:
* Renames the following environment variables:
New name                              Old Name
GIT_AUTHOR_DATE                       AUTHOR_DATE
GIT_AUTHOR_EMAIL                      AUTHOR_EMAIL
GIT_AUTHOR_NAME                       AUTHOR_NAME
GIT_COMMITTER_EMAIL                   COMMIT_AUTHOR_EMAIL
GIT_COMMITTER_NAME                    COMMIT_AUTHOR_NAME
GIT_ALTERNATE_OBJECT_DIRECTORIES      SHA1_FILE_DIRECTORIES
GIT_OBJECT_DIRECTORY                  SHA1_FILE_DIRECTORY
* Introduces a compatibility macro, gitenv(), which does a
getenv() and if it fails calls gitenv_bc(), which in turn
picks up the value from old name while giving a warning about
using an old name.
* Changes all users of the environment variable to fetch
environment variable with the new name using gitenv().
* Updates the documentation and scripts shipped with Linus'
GIT distribution.
The transition plan is as follows:
* We will keep the backward compatibility list used by gitenv()
for now, so the current scripts and user environments
continue to work as before. Users will get warnings on stderr
when they have the old name but not the new name in their
environment.
* The Porcelain layers should start using new names. However,
just in case it ends up calling old Plumbing layer
implementation, they should also export old names, taking
values from the corresponding new names, during the
transition period.
* After a transition period, we would drop the compatibility
support and drop gitenv(). Revert the callers to directly
call getenv() but keep using the new names.
The last part is probably optional and the transition
duration needs to be set to a reasonable value.
Signed-off-by: Junio C Hamano <junkio@cox.net>
This fixes stylistic problems and one unused variable spotted by
Petr Baudis. The buf variable unused in prepare_alt_odb() is
gone and the "creepy" function is more heavily documented.
Signed-off-by: Junio C Hamano <junkio@cox.net>
A deflate loop in sha1_file.c would have /* nothing */ as its
body, but the semicolon was missing, so the following statement
became the loop body. Fortunately the loop went through exactly
once, so it did not trigger an actual bug so far.
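For illustration, the fixed loop has the shape below; without the
trailing ';', the next statement would silently become the loop body:
    while (deflate(&stream, Z_FINISH) == Z_OK)
        /* nothing */;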
Signed-Off-by: Thomas Glanzmann <sithglan@stud.uni-erlangen.de>
Signed-off-by: Petr Baudis <pasky@ucw.cz>
<JC> Editorial Note. We may want to include standard headers in one
of those headers everybody includes, e.g. cache.h, to reduce clutters,
but this commit is as Thomas posted to the GIT list.
Date: Sat, 7 May 2005 10:41:41 +0200
Signed-off-by: Thomas Glanzmann <sithglan@stud.uni-erlangen.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This does not matter for commands that write just a handful of SHA1 files,
but is noticeable in git-convert-cache which essentially traverses the
entire object database.
Signed-off-by: Junio C Hamano <junkio@cox.net>
The SHA1_FILE_DIRECTORIES environment variable is a colon-separated list
of paths used when looking for SHA1 files not found in the usual place for
reading. Creating a new SHA1 file does not use this alternate object
database location mechanism. This is useful to archive older, rarely
used objects into separate directories.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Make it much safer: we write to a temporary file, and then link that
temporary file to the final destination. This avoids all the nasty
races if several people write the same object at the same time.
It should also result in nicer on-disk layout, since it means that
objects all get created in the same subdirectory. That makes a lot
of block allocation algorithms happier, since the objects will now
be allocated from the same zone.
A new command, git-write-blob, is introduced. This registers
the contents of any file on the filesystem as a blob in the
object database and reports its SHA1 to the standard output.
To implement it, the patch promotes index_fd() from a static
function in update-cache.c to extern and moves it to a library
source, sha1_file.c.
This command is used to update git-merge-one-file-script so that
it does not smudge the work tree.
Signed-off-by: Junio C Hamano <junkio@cox.net>
If somebody wants it later, we can re-do it, but for now we consider
it an experiment that wasn't worth it. Git will still honor symbolic
names, it just won't look up parents for you.
Of course, you can always do it by hand if you want to.
It uses the jit syntax, at least for now. 0-xxxx is the first parent of xxxx,
while 1-xxxx is the second, and so on. You can use just "-xxxx" for the first
parent, but a lot of commands will think that the initial '-' implies a
command line flag.
This allows the programs to use various simplified versions of
the SHA1 names, eg just say "HEAD" for the SHA1 pointed to by
the .git/HEAD file etc.
For example, this commit has been done with
git-commit-tree $(git-write-tree) -p HEAD
instead of the traditional "$(cat .git/HEAD)" syntax.
This patch renames read_tree_with_tree_or_commit_sha1() to
read_object_with_reference() and extends it to automatically
dereference not just "commit" objects but "tag" objects. With
this patch, you can say e.g.:
ls-tree $tag
read-tree -m $(merge-base $tag $HEAD) $tag $HEAD
diff-cache $tag
diff-tree $tag $HEAD
Signed-off-by: Junio C Hamano <junkio@cox.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Introduce xmalloc and xrealloc to die gracefully with a descriptive
message when out of memory, rather than taking a SIGSEGV.
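A minimal sketch of the wrapper described above (die() prints the
message and exits):
    void *xmalloc(size_t size)
    {
        void *ret = malloc(size);
        if (!ret)
            die("Out of memory, malloc failed");
        return ret;
    }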
Signed-off-by: Christopher Li<chrislgit@chrisli.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Here is how to trigger it:
echo blob 100 > .git/objects/00/ae4e8d3208e09f2cf7a38202a126f728cadb49
Then run fsck-cache. It will try to unpack after the header to calculate
the hash, inflate returns total_out == 0 and memcpy() dies.
The patch below seems to work with ZLIB 1.1 and 1.2.
Signed-off-by: Andreas Gal <gal@uci.edu>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This adds two functions: one to check if an object is present in the local
database, and one to add an object to the local database by reading it
from a file descriptor and checking its hash.
Signed-Off-By: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
We really don't care about atime, and it sucks to dirty the
inode cache just for it.
This is more than a one-liner only because we need to be able to
clear the O_NOATIME flag in case some of the objects are owned
by others (in which case open will return EPERM), and because not
everybody has the O_NOATIME flag.
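A sketch of that fallback (the #ifndef guard is only illustrative):
    #ifndef O_NOATIME
    #define O_NOATIME 0              /* not everybody has the flag */
    #endif

    int fd = open(path, O_RDONLY | O_NOATIME);
    if (fd < 0 && errno == EPERM)    /* object file owned by somebody else */
        fd = open(path, O_RDONLY);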
This patch implements read_tree_with_tree_or_commit_sha1(),
which can be used when you are interested in reading an unpacked
raw tree data but you do not know nor care whether the SHA1 you
obtained from your user is a tree ID or a commit ID. Before this
function's introduction, you would have called read_sha1_file(),
examined its type, parsed it to call read_sha1_file() again if
it is a commit, and verified that the resulting object is a
tree. Instead, this function does that for you. It returns
NULL if the given SHA1 is not either a tree or a commit.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>