The usual pattern for an argv array is to initialize it,
push in some strings, and then clear it when done. Very
occasionally, though, we must do other exotic things with
the memory, like freeing the list but keeping the strings.
Let's provide a detach function so that callers can make use
of our API to build up the array, and then take ownership of
it.
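For illustration, usage could look like the sketch below, assuming a signature along the lines of const char **argv_array_detach(struct argv_array *) that returns the vector and reinitializes the struct:

    #include "argv-array.h"

    static const char **build_command(const char *prog)
    {
        struct argv_array array = ARGV_ARRAY_INIT;

        argv_array_push(&array, prog);
        argv_array_push(&array, "--verbose");

        /*
         * The struct returns to its initialized state; the caller now
         * owns the returned vector and the strings it points to.
         */
        return argv_array_detach(&array);
    }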
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
CURLAUTH_ANY does not work with proxies which answer unauthenticated requests
with a 307 redirect to an error page instead of a 407 listing supported
authentication methods. Therefore, allow the authentication method to be set
using the environment variable GIT_HTTP_PROXY_AUTHMETHOD or configuration
variables http.proxyAuthmethod and remote.<name>.proxyAuthmethod (in analogy
to http.proxy and remote.<name>.proxy).
The following values are supported:
* anyauth (default)
* basic
* digest
* negotiate
* ntlm
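For illustration only, the chosen method name would eventually have to be translated into a libcurl bitmask along these lines; the helper below is a sketch, and only the CURLAUTH_* constants and CURLOPT_PROXYAUTH come from libcurl (CURLAUTH_NEGOTIATE needs a reasonably recent libcurl):

    #include <string.h>
    #include <curl/curl.h>

    /* sketch: map a proxyAuthmethod name to a CURLAUTH_* bitmask */
    static long proxyauth_bits(const char *name)
    {
        if (!name || !strcmp(name, "anyauth"))
            return CURLAUTH_ANY;
        if (!strcmp(name, "basic"))
            return CURLAUTH_BASIC;
        if (!strcmp(name, "digest"))
            return CURLAUTH_DIGEST;
        if (!strcmp(name, "negotiate"))
            return CURLAUTH_NEGOTIATE;
        if (!strcmp(name, "ntlm"))
            return CURLAUTH_NTLM;
        return 0; /* unknown: let the caller report a configuration error */
    }

    /*
     * applied to the handle with:
     *   curl_easy_setopt(curl, CURLOPT_PROXYAUTH, proxyauth_bits(method));
     */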
Signed-off-by: Knut Franke <k.franke@science-computing.de>
Signed-off-by: Elia Pinto <gitter.spiros@gmail.com>
Helped-by: Junio C Hamano <gitster@pobox.com>
Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Helped-by: Elia Pinto <gitter.spiros@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git daemon" uses "run_command()" without "finish_command()", so it
needs to release resources itself, which it forgot to do.
* rs/daemon-plug-child-leak:
daemon: plug memory leak
run-command: factor out child_process_clear()
Prepare for Git on-disk repository representation to undergo
backward incompatible changes by introducing a new repository
format version "1", with an extension mechanism.
* jk/repository-extension:
introduce "preciousObjects" repository extension
introduce "extensions" form of core.repositoryformatversion
"git daemon" uses "run_command()" without "finish_command()", so it
needs to release resources itself, which it forgot to do.
* rs/daemon-plug-child-leak:
daemon: plug memory leak
run-command: factor out child_process_clear()
Avoid duplication by moving the code to release allocated memory for
arguments and environment to its own function, child_process_clear().
Export it to provide a counterpart to child_process_init().
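For illustration, a caller that populates a child_process but bails out before start_command() would use it like this (the bail condition is made up):

    #include "run-command.h"

    static int maybe_spawn(const char *path, int bail)
    {
        struct child_process cp = CHILD_PROCESS_INIT;

        argv_array_push(&cp.args, "git");
        argv_array_push(&cp.args, "upload-pack");
        argv_array_push(&cp.args, path);

        if (bail) {
            /* start_command() never ran, so free args/env ourselves */
            child_process_clear(&cp);
            return -1;
        }
        return run_command(&cp); /* clears the child on its own */
    }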
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Prepare for Git on-disk repository representation to undergo
backward incompatible changes by introducing a new repository
format version "1", with an extension mechanism.
* jk/repository-extension:
introduce "preciousObjects" repository extension
introduce "extensions" form of core.repositoryformatversion
Again, we do not usually process release notes with AsciiDoc, but it
is better to be consistent.
This incidentally reveals breakages left by an ancient 5e00439f
(Documentation: build html for all files in technical and howto,
2012-10-23). The index-format documentation was originally written
to be read as straight text without formatting and when the commit
forced everything in Documentation/ to go through AsciiDoc, it did
not do any adjustment--hence the double-dashes will be seen in the
resulting text, which is rendered as preformatted fixed-width text
without being converted into em-dashes.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The spec is very inconsistent about which PKT-LINE() parts
of the grammar include a LF. On top of that, the code is not
consistent, either (e.g., send-pack does not put newlines
into the ref-update commands it sends).
Let's make explicit the long-standing expectation that we
generally expect pkt-lines to end in a newline, but that
receivers should be lenient. This makes the spec consistent,
and matches what git already does (though it does not always
fulfill the SHOULD).
We do make an exception for the push-cert, where the
receiving code is currently a bit pickier. This is a
reasonable way to be, as the data needs to be byte-for-byte
compatible with what was signed. We _could_ make up some
rules about signing a canonicalized version including
newlines, but that would require a code change, and is out
of scope for this patch.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The gitmodules API accessed from the C code learned to cache stuff
lazily.
* hv/submodule-config:
submodule: allow erroneous values for the fetchRecurseSubmodules option
submodule: use new config API for worktree configurations
submodule: extract functions for config set and lookup
submodule: implement a config API for lookup of .gitmodules values
The "lockfile" API has been rebuilt on top of a new "tempfile" API.
* mh/tempfile:
credential-cache--daemon: use tempfile module
credential-cache--daemon: delete socket from main()
gc: use tempfile module to handle gc.pid file
lock_repo_for_gc(): compute the path to "gc.pid" only once
diff: use tempfile module
setup_temporary_shallow(): use tempfile module
write_shared_index(): use tempfile module
register_tempfile(): new function to handle an existing temporary file
tempfile: add several functions for creating temporary files
prepare_tempfile_object(): new function, extracted from create_tempfile()
tempfile: a new module for handling temporary files
commit_lock_file(): use get_locked_file_path()
lockfile: add accessor get_lock_file_path()
lockfile: add accessors get_lock_file_fd() and get_lock_file_fp()
create_bundle(): duplicate file descriptor to avoid closing it twice
lockfile: move documentation to lockfile.h and lockfile.c
We remove the extracted functions and directly parse into and read out
of the cache. This allows us to have one unified way of accessing
submodule configuration values specific to single submodules,
regardless of whether we need to access a configuration from history
or from the worktree.
Signed-off-by: Heiko Voigt <hvoigt@hvoigt.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In a superproject some commands need to interact with submodules. They
need to query values from the .gitmodules file either from the worktree
or from certain revisions. At the moment this is quite hard since a
caller would need to read the .gitmodules file from the history and then
parse the values. We want to provide an API for this so we have one
place to get values from .gitmodules from any revision (including the
worktree).
The API is realized as a cache which allows us to lazily read
.gitmodules configurations by commit into a runtime cache which can then
be used to easily lookup values from it. Currently only the values for
path or name are stored but it can be extended for any value needed.
It is expected that .gitmodules files do not change often between
commits. That's why we look up the .gitmodules sha1 from a commit and
then either look up an already parsed configuration or parse and cache
an unknown one for each sha1. The cache is lazily built on demand for
each requested commit.
This cache can be used for all purposes which need knowledge about
submodule configurations. Example use cases are:
* Recursive submodule checkout needs to lookup a submodule name from
its path when a submodule first appears. This needs to be done before
this configuration exists in the worktree.
* The implementation of submodule support for 'git archive' needs to
lookup the submodule name to generate the archive when given a
revision that is not checked out.
* 'git fetch' when given the --recurse-submodules=on-demand option (or
configuration) needs to lookup submodule names by path from the
database rather than reading from the worktree. For new submodules it
needs to lookup the name from its path to allow cloning new
submodules into the .git folder so they can be checked out without
any network interaction when the user does a checkout of that
revision.
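A hedged sketch of a lookup through this cache; the exact entry points and fields (submodule_from_path(), struct submodule with a .name member) are assumptions based on the description above, not a definitive interface:

    #include "cache.h"
    #include "submodule-config.h"

    /*
     * Sketch: find the name recorded for a submodule path in the
     * .gitmodules blob of a given commit, without touching the worktree.
     */
    static const char *submodule_name_at(const unsigned char *commit_sha1,
                                         const char *path)
    {
        const struct submodule *sm = submodule_from_path(commit_sha1, path);

        return sm ? sm->name : NULL;
    }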
Signed-off-by: Heiko Voigt <hvoigt@hvoigt.net>
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Rearrange/rewrite it somewhat to fit its new environment.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Reimplement 'git pull' in C.
* pt/pull-builtin:
pull: remove redirection to git-pull.sh
pull --rebase: error on no merge candidate cases
pull --rebase: exit early when the working directory is dirty
pull: configure --rebase via branch.<name>.rebase or pull.rebase
pull: teach git pull about --rebase
pull: set reflog message
pull: implement pulling into an unborn branch
pull: fast-forward working tree if head is updated
pull: check if in unresolved merge state
pull: support pull.ff config
pull: error on no merge candidates
pull: pass git-fetch's options to git-fetch
pull: pass git-merge's options to git-merge
pull: pass verbosity, --progress flags to fetch and merge
pull: implement fetch + merge
pull: implement skeletal builtin pull
argv-array: implement argv_array_pushv()
parse-options-cb: implement parse_opt_passthru_argv()
parse-options-cb: implement parse_opt_passthru()
Move machinery to parse human-readable scaled numbers like 1k, 4M,
and 2G as an option parameter's value from pack-objects to
parse-options API, to make it available to other codepaths.
* cb/parse-magnitude:
parse-options: move unsigned long option parsing out of pack-objects.c
test-parse-options: update to handle negative ints
If this extension is used in a repository, then no
operations should run which may drop objects from the object
storage. This can be useful if you are sharing that storage
with other repositories whose refs you cannot see.
For instance, if you do:
    $ git clone -s parent child
    $ git -C parent config extensions.preciousObjects true
    $ git -C parent config core.repositoryformatversion 1
you now have additional safety when running git in the
parent repository. Prunes and repacks will bail with an
error, and `git gc` will skip those operations (it will
continue to pack refs and do other non-object operations).
Older versions of git, when run in the repository, will
fail on every operation.
Note that we do not set the preciousObjects extension by
default when doing a "clone -s", as doing so breaks
backwards compatibility. It is a decision the user should
make explicitly.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Normally we try to avoid bumps of the whole-repository
core.repositoryformatversion field. However, it is
unavoidable if we want to safely change certain aspects of
git in a backwards-incompatible way (e.g., modifying the set
of ref tips that we must traverse to generate a list of
unreachable, safe-to-prune objects).
If we were to bump the repository version for every such
change, then any implementation understanding version `X`
would also have to understand `X-1`, `X-2`, and so forth,
even though the incompatibilities may be in orthogonal parts
of the system, and there is otherwise no reason we cannot
implement one without the other (or more importantly, that
the user cannot choose to use one feature without the other,
weighing the tradeoff in compatibility only for that
particular feature).
This patch documents the existing repositoryformatversion
strategy and introduces a new format, "1", which lets a
repository specify that it must run with an arbitrary set of
extensions. This can be used, for example:
- to inform git that the objects should not be pruned based
only on the reachability of the ref tips (e.g., because it
has "clone --shared" children)
- that the refs are stored in a format besides the usual
"refs" and "packed-refs" directories
Because we bump to format "1", and because format "1"
requires that a running git knows about any extensions
mentioned, we know that older versions of the code will not
do something dangerous when confronted with these new
formats.
For example, if the user chooses to use database storage for
refs, they may set the "extensions.refbackend" config to
"db". Older versions of git will not understand format "1"
and bail. Versions of git which understand "1" but do not
know about "refbackend", or which know about "refbackend"
but not about the "db" backend, will refuse to run. This is
annoying, of course, but much better than the alternative of
claiming that there are no refs in the repository, or
writing to a location that other implementations will not
read.
Note that we are only defining the rules for format 1 here.
We do not ever write format 1 ourselves; it is a tool that
is meant to be used by users and future extensions to
provide safety with older implementations.
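To make the reader-side rule concrete, here is a simplified sketch; the helper and variable names are made up, while skip_prefix() is git's existing prefix matcher:

    #include "cache.h"

    /*
     * Sketch of the version-1 rule: unknown config keys may be ignored,
     * but an unknown "extensions.*" key must make the reader refuse to
     * operate on the repository.
     */
    static int version, has_unknown_extension;

    static int check_format(const char *key, const char *value, void *cb)
    {
        const char *ext;

        if (!strcmp(key, "core.repositoryformatversion"))
            version = atoi(value);
        else if (skip_prefix(key, "extensions.", &ext) &&
                 strcmp(ext, "preciousobjects"))
            has_unknown_extension = 1;
        return 0;
    }

    /* after parsing: if (version >= 1 && has_unknown_extension) die(...); */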
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The unsigned long option parsing (including 'k'/'m'/'g' suffix
parsing) is more widely applicable. Add support for OPT_MAGNITUDE
to parse-options.h and change pack-objects.c to use this support.
The error behavior on parse errors follows that of OPT_INTEGER. The
name of the option that failed to parse is reported with a brief
message describing the expected format for the option argument and
then the full usage message for the command invoked.
This differs from the previous behavior for OPT_ULONG used in
pack-objects for --max-pack-size and --window-memory which used to
display the value supplied in the error message and did not display
the full usage message.
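Declaring such an option then looks something like this (the variable and help string are illustrative):

    #include "cache.h"
    #include "parse-options.h"

    static unsigned long max_pack_size; /* accepts suffixes such as 100m or 2g */

    static struct option pack_options[] = {
        OPT_MAGNITUDE(0, "max-pack-size", &max_pack_size,
                      N_("maximum size of each output pack file")),
        OPT_END(),
    };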
Signed-off-by: Charles Bailey <cbailey32@bloomberg.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we have a null-terminated array, it would be useful to convert it
or append it to an argv_array for further manipulation.
Implement argv_array_pushv() which will push a null-terminated array of
strings on to an argv_array.
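Sketch of the new call (argv stands for any NULL-terminated array of strings):

    #include "argv-array.h"

    static void forward_args(const char **argv)
    {
        struct argv_array arr = ARGV_ARRAY_INIT;

        argv_array_push(&arr, "fetch");
        argv_array_pushv(&arr, argv); /* append the whole NULL-terminated array */
        /* ... hand arr.argv to run-command or similar ... */
        argv_array_clear(&arr);
    }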
Signed-off-by: Paul Tan <pyokagan@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Certain git commands, such as git-pull, are simply wrappers around other
git commands like git-fetch, git-merge and git-rebase. As such, these
wrapper commands will typically need to "pass through" command-line
options of the commands they wrap.
Implement the parse_opt_passthru_argv() parse-options callback, which
will reconstruct all the provided command-line options into an
argv_array, such that it can be passed to another git command. This is
useful for passing command-line options that can be specified multiple
times.
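For illustration, a wrapper could collect a repeatable option like this (the option name and variables are made up; the macro is assumed to append each reconstructed occurrence to the argv_array):

    #include "cache.h"
    #include "parse-options.h"
    #include "argv-array.h"

    static struct argv_array fetch_passthru = ARGV_ARRAY_INIT;

    static struct option wrapper_options[] = {
        /* every --refmap=<spec> is re-created and appended to fetch_passthru */
        OPT_PASSTHRU_ARGV(0, "refmap", &fetch_passthru, N_("refmap"),
                          N_("specify fetch refmap"), 0),
        OPT_END(),
    };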
Helped-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Paul Tan <pyokagan@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Certain git commands, such as git-pull, are simply wrappers around other
git commands like git-fetch, git-merge and git-rebase. As such, these
wrapper commands will typically need to "pass through" command-line
options of the commands they wrap.
Implement the parse_opt_passthru() parse-options callback, which will
reconstruct the command-line option into a char* string, such that it
can be passed to another git command.
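And a similar sketch for the single-valued form (again illustrative; the macro is assumed to store the reconstructed option string into a char pointer):

    #include "cache.h"
    #include "parse-options.h"

    static char *opt_gpg_sign; /* e.g. "--gpg-sign=<key-id>" after parsing */

    static struct option wrapper_options[] = {
        OPT_PASSTHRU('S', "gpg-sign", &opt_gpg_sign, N_("key-id"),
                      N_("GPG sign commit"), PARSE_OPT_OPTARG),
        OPT_END(),
    };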
Helped-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Helped-by: Junio C Hamano <gitster@pobox.com>
Helped-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Paul Tan <pyokagan@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
for_each_ref() callback functions were taught to name the objects
not with "unsigned char sha1[20]" but with "struct object_id".
* bc/object-id: (56 commits)
struct ref_lock: convert old_sha1 member to object_id
warn_if_dangling_symref(): convert local variable "junk" to object_id
each_ref_fn_adapter(): remove adapter
rev_list_insert_ref(): remove unneeded arguments
rev_list_insert_ref_oid(): new function, taking an object_oid
mark_complete(): remove unneeded arguments
mark_complete_oid(): new function, taking an object_oid
clear_marks(): rewrite to take an object_id argument
mark_complete(): rewrite to take an object_id argument
send_ref(): convert local variable "peeled" to object_id
upload-pack: rewrite functions to take object_id arguments
find_symref(): convert local variable "unused" to object_id
find_symref(): rewrite to take an object_id argument
write_one_ref(): rewrite to take an object_id argument
write_refs_to_temp_dir(): convert local variable sha1 to object_id
submodule: rewrite to take an object_id argument
shallow: rewrite functions to take object_id arguments
handle_one_ref(): rewrite to take an object_id argument
add_info_ref(): rewrite to take an object_id argument
handle_one_reflog(): rewrite to take an object_id argument
...
Introduce <branch>@{push} short-hand to denote the remote-tracking
branch that tracks the branch at the remote the <branch> would be
pushed to.
* jk/at-push-sha1:
for-each-ref: accept "%(push)" format
for-each-ref: use skip_prefix instead of starts_with
sha1_name: implement @{push} shorthand
sha1_name: refactor interpret_upstream_mark
sha1_name: refactor upstream_mark
remote.c: add branch_get_push
remote.c: return upstream name from stat_tracking_info
remote.c: untangle error logic in branch_get_upstream
remote.c: report specific errors from branch_get_upstream
remote.c: introduce branch_get_upstream helper
remote.c: hoist read_config into remote_get_1
remote.c: provide per-branch pushremote name
remote.c: hoist branch.*.remote lookup out of remote_get_1
remote.c: drop "remote" pointer from "struct branch"
remote.c: refactor setup of branch->merge list
remote.c: drop default_remote_name variable
"git upload-pack" that serves "git fetch" can be told to serve
commits that are not at the tip of any ref, as long as they are
reachable from a ref, with uploadpack.allowReachableSHA1InWant
configuration variable.
* fm/fetch-raw-sha1:
upload-pack: optionally allow fetching reachable sha1
upload-pack: prepare to extend allow-tip-sha1-in-want
config.txt: clarify allowTipSHA1InWant with camelCase
Teach the index to optionally remember already seen untracked files
to speed up "git status" in a working tree with tons of cruft.
* nd/untracked-cache: (24 commits)
git-status.txt: advertisement for untracked cache
untracked cache: guard and disable on system changes
mingw32: add uname()
t7063: tests for untracked cache
update-index: test the system before enabling untracked cache
update-index: manually enable or disable untracked cache
status: enable untracked cache
untracked-cache: temporarily disable with $GIT_DISABLE_UNTRACKED_CACHE
untracked cache: mark index dirty if untracked cache is updated
untracked cache: print stats with $GIT_TRACE_UNTRACKED_STATS
untracked cache: avoid racy timestamps
read-cache.c: split racy stat test to a separate function
untracked cache: invalidate at index addition or removal
untracked cache: load from UNTR index extension
untracked cache: save to an index extension
ewah: add convenient wrapper ewah_serialize_strbuf()
untracked cache: don't open non-existent .gitignore
untracked cache: mark what dirs should be recursed/saved
untracked cache: record/validate dir mtime and reuse cached output
untracked cache: make a wrapper around {open,read,close}dir()
...
Change typedef each_ref_fn to take a "const struct object_id *oid"
parameter instead of "const unsigned char *sha1".
To aid this transition, implement an adapter that can be used to wrap
old-style functions matching the old typedef (which is now called
"each_ref_sha1_fn"), and make such functions callable via the new
interface. This requires the old function and its cb_data to be
wrapped in a "struct each_ref_fn_sha1_adapter", and that object to be
used as the cb_data for an adapter function, each_ref_fn_adapter().
This is an enormous diff, but most of it consists of simple,
mechanical changes to the sites that call any of the "for_each_ref"
family of functions. Subsequent to this change, the call sites can be
rewritten one by one to use the new interface.
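A condensed sketch of the wrapping described above; the adapter struct and function are the ones named in this message, while their exact field layout is assumed here:

    #include "cache.h"
    #include "refs.h"

    /* an old-style callback, matching each_ref_sha1_fn */
    static int show_ref_sha1(const char *refname, const unsigned char *sha1,
                             int flags, void *cb_data)
    {
        printf("%s %s\n", sha1_to_hex(sha1), refname);
        return 0;
    }

    static void list_refs(void)
    {
        /* wrap the old function so it can be fed to the new interface */
        struct each_ref_fn_sha1_adapter wrapped =
            { .fn = show_ref_sha1, .cb_data = NULL };

        for_each_ref(each_ref_fn_adapter, &wrapped);
    }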
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
With uploadpack.allowReachableSHA1InWant configuration option set on the
server side, "git fetch" can make a request with a "want" line that names
an object that has not been advertised (likely to have been obtained out
of band or from a submodule pointer). Only objects reachable from the
branch tips, i.e. the union of advertised branches and branches hidden by
transfer.hideRefs, will be processed. Note that there is an associated
cost of having to walk back the history to check the reachability.
This feature can be used when obtaining the content of a certain commit,
for which the sha1 is known, without the need of cloning the whole
repository, especially if a shallow fetch is used. Useful cases are e.g.
repositories containing large files in the history, fetching only the
needed data for a submodule checkout, when sharing a sha1 without telling
which exact branch it belongs to, and in Gerrit, if you think in terms of
commits instead of change numbers. (The Gerrit case has already been
solved through allowTipSHA1InWant as every Gerrit change has a ref.)
Signed-off-by: Fredrik Medley <fredrik.medley@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fix remaining instances where "pack-file" is used instead of
"packfile". Some places remain where we still use "pack-file",
This is the case when we explicitly refer to a file with a
".pack" extension as opposed to a data source providing a pack
data stream.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we create each branch struct, we fill in the
"remote_name" field from the config, and then fill in the
actual "remote" field (with a "struct remote") based on that
name. However, it turns out that nobody really cares about
the latter field. The only two sites that access it at all
are:
1. git-merge, which uses it to notice when the branch does
not have a remote defined. But we can easily replace this
with looking at remote_name instead.
2. remote.c itself, when setting up the @{upstream} merge
config. But we don't need to save the "remote" in the
"struct branch" for that; we can just look it up for
the duration of the operation.
So there is no need to have both fields; they are redundant
with each other (the struct remote contains the name, or you
can look up the struct from the name). It would be nice to
simplify this, especially as we are going to add matching
pushremote config in a future patch (and it would be nice to
keep them consistent).
So which one do we keep and which one do we get rid of?
If we had a lot of callers accessing the struct, it would be
more efficient to keep it (since you have to do a lookup to
go from the name to the struct, but not vice versa). But we
don't have a lot of callers; we have exactly one, so
efficiency doesn't matter. We can decide this based on
simplicity and readability.
And the meaning of the struct value is somewhat unclear. Is
it always the remote matching remote_name? If remote_name is
NULL (i.e., no per-branch config), does the struct fall back
to the "origin" remote, or is it also NULL? These questions
will get even more tricky with pushremotes, whose fallback
behavior is more complicated. So let's just store the name,
which pretty clearly represents the branch.*.remote config.
Any lookup or fallback behavior can then be implemented in
helper functions.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If the user enables the untracked cache and then
- moves the worktree to an unsupported filesystem,
- or simply upgrades the OS,
- or moves the whole (portable) disk from one machine to another,
- or accesses a shared fs from another machine,
there's no guarantee that the untracked cache can still function properly.
Record the worktree location and OS footprint in the cache. If it
changes, err on the safe side and disable the cache. The user can
'update-index --untracked-cache' again to make sure all conditions are
met.
This adds a new requirement that setup_git_directory* must be called
before read_cache(), because we need the worktree location by then;
otherwise the cache is dropped.
This change does not cover all bases; you can fool it if you try
hard. The point is to stop accidents.
Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Helped-by: brian m. carlson <sandals@crustytoothpaste.net>
Helped-by: Torsten Bögershausen <tboegi@web.de>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Helped-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The strbuf API was explained between the API documentation and in
the header file. Move missing bits to strbuf.h so that programmers
can check only one place for all necessary information.
* jk/strbuf-doc-to-header:
strbuf.h: group documentation for trim functions
strbuf.h: drop boilerplate descriptions of strbuf_split_*
strbuf.h: reorganize api function grouping headers
strbuf.h: format asciidoc code blocks as 4-space indent
strbuf.h: drop asciidoc list formatting from API docs
strbuf.h: unify documentation comments beginnings
strbuf.h: integrate api-strbuf.txt documentation
The error handling functions and conventions are now documented in
the API manual.
* jn/doc-api-errors:
doc: document error handling functions and conventions
"git push" has been taught a "--atomic" option that makes push to
update more than one ref an "all-or-none" affair.
* sb/atomic-push:
Document receive.advertiseatomic
t5543-atomic-push.sh: add basic tests for atomic pushes
push.c: add an --atomic argument
send-pack.c: add --atomic command line argument
send-pack: rename ref_update_to_be_sent to check_to_send_update
receive-pack.c: negotiate atomic push support
receive-pack.c: add execute_commands_atomic function
receive-pack.c: move transaction handling in a central place
receive-pack.c: move iterating over all commands outside execute_commands
receive-pack.c: die instead of error in case of possible future bug
receive-pack.c: shorten the execute_commands loop over all commands
Some of strbuf is documented as comments above functions,
and some is separate in Documentation/technical/api-strbuf.txt.
This makes it annoying to find the appropriate documentation.
We'd rather have it all in one place, which means all in the
text document, or all in the header.
Let's choose the header as that place. Even though the
formatting is not quite as pretty, this keeps the
documentation close to the related code. The hope is that
this makes it easier to find what you want (human-readable
comments are right next to the C declarations), and easier
for writers to keep the documentation up to date.
This is more or less a straight import of the text from
api-strbuf.txt into C comments, complete with asciidoc
formatting. The exceptions are:
1. All comments created in this way are started with "/**"
to indicate they are part of the API documentation. This
may help later with extracting the text to pretty-print
it.
2. Function descriptions do not repeat the function name,
as it is available in the context directly below. So:
    `strbuf_add`::
        Add data of given length to the buffer.

from api-strbuf.txt becomes:

    /**
     * Add data of given length to the buffer.
     */
    void strbuf_add(struct strbuf *sb, const void *, size_t);
As a result, any block-continuation required in asciidoc
for that list item was dropped in favor of straight
blank-line paragraph (since it is not necessary when we
are not in a list item).
3. There is minor re-wording to integrate existing comments
and api-strbuf text. In each case, I took whichever
version was more descriptive, and eliminated any
redundancies. In one case, for strbuf_addstr, the api
documentation gave its inline definition; I eliminated
this as redundant with the actual definition, which can
be seen directly below the comment.
4. The functions in the header file are re-ordered to match
the ordering of the API documentation, under the
assumption that more thought went into the grouping
there.
Helped-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This adds the atomic protocol option to allow
receive-pack to inform the client that it has
atomic push capability.
This commit makes the functionality introduced
in the previous commits go live for the serving
side. The changes in documentation reflect the
protocol capabilities of the server.
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Credential helpers are asked in turn until one of them gives a
positive response, which is cumbersome to turn off when you need to
run Git in an automated setting. The credential helper interface
learned to allow a helper to say "stop, don't ask other helpers."
Also, the GIT_TERMINAL_PROMPT environment variable can be set to false
to disable our built-in prompt mechanism for passwords.
* jk/credential-quit:
prompt: respect GIT_TERMINAL_PROMPT to disable terminal prompts
credential: let helpers tell us to quit
When we are trying to fill a credential, we loop over the
set of defined credential-helpers, then fall back to running
askpass, and then finally prompt on the terminal. Helpers
which cannot find a credential are free to tell us nothing,
but they cannot currently ask us to stop prompting.
This patch lets them provide a "quit" attribute, which asks
us to stop the process entirely (avoiding running more
helpers, as well as the askpass/terminal prompt).
This has a few possible uses:
1. A helper which prompts the user itself (e.g., in a
dialog) can provide a "cancel" button to the user to
stop further prompts.
2. Some helpers may know that prompting cannot possibly
work. For example, if their role is to broker a ticket
from an external auth system and that auth system
cannot be contacted, there is no point in continuing
(we need a ticket to authenticate, and the user cannot
provide one by typing it in).
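As an illustration of case 2, a helper could answer a "get" request like this; the helper and its availability check are hypothetical, and only the "quit" attribute comes from this patch:

    #include <stdio.h>

    /* hypothetical probe of the external auth/ticket system */
    static int broker_reachable(void)
    {
        return 0; /* pretend it is down */
    }

    int main(void)
    {
        if (!broker_reachable()) {
            /* stop git from trying other helpers or prompting */
            printf("quit=1\n");
            return 0;
        }
        /* otherwise print username=... and password=... as usual */
        return 0;
    }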
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The new name is more consistent with the names of other
string_list-related functions.
Suggested-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In addition to fixing trivial and obvious typos, be careful about
the following points:
- Spell ASCII, URL and CRC in ALL CAPS;
- Spell Linux as Capitalized;
- Do not omit periods in "i.e." and "e.g.".
Signed-off-by: Thomas Ackermann <th.acker@arcor.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Similar to args, add a struct argv_array member to struct child_process
that simplifies specifying the environment for children. It is freed
automatically by finish_command() or if start_command() encounters an
error.
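For illustration, setting up a child's environment then becomes as simple as this (the command and variable are arbitrary examples):

    #include "run-command.h"

    static int run_hook_with_env(const char *git_dir)
    {
        struct child_process cp = CHILD_PROCESS_INIT;

        argv_array_push(&cp.args, "hooks/post-update");
        argv_array_pushf(&cp.env_array, "GIT_DIR=%s", git_dir);

        /* env_array is released by finish_command()/run_command() */
        return run_command(&cp);
    }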
Suggested-by: Jeff King <peff@peff.net>
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* mh/lockfile-stdio:
commit_packed_refs(): reimplement using fdopen_lock_file()
dump_marks(): reimplement using fdopen_lock_file()
fdopen_lock_file(): access a lockfile using stdio
The lockfile API and its users have been cleaned up.
* mh/lockfile: (38 commits)
lockfile.h: extract new header file for the functions in lockfile.c
hold_locked_index(): move from lockfile.c to read-cache.c
hold_lock_file_for_append(): restore errno before returning
get_locked_file_path(): new function
lockfile.c: rename static functions
lockfile: rename LOCK_NODEREF to LOCK_NO_DEREF
commit_lock_file_to(): refactor a helper out of commit_lock_file()
trim_last_path_component(): replace last_path_elm()
resolve_symlink(): take a strbuf parameter
resolve_symlink(): use a strbuf for internal scratch space
lockfile: change lock_file::filename into a strbuf
commit_lock_file(): use a strbuf to manage temporary space
try_merge_strategy(): use a statically-allocated lock_file object
try_merge_strategy(): remove redundant lock_file allocation
struct lock_file: declare some fields volatile
lockfile: avoid transitory invalid states
git_config_set_multivar_in_file(): avoid call to rollback_lock_file()
dump_marks(): remove a redundant call to rollback_lock_file()
api-lockfile: document edge cases
commit_lock_file(): rollback lock file on failure to rename
...
Allow "git push" request to be signed, so that it can be verified and
audited, using the GPG signature of the person who pushed, that the
tips of branches at a public repository really point the commits
the pusher wanted to, without having to "trust" the server.
* jc/push-cert: (24 commits)
receive-pack::hmac_sha1(): copy the entire SHA-1 hash out
signed push: allow stale nonce in stateless mode
signed push: teach smart-HTTP to pass "git push --signed" around
signed push: fortify against replay attacks
signed push: add "pushee" header to push certificate
signed push: remove duplicated protocol info
send-pack: send feature request on push-cert packet
receive-pack: GPG-validate push certificates
push: the beginning of "git push --signed"
pack-protocol doc: typofix for PKT-LINE
gpg-interface: move parse_signature() to where it should be
gpg-interface: move parse_gpg_output() to where it should be
send-pack: clarify that cmds_sent is a boolean
send-pack: refactor inspecting and resetting status and sending commands
send-pack: rename "new_refs" to "need_pack_data"
receive-pack: factor out capability string generation
send-pack: factor out capability string generation
send-pack: always send capabilities
send-pack: refactor decision to send update per ref
send-pack: move REF_STATUS_REJECT_NODELETE logic a bit higher
...
Add a new function, fdopen_lock_file(), which returns a FILE pointer
open to the lockfile. If a stream is open on a lock_file object, it is
closed using fclose() on commit, rollback, or close_lock_file().
This change will allow callers to use stdio to write to a lockfile
without having to muck around in the internal representation of the
lock_file object (callers will be rewritten in upcoming commits).
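A minimal sketch of the stdio pattern this enables (error handling abbreviated; flag values and return conventions as assumed here):

    #include "cache.h"

    static struct lock_file lk;

    static int rewrite_file(const char *path)
    {
        FILE *fp;

        if (hold_lock_file_for_update(&lk, path, 0) < 0)
            return -1;
        fp = fdopen_lock_file(&lk, "w");
        if (!fp) {
            rollback_lock_file(&lk);
            return -1;
        }
        fprintf(fp, "# written through the lockfile's stdio stream\n");
        /* commit_lock_file() fclose()s the stream before renaming */
        return commit_lock_file(&lk);
    }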
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a function to return the path of the file that is locked by a
lock_file object. This reduces the knowledge that callers have to have
about the lock_file layout.
Suggested-by: Ronnie Sahlberg <sahlberg@google.com>
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This makes it harder to misread the name as LOCK_NODE_REF.
Suggested-by: Torsten Bögershausen <tboegi@web.de>
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
commit_locked_index(), when writing to an alternate index file,
duplicates (poorly) the code in commit_lock_file(). And anyway, it
shouldn't have to know so much about the internal workings of lockfile
objects. So extract a new function commit_lock_file_to() that does the
work common to the two functions, and call it from both
commit_lock_file() and commit_locked_index().
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* Document the behavior of commit_lock_file() when it fails, namely
that it rolls back the lock_file object and sets errno
appropriately.
* Document the behavior of rollback_lock_file() when called for a
lock_file object that has already been committed or rolled back,
namely that it is a NOOP.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If closing an open lockfile fails, then we cannot be sure of the
contents of the lockfile, so there is nothing sensible to do but
delete it. This change also ensures that the lock_file object is left
in a defined state in this error path (namely, unlocked).
The only caller that is ultimately affected by this change is
try_merge_strategy() -> write_locked_index(), which can call
close_lock_file() via various execution paths. This caller uses a
static lock_file object which previously could have been reused after
a failed close_lock_file() even though it was still in locked state.
This change causes the lock_file object to be unlocked on failure,
thus fixing this error-handling path.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It was previously a bug to call commit_lock_file() with a lock_file
object that was not active (an illegal access would happen within the
function). It was presumably never done, but this would be an easy
programming error to overlook. So before continuing, do a consistency
check that the lock_file object really is locked.
Helped-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Document a couple more functions and the flags argument as used by
hold_lock_file_for_update() and hold_lock_file_for_append().
Reorganize the document to make it more accessible.
Helped-by: Jonathan Nieder <jrnieder@gmail.com>
Helped-by: Junio Hamano <gitster@pobox.com>
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The macro ALLOC_GROW manages several aspects of dynamic memory
allocations for arrays: It performs overprovisioning in order to avoid
reallocations in future calls, updates the allocation size variable,
multiplies the item size and thus allows users to simply specify the
item count, performs the reallocation and updates the array pointer.
Sometimes this is too much. Add the macro REALLOC_ARRAY, which only
takes care of the latter three points and allows users to specify the
number of items the array can store. It can increase and also decrease
the size. Using the macro avoids duplicating the variable name and
takes care of the item sizes automatically.
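A small self-contained sketch contrasting the two macros (the names items, nr and alloc are arbitrary):

    #include "cache.h"

    static int *items;
    static size_t nr, alloc;

    static void add_item(int value)
    {
        /* overprovisions and tracks 'alloc' for us */
        ALLOC_GROW(items, nr + 1, alloc);
        items[nr++] = value;
    }

    static void shrink_to_fit(void)
    {
        /* plain resize: no overprovisioning, no extra bookkeeping */
        REALLOC_ARRAY(items, nr);
        alloc = nr;
    }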
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In order to prevent a valid push certificate for pushing into a
repository from getting replayed in a different push operation, send
a nonce string from the receive-pack process and have the signer
include it in the push certificate. The receiving end uses an HMAC
hash of the path to the repository it serves and the current time
stamp, hashed with a secret seed (the secret seed does not have to
be per-repository but can be defined in /etc/gitconfig) to generate
the nonce, in order to ensure that a random third party cannot forge
a nonce that looks like it originated from it.
The original nonce is exported as GIT_PUSH_CERT_NONCE for the hooks
to examine and match against the value on the "nonce" header in the
certificate to notice a replay, but the returned "nonce" header in the
push certificate is also examined by receive-pack, and the result is
exported as GIT_PUSH_CERT_NONCE_STATUS, whose value would be "OK"
if the nonce recorded in the certificate matches what we expect, so
that the hooks can check for a replay more easily.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Record the URL of the intended recipient for a push (after
anonymizing it if it has authentication material) on a new "pushee
URL" header. Because the networking configuration (SSH-tunnels,
proxies, etc.) on the pushing user's side varies, the receiving
repository may not know the single canonical URL all the pushing
users would refer to it by (besides, many sites allow pushing over
ssh://host/path and https://host/path protocols to the same
repository but with different local part of the path). So this
value may not be reliably used for replay-attack prevention
purposes, but this will still serve as a human readable hint to
identify the repository the certificate refers to.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
With the interim protocol, we used to send the update commands even
though we already send a signed copy of the same information when
push certificate is in use. Update the send-pack/receive-pack pair
not to do so.
The notable thing on the receive-pack side is that it makes sure
that there is no command sent over the traditional protocol packet
outside the push certificate. Otherwise a pusher can claim to be
pushing one set of ref updates in the signed certificate while
issuing commands to update unrelated refs, and such an update will
evade later audits.
Finally, start documenting the protocol.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Everywhere else we use PKT-LINE to denote the pkt-line formatted
data, but "shallow/deepen" messages are described with PKT_LINE().
Fix them.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Use the new caching config-set API in git_config() calls.
* ta/config-set-1:
add tests for `git_config_get_string_const()`
add a test for semantic errors in config files
rewrite git_config() to use the config-set API
config: add `git_die_config()` to the config-set API
change `git_config()` return value to void
add line number and file name info to `config_set`
config.c: fix accuracy of line number in errors
config.c: mark error and warnings strings for translation
Move strbuf_addchars() to strbuf.c, where it belongs, and make it
available for other callers.
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Reduce the use of fixed sized buffer passed to getcwd() calls
by introducing xgetcwd() helper.
* rs/strbuf-getcwd:
use strbuf_add_absolute_path() to add absolute paths
abspath: convert absolute_path() to strbuf
use xgetcwd() to set $GIT_DIR
use xgetcwd() to get the current directory or die
wrapper: add xgetcwd()
abspath: convert real_path_internal() to strbuf
abspath: use strbuf_getcwd() to remember original working directory
setup: convert setup_git_directory_gently_1 et al. to strbuf
unix-sockets: use strbuf_getcwd()
strbuf: add strbuf_getcwd()
Add in-core caching layer to let us avoid reading the same
configuration files number of times.
* ta/config-set:
test-config: add tests for the config_set API
add `config_set` API for caching config-like files
The implementation sends an LF, but the protocol documentation was
missing this detail.
Signed-off-by: Shawn Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Move most of the code of absolute_path() into the new function
strbuf_add_absolute_path() and in the process transform it to use
struct strbuf and xgetcwd() instead of a PATH_MAX-sized buffer,
which can be too small on some file systems.
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a helper function for initializing those struct child_process
variables for which the macro CHILD_PROCESS_INIT can't be used.
Suggested-by: Jeff King <peff@peff.net>
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Most struct child_process variables are cleared using memset first after
declaration. Provide a macro, CHILD_PROCESS_INIT, that can be used to
initialize them statically instead. That's shorter, doesn't require a
function call and is slightly more readable (especially given that we
already have STRBUF_INIT, ARGV_ARRAY_INIT etc.).
Helped-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add `git_die_config` that dies printing the line number and the file name
of the highest priority value for the configuration variable `key`. A custom
error message is also printed before dying, specified by the caller, which can
be skipped if `err` argument is set to NULL.
It has usage in non-callback based config value retrieval where we can
raise an error and die if there is a semantic error.
For example,
    if (!git_config_get_value(key, &value)) {
        if (!strcmp(value, "foo"))
            git_die_config(key, "value: `%s` is illegal", value);
        else
            /* do work */
    }
Signed-off-by: Tanay Abhra <tanayabh@gmail.com>
Reviewed-by: Matthieu Moy <Matthieu.Moy@imag.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Replace TODO introduced in commit 9c3c22 with documentation
explaining Git config API functions for writing configuration
files.
Signed-off-by: Tanay Abhra <tanayabh@gmail.com>
Reviewed-by: Matthieu Moy <Matthieu.Moy@grenoble-inp.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Currently `git_config()` uses a callback mechanism and file rereads for
config values. Due to this approach, it is not uncommon for the config
files to be parsed several times during the run of a git program, with
different callbacks picking out different variables useful to themselves.
Add a `config_set`, that can be used to construct an in-memory cache for
config-like files that the caller specifies (i.e., files like `.gitmodules`,
`~/.gitconfig` etc.). Add two external functions `git_configset_get_value`
and `git_configset_get_value_multi` for querying from the config sets.
`git_configset_get_value` follows `last one wins` semantics (i.e. if there
are multiple matches for the queried key in the files of the configset the
value returned will be the last entry in `value_list`).
`git_configset_get_value_multi` returns a list of values sorted in order of
increasing priority (i.e. last match will be at the end of the list). Add
type specific query functions like `git_configset_get_bool` and similar.
Add a default `config_set`, `the_config_set` to cache all key-value pairs
read from usual config files (repo specific .git/config, user wide
~/.gitconfig, XDG config and the global /etc/gitconfig). `the_config_set`
is populated using `git_config()`.
Add two external functions `git_config_get_value` and
`git_config_get_value_multi` for querying in a non-callback manner from
`the_config_set`. Also, add type specific query functions that are
implemented as a thin wrapper around the `config_set` API.
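A hedged sketch of both query styles described above (the keys, file name and exact signatures are illustrative assumptions):

    #include "cache.h"

    static void show_config_values(void)
    {
        const char *value;
        struct config_set gm;

        /* non-callback query against the default config_set */
        if (!git_config_get_value("core.editor", &value))
            printf("editor: %s\n", value);

        /* a dedicated config_set over a config-like file */
        git_configset_init(&gm);
        git_configset_add_file(&gm, ".gitmodules");
        if (!git_configset_get_value(&gm, "submodule.foo.path", &value))
            printf("path of 'foo': %s\n", value);
        git_configset_clear(&gm);
    }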
Signed-off-by: Matthieu Moy <Matthieu.Moy@imag.fr>
Signed-off-by: Tanay Abhra <tanayabh@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add strbuf_getcwd(), which puts the current working directory into a
strbuf. Because it doesn't use a fixed-size buffer it supports
arbitrarily long paths, provided the platform's getcwd() does as well.
At least on Linux and FreeBSD it handles paths longer than PATH_MAX
just fine.
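Typical use, sketched:

    #include "cache.h"

    static char *get_cwd_or_die(void)
    {
        struct strbuf cwd = STRBUF_INIT;

        if (strbuf_getcwd(&cwd))
            die_errno("cannot determine the current working directory");
        return strbuf_detach(&cwd, NULL); /* caller owns and frees the result */
    }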
Suggested-by: Karsten Blees <karsten.blees@gmail.com>
Helped-by: Duy Nguyen <pclouds@gmail.com>
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* kb/perf-trace:
api-trace.txt: add trace API documentation
progress: simplify performance measurement by using getnanotime()
wt-status: simplify performance measurement by using getnanotime()
git: add performance tracing for git's main() function to debug scripts
trace: add trace_performance facility to debug performance issues
trace: add high resolution timer function to debug performance issues
trace: add 'file:line' to all trace output
trace: move code around, in preparation to file:line output
trace: add current timestamp to all trace output
trace: disable additional trace output for unit tests
trace: add infrastructure to augment trace output with additional info
sha1_file: change GIT_TRACE_PACK_ACCESS logging to use trace API
Documentation/git.txt: improve documentation of 'GIT_TRACE*' variables
trace: improve trace performance
trace: remove redundant printf format attribute
trace: consistently name the format parameter
trace: move trace declarations from cache.h to new trace.h
* kb/hashmap-updates:
hashmap: add string interning API
hashmap: add simplified hashmap_get_from_hash() API
hashmap: improve struct hashmap member documentation
hashmap: factor out getting a hash code from a SHA1
The string-list API has STRING_LIST_INIT_* macros to be used
to define variables with initializers, but lacks functions
to initialize an uninitialized piece of memory to be used as
a string-list at run time.
Introduce `string_list_init()` function for that.
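A minimal sketch of the run-time initialization this enables (assuming the second argument selects whether the list strdup()s the strings added to it):

    #include "cache.h"
    #include "string-list.h"

    static struct string_list *new_string_list(void)
    {
        struct string_list *list = xmalloc(sizeof(*list));

        string_list_init(list, 1); /* 1: list owns copies of added strings */
        return list;
    }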
Signed-off-by: Tanay Abhra <tanayabh@gmail.com>
Reviewed-by: Matthieu Moy <Matthieu.Moy@imag.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
An experiment to use two files (the base file and incremental
changes relative to it) to represent the index to reduce I/O cost
of rewriting a large index when only small part of the working tree
changes.
* nd/split-index: (32 commits)
t1700: new tests for split-index mode
t2104: make sure split index mode is off for the version test
read-cache: force split index mode with GIT_TEST_SPLIT_INDEX
read-tree: note about dropping split-index mode or index version
read-tree: force split-index mode off on --index-output
rev-parse: add --shared-index-path to get shared index path
update-index --split-index: do not split if $GIT_DIR is read only
update-index: new options to enable/disable split index mode
split-index: strip pathname of on-disk replaced entries
split-index: do not invalidate cache-tree at read time
split-index: the reading part
split-index: the writing part
read-cache: mark updated entries for split index
read-cache: save deleted entries in split index
read-cache: mark new entries for split index
read-cache: split-index mode
read-cache: save index SHA-1 after reading
entry.c: update cache_changed if refresh_cache is set in checkout_entry()
cache-tree: mark istate->cache_changed on prime_cache_tree()
cache-tree: mark istate->cache_changed on cache tree update
...
Interning short strings with high probability of duplicates can reduce the
memory footprint and speed up comparisons.
Add strintern() and memintern() APIs that use a hashmap to manage the pool
of unique, interned strings.
Note: strintern(getenv()) could be used to sanitize git's use of getenv(),
in case we ever encounter a platform where a call to getenv() invalidates
previous getenv() results (which is allowed by POSIX).
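A short sketch of the intended use (the printing is only for demonstration):

    #include "cache.h"
    #include "hashmap.h"

    static void compare_interned(const char *a, const char *b)
    {
        const char *ia = strintern(a);
        const char *ib = strintern(b);

        /* equal contents share one interned allocation, so compare pointers */
        if (ia == ib)
            printf("'%s' and '%s' are the same string\n", a, b);
    }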
Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Hashmap entries are typically looked up by just a key. The hashmap_get()
API expects an initialized entry structure instead, to support compound
keys. This flexibility is currently only needed by find_dir_entry() in
name-hash.c (and compat/win32/fscache.c in the msysgit fork). All other
(currently five) call sites of hashmap_get() have to set up a nearly
empty entry structure, resulting in duplicate code like this:

    struct hashmap_entry keyentry;
    hashmap_entry_init(&keyentry, hash(key));
    return hashmap_get(map, &keyentry, key);

Add a hashmap_get_from_hash() API that allows hashmap lookups by just
specifying the key and its hash code, i.e.:

    return hashmap_get_from_hash(map, hash(key), key);
Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Copying the first bytes of a SHA1 is duplicated in six places,
however, the implication (that the actual value depends on the
endianness of the platform) is documented only once.
Add a properly documented API for this.
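The shape of such an API, as a sketch; the name sha1hash() is assumed here, and the point is that the endianness caveat now lives in one documented place:

    #include <string.h>

    /*
     * Derive a hash code from the first bytes of a SHA-1.  The result
     * differs between big- and little-endian platforms, so it must never
     * be written to disk or sent over the wire.
     */
    static inline unsigned int sha1hash(const unsigned char *sha1)
    {
        unsigned int hash;

        memcpy(&hash, sha1, sizeof(hash));
        return hash;
    }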
Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Propagate the error messages from the webserver better to the
client coming over the HTTP transport.
* jk/http-errors:
http: default text charset to iso-8859-1
remote-curl: reencode http error messages
strbuf: add strbuf_reencode helper
http: optionally extract charset parameter from content-type
http: extract type/subtype portion of content-type
t5550: test display of remote http error messages
t/lib-httpd: use write_script to copy CGI scripts
test-lib: preserve GIT_CURL_VERBOSE from the environment
* jk/argv-array-for-child-process:
argv-array: drop "detach" code
get_importer: use run-command's internal argv_array
get_exporter: use argv_array
get_helper: use run-command's internal argv_array
git_connect: use argv_array
run_column_filter: use argv_array
run-command: store an optional argv_array
This split-index mode is designed to keep write cost proportional to
the number of changes the user has made, not the size of the work
tree. (Read cost is another matter, to be dealt with separately.)
This mode stores index info in a pair of $GIT_DIR/index and
$GIT_DIR/sharedindex.<SHA-1>. sharedindex is large and unchanged over
time while "index" is smaller and updated often. Format details are in
index-format.txt, although not everything is implemented in this
patch.
Shared indexes are not automatically removed, because it's unclear if
the shared index is needed by any (even temporary) indexes by just
looking at it. After a while you'll collect stale shared indexes. The
good news is that one shared index stays usable for a long time, until
$GIT_DIR/index becomes so big and sluggish that a new shared index
must be created.
The safest way to clean shared indexes is to turn off split index
mode, so shared files are all garbage, delete them all, then turn on
split index mode again.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fixed some minor typos in api-strbuf.txt: 'A' instead of 'An', 'have'
instead of 'has', an overlong line, and 'another' instead of 'an other'.
Signed-off-by: Jeremiah Mahler <jmmahler@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
STRING_LIST_INIT_{NODUP,DUP} initializers list values only
for earlier structure members, relying on the usual
convention in C that the omitted members are initialized to
0, i.e. the former is expanded to the latter:
    struct string_list l = STRING_LIST_INIT_DUP;
    struct string_list l = { NULL, 0, 0, 1 };
and the last member that is not mentioned (i.e. 'cmp') is
initialized to NULL.
While there is nothing wrong in this construct, spelling out
all the values where the macros are defined will serve also
as a documentation, so let's do so.
Signed-off-by: Tanay Abhra <tanayabh@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is a convenience wrapper around `reencode_string_len`
and `strbuf_attach`.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is a convenience wrapper to call tolower on each
character of the string.
This makes config's lowercase() function obsolete, though
note that because we have a strbuf, we are careful to
operate over the whole strbuf, rather than assuming that a
NUL is the end-of-string.
We could continue to offer a pure-string lowercase, but
there would be no callers (in most pure-string cases, we
actually duplicate and lowercase the duplicate, for which we
have the xstrdup_tolower wrapper).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The highlighting was pretty, but unfortunately, the failure mode
when source-highlight is not installed was that the entire code
block disappeared.
See https://bugs.debian.org/745591,
https://bugs.launchpad.net/bugs/1316810.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The argv_array_detach function (and associated free() function) was
really only useful for transferring ownership of the memory to a "struct
child_process". Now that we have an internal argv_array in that struct,
there are no callers left.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
All child_process structs need to point to an argv. For
flexibility, we do not mandate the use of a dynamic
argv_array. However, because the child_process does not own
the memory, this can make memory management with a
separate argv_array difficult.
For example, if a function calls start_command but not
finish_command, the argv memory must persist. The code needs
to arrange to clean up the argv_array separately after
finish_command runs. As a result, some of our code in this
situation just leaks the memory.
To help such cases, this patch adds a built-in argv_array to
the child_process, which gets cleaned up automatically (both
in finish_command and when start_command fails). Callers
may use it if they choose, but can continue to use the raw
argv if they wish.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
API documentation for strbuf does not document strbuf_trim() or
strbuf_ltrim(). Add documentation for these two functions.
Signed-off-by: Brian Gesiak <modocache@gmail.com>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We only said what happens when we find the Git directory under
RUN_SETUP, without saying what happens otherwise.
Signed-off-by: David Turner <dturner@twitter.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Eradicate mistaken use of "nor" (that is, essentially "nor" used
not in "neither A nor B" ;-)) from in-code comments, command output
strings, and documentations.
* jl/nor-or-nand-and:
code and test: fix misuses of "nor"
comments: fix misuses of "nor"
contrib: fix misuses of "nor"
Documentation: fix misuses of "nor"
OPT_SET_PTR() implementation was broken on IL32P64 platforms;
it turns out that the macro is not used by any real user.
* mr/opt-set-ptr:
parse-options: remove unused OPT_SET_PTR
parse-options: add cast to correct pointer type to OPT_SET_PTR
MSVC: fix t0040-parse-options crash
Attempting to deepen a shallow repository by fetching over smart
HTTP transport failed in the protocol exchange when the no-done
extension was used. The fetching side waited for the list of
shallow boundary commits after the sending end stopped talking to
it.
* nd/http-fetch-shallow-fix:
t5537: move http tests out to t5539
fetch-pack: fix deepen shallow over smart http with no-done cap
protocol-capabilities.txt: document no-done
protocol-capabilities.txt: refer multi_ack_detailed back to pack-protocol.txt
pack-protocol.txt: clarify 'obj-id' in the last ACK after 'done'
test: rename http fetch and push test files
Borrow the bitmap index into packfiles from JGit to speed up
enumeration of objects involved in a commit range without having to
fully traverse the history.
* jk/pack-bitmap: (26 commits)
ewah: unconditionally ntohll ewah data
ewah: support platforms that require aligned reads
read-cache: use get_be32 instead of hand-rolled ntoh_l
block-sha1: factor out get_be and put_be wrappers
do not discard revindex when re-preparing packfiles
pack-bitmap: implement optional name_hash cache
t/perf: add tests for pack bitmaps
t: add basic bitmap functionality tests
count-objects: recognize .bitmap in garbage-checking
repack: consider bitmaps when performing repacks
repack: handle optional files created by pack-objects
repack: turn exts array into array-of-struct
repack: stop using magic number for ARRAY_SIZE(exts)
pack-objects: implement bitmap writing
rev-list: add bitmap mode to speed up object lists
pack-objects: use bitmaps when packing objects
pack-objects: split add_object_entry
pack-bitmap: add support for bitmap indexes
documentation: add documentation for the bitmap format
ewah: compressed bitmap implementation
...
Improvements to our hash table to meet the needs of the msysgit
fscache project, with some nice performance gains.
* kb/fast-hashmap:
name-hash: retire unused index_name_exists()
hashmap.h: use 'unsigned int' for hash-codes everywhere
test-hashmap.c: drop unnecessary #includes
.gitignore: test-hashmap is a generated file
read-cache.c: fix memory leaks caused by removed cache entries
builtin/update-index.c: cleanup update_one
fix 'git update-index --verbose --again' output
remove old hash.[ch] implementation
name-hash.c: remove cache entries instead of marking them CE_UNHASHED
name-hash.c: use new hash map implementation for cache entries
name-hash.c: remove unreferenced directory entries
name-hash.c: use new hash map implementation for directories
diffcore-rename.c: use new hash map implementation
diffcore-rename.c: simplify finding exact renames
diffcore-rename.c: move code around to prepare for the next patch
buitin/describe.c: use new hash map implementation
add a hashtable implementation that supports O(1) removal
submodule: don't access the .gitmodules cache entry after removing it
It turns out that putting 'link:' before the 'http' is actually superfluous
in AsciiDoc, as there's already a predefined macro to handle it.
"http, https, [etc] URLs are rendered using predefined inline macros."
http://www.methods.co.nz/asciidoc/userguide.html#_urls
"Hypertext links to files on the local file system are specified
using the link inline macro."
http://www.methods.co.nz/asciidoc/userguide.html#_linking_to_local_documents
Although the extra 'link:' is superfluous, the reference implementation
of AsciiDoc tolerates it and silently removes it, giving a functioning
link in the generated HTML. However, AsciiDoctor (the Ruby implementation
of AsciiDoc used to render the http://git-scm.com/ site) does /not/ have
this behaviour, and so generates broken links, as can be seen here:
http://git-scm.com/docs/git-cvsimport (links to cvs2git & parsecvs)
http://git-scm.com/docs/git-filter-branch (link to The BFG)
It's worth noting that after this change, the HTML generated by 'make html'
in the git project is identical, and all links still work.
Signed-off-by: Roberto Tyley <roberto.tyley@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
See 3e63b21 (upload-pack: Implement no-done capability - 2011-03-14)
and 761ecf0 (fetch-pack: Implement no-done capability - 2011-03-14)
for more information.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
pack-protocol.txt explains in detail how multi_ack_detailed works and
what the difference is between no multi_ack, multi_ack, and
multi_ack_detailed. There is no need to repeat that here.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It was introduced in 1bd8c8f (git-upload-pack: Support the multi_ack
protocol - 2005-10-28) but is probably better documented in the commit
message of 78affc4 (Add multi_ack_detailed capability to
fetch-pack/upload-pack - 2009-10-30).
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
./Documentation/technical/http-protocol.txt was missing from TECH_DOCS
in Makefile. Add it and also improve HTML formatting while still
retaining good readability of the ASCII text:
- Use monospace font instead of italicized or roman font for machine output and source text
- Use roman font for things which should be body text
- Use double quotes consistently for "want" and "have" commands
- Use uppercase "C" / "S" consistently for "client" / "server";
also use "C:" / "S:" instead of "(C)" / "(S)" for consistency and
to avoid having "(C)" formatted as a copyright symbol in HTML
- Use only spaces and not a combination of tabs and spaces for whitespace
Signed-off-by: Thomas Ackermann <th.acker@arcor.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fetching from a shallow-cloned repository used to be forbidden,
primarily because the codepaths involved were not carefully vetted
and we did not bother supporting such usage. This attempts to allow
object transfer out of a shallow-cloned repository in a controlled
way (i.e. the receiver becomes a shallow repository with truncated
history).
* nd/shallow-clone: (31 commits)
t5537: fix incorrect expectation in test case 10
shallow: remove unused code
send-pack.c: mark a file-local function static
git-clone.txt: remove shallow clone limitations
prune: clean .git/shallow after pruning objects
clone: use git protocol for cloning shallow repo locally
send-pack: support pushing from a shallow clone via http
receive-pack: support pushing to a shallow clone via http
smart-http: support shallow fetch/clone
remote-curl: pass ref SHA-1 to fetch-pack as well
send-pack: support pushing to a shallow clone
receive-pack: allow pushes that update .git/shallow
connected.c: add new variant that runs with --shallow-file
add GIT_SHALLOW_FILE to propagate --shallow-file to subprocesses
receive/send-pack: support pushing from a shallow clone
receive-pack: reorder some code in unpack()
fetch: add --update-shallow to accept refs that update .git/shallow
upload-pack: make sure deepening preserves shallow roots
fetch: support fetching from a shallow repository
clone: support remote shallow repository
...
Since 2dce956, is_git_command() has been a bit slow, as it does file
I/O in the call to list_commands_in_dir(). Avoid the file I/O by
adding an early check for the builtin commands.
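A sketch of the shape of the fix (the helpers is_builtin(),
load_command_list() and is_in_cmdlist() are assumed from git's help
machinery; the actual patch may differ):

    int is_git_command(const char *s)
    {
            if (is_builtin(s))      /* table lookup, no file I/O */
                    return 1;

            load_command_list("git-", &main_cmds, &other_cmds);
            return is_in_cmdlist(&main_cmds, s) ||
                    is_in_cmdlist(&other_cmds, s);
    }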
Signed-off-by: Sebastian Schuberth <sschuberth@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we use pack bitmaps rather than walking the object
graph, we end up with the list of objects to include in the
packfile, but we do not know the path at which any tree or
blob objects would be found.
In a recently packed repository, this is fine. A fetch would
use the paths only as a heuristic in the delta compression
phase, and a fully packed repository should not need to do
much delta compression.
As time passes, though, we may acquire more objects on top
of our large bitmapped pack. If clients fetch frequently,
then they never even look at the bitmapped history, and all
works as usual. However, a client who has not fetched since
the last bitmap repack will have "have" tips in the
bitmapped history, but "want" newer objects.
The bitmaps themselves degrade gracefully in this
circumstance. We manually walk the more recent bits of
history, and then use bitmaps when we hit them.
But we would also like to perform delta compression between
the newer objects and the bitmapped objects (both to delta
against what we know the user already has, but also between
"new" and "old" objects that the user is fetching). The lack
of pathnames makes our delta heuristics much less effective.
This patch adds an optional cache of the 32-bit name_hash
values to the end of the bitmap file. If present, a reader
can use it to match bitmapped and non-bitmapped names during
delta compression.
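For context, the 32-bit name_hash here is the path hash that
pack-objects already computes for grouping delta candidates; it looks
roughly like the following sketch (of the pre-existing helper, not
something added by this patch):

    static uint32_t name_hash(const char *name)
    {
            uint32_t c, hash = 0;

            if (!name)
                    return 0;

            /*
             * The last characters count the most, so paths ending in
             * the same filename (e.g. ".c" files) hash close together.
             */
            while ((c = *name++) != 0) {
                    if (isspace(c))
                            continue;
                    hash = (hash >> 2) + (c << 24);
            }
            return hash;
    }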
Here are perf results for p5310:
Test                     origin/master       HEAD^                      HEAD
-----------------------------------------------------------------------------------------------
5310.2: repack to disk   36.81(37.82+1.43)   47.70(48.74+1.41) +29.6%   47.75(48.70+1.51) +29.7%
5310.3: simulated clone  30.78(29.70+2.14)    1.08(0.97+0.10)  -96.5%    1.07(0.94+0.12)  -96.5%
5310.4: simulated fetch   3.16(6.10+0.08)     3.54(10.65+0.06) +12.0%    1.70(3.07+0.06)  -46.2%
5310.6: partial bitmap   36.76(43.19+1.81)    6.71(11.25+0.76) -81.7%    4.08(6.26+0.46)  -88.9%
You can see that the time spent on an incremental fetch goes
down, as our delta heuristics are able to do their work.
And we save time on the partial bitmap clone for the same
reason.
Signed-off-by: Vicent Marti <tanoku@gmail.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is the technical documentation for the JGit-compatible Bitmap v1
on-disk format.
Signed-off-by: Vicent Marti <tanoku@gmail.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>