The second batch of the transactional ref update series.
* rs/ref-transaction-1: (22 commits)
update-ref --stdin: pass transaction around explicitly
update-ref --stdin: narrow scope of err strbuf
refs.c: make delete_ref use a transaction
refs.c: make prune_ref use a transaction to delete the ref
refs.c: remove lock_ref_sha1
refs.c: remove the update_ref_write function
refs.c: remove the update_ref_lock function
refs.c: make lock_ref_sha1 static
walker.c: use ref transaction for ref updates
fast-import.c: use a ref transaction when dumping tags
receive-pack.c: use a reference transaction for updating the refs
refs.c: change update_ref to use a transaction
branch.c: use ref transaction for all ref updates
fast-import.c: change update_branch to use ref transactions
sequencer.c: use ref transactions for all ref updates
commit.c: use ref transactions for updates
replace.c: use the ref transaction functions for updates
tag.c: use ref transactions when doing updates
refs.c: add transaction.status and track OPEN/CLOSED
refs.c: make ref_transaction_begin take an err argument
...
Wrap all the ref updates inside a transaction.
In the new API there is no distinction between failure to lock and
failure to write a ref. Both can be permanent (e.g., a ref
"refs/heads/topic" is blocking creation of the lock file
"refs/heads/topic/1.lock") or transient (e.g., file system full) and
there's no clear difference in how the client should respond, so
replace the two statuses "failed to lock" and "failed to write" with
a single status "failed to update ref". In both cases a more
detailed message is sent by sideband to diagnose the problem.
Example, before:
error: there are still refs under 'refs/heads/topic'
remote: error: failed to lock refs/heads/topic
To foo
! [remote rejected] HEAD -> topic (failed to lock)
After:
error: there are still refs under 'refs/heads/topic'
remote: error: Cannot lock the ref 'refs/heads/topic'.
To foo
! [remote rejected] HEAD -> topic (failed to update ref)
Signed-off-by: Ronnie Sahlberg <sahlberg@google.com>
Reviewed-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Most struct child_process variables are cleared using memset right after
declaration. Provide a macro, CHILD_PROCESS_INIT, that can be used to
initialize them statically instead. That's shorter, doesn't require a
function call and is slightly more readable (especially given that we
already have STRBUF_INIT, ARGV_ARRAY_INIT etc.).
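For illustration, a typical conversion looks like this (a sketch; the
macro itself is defined in run-command.h):

    /* before */
    struct child_process cp;
    memset(&cp, 0, sizeof(cp));

    /* after */
    struct child_process cp = CHILD_PROCESS_INIT;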
Helped-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We used to do this so we could pass a mutable string to
enter_repo. But since 1c64b48 (enter_repo: do not modify
input, 2011-10-04), this is not necessary.
The resulting code is simpler, and it fixes a minor leak.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* kb/perf-trace:
api-trace.txt: add trace API documentation
progress: simplify performance measurement by using getnanotime()
wt-status: simplify performance measurement by using getnanotime()
git: add performance tracing for git's main() function to debug scripts
trace: add trace_performance facility to debug performance issues
trace: add high resolution timer function to debug performance issues
trace: add 'file:line' to all trace output
trace: move code around, in preparation to file:line output
trace: add current timestamp to all trace output
trace: disable additional trace output for unit tests
trace: add infrastructure to augment trace output with additional info
sha1_file: change GIT_TRACE_PACK_ACCESS logging to use trace API
Documentation/git.txt: improve documentation of 'GIT_TRACE*' variables
trace: improve trace performance
trace: remove redundant printf format attribute
trace: consistently name the format parameter
trace: move trace declarations from cache.h to new trace.h
The trace API currently rechecks the environment variable and reopens the
trace file on every API call. This has the ugly side effect that errors
(e.g. file cannot be opened, or the user specified a relative path) are
also reported on every call. Performance can be improved by about a
factor of three by remembering the environment state and keeping the file open.
Replace the 'const char *key' parameter in the API with a pointer to a
'struct trace_key' that bundles the environment variable name with
additional, trace-internal state. Change the call sites of these APIs to
use a static 'struct trace_key' instead of a string constant.
In trace.c::get_trace_fd(), save and reuse the file descriptor in 'struct
trace_key'.
Add a 'trace_disable()' API, so that packet_trace() can cleanly disable
tracing when it encounters packed data (instead of using unsetenv()).
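As a rough sketch of the resulting interface (the field layout shown
here is illustrative; trace.h has the authoritative definition):

    struct trace_key {
            const char * const key;         /* environment variable name */
            int fd;                         /* cached file descriptor */
            unsigned int initialized : 1;
            unsigned int need_close : 1;
    };
    #define TRACE_KEY_INIT(name) { "GIT_TRACE_" #name, 0, 0, 0 }

    /* a call site then becomes: */
    static struct trace_key trace_packet = TRACE_KEY_INIT(PACKET);
    ...
    trace_printf_key(&trace_packet, "packet: %s\n", line);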
Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is one line shorter, and makes sure the length in the
malloc and copy steps match.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We sometimes write tempfiles of the form "shallow_XXXXXX"
during fetch/push operations with shallow repositories.
Under normal circumstances, we clean up the result when we
are done. However, we do not take steps to clean up after
ourselves when we exit due to die() or signal death.
This patch teaches the tempfile creation code to register
handlers to clean up after ourselves. To handle this, we
change the ownership semantics of the filename returned by
setup_temporary_shallow. It now keeps a copy of the filename
itself, and returns only a const pointer to it.
We can also do away with explicit tempfile removal in the
callers. They all exit not long after finishing with the
file, so they can rely on the auto-cleanup, simplifying the
code.
Note that we keep things simple and maintain only a single
filename to be cleaned. This is sufficient for the current
caller, but we future-proof it with a die("BUG").
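The cleanup follows the usual atexit/sigchain pattern; roughly (a
simplified sketch, with illustrative names rather than the exact ones
in the patch):

    static char *temporary_shallow;     /* the single filename we own */

    static void remove_temporary_shallow(void)
    {
            if (temporary_shallow)
                    unlink(temporary_shallow);
    }

    static void remove_temporary_shallow_on_signal(int signo)
    {
            remove_temporary_shallow();
            sigchain_pop(signo);
            raise(signo);
    }

    /* registered once, when the tempfile is first created: */
    atexit(remove_temporary_shallow);
    sigchain_push_common(remove_temporary_shallow_on_signal);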
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fetching from a shallow-cloned repository used to be forbidden,
primarily because the codepaths involved were not carefully vetted
and we did not bother supporting such usage. This attempts to allow
object transfer out of a shallow-cloned repository in a controlled
way (i.e. the receiver becomes a shallow repository with truncated
history).
* nd/shallow-clone: (31 commits)
t5537: fix incorrect expectation in test case 10
shallow: remove unused code
send-pack.c: mark a file-local function static
git-clone.txt: remove shallow clone limitations
prune: clean .git/shallow after pruning objects
clone: use git protocol for cloning shallow repo locally
send-pack: support pushing from a shallow clone via http
receive-pack: support pushing to a shallow clone via http
smart-http: support shallow fetch/clone
remote-curl: pass ref SHA-1 to fetch-pack as well
send-pack: support pushing to a shallow clone
receive-pack: allow pushes that update .git/shallow
connected.c: add new variant that runs with --shallow-file
add GIT_SHALLOW_FILE to propagate --shallow-file to subprocesses
receive/send-pack: support pushing from a shallow clone
receive-pack: reorder some code in unpack()
fetch: add --update-shallow to accept refs that update .git/shallow
upload-pack: make sure deepening preserves shallow roots
fetch: support fetching from a shallow repository
clone: support remote shallow repository
...
Commit 58babfff ("shallow.c: the 8 steps to select new commits for
.git/shallow", 05-12-2013) added a function to implement step 5 of
the quoted eight steps, namely 'remove_nonexistent_ours_in_pack()'.
This function implements an optional optimization step in the new
shallow commit selection algorithm. However, this function has no
callers. (The commented-out call sites would need to change in
order to provide the information required by the function.)
Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
Acked-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The basic 8 steps to update .git/shallow do not fully apply here
because the user may choose to accept just a few refs (while fetch
always accepts all refs). The steps are modified a bit.
1-6. same as before. After calling assign_shallow_commits_to_refs at
step 6, each shallow commit has a bitmap that marks all refs that
require it.
7. mark all "ours" shallow commits that are reachable from any
refs. We will need to do the original step 7 on them later.
8. go over all shallow commit bitmaps, mark refs that require new
shallow commits.
9. set up a strict temporary shallow file to plug all the holes, even
if it may cut some of our history short. This file is used by all
hooks. The hooks could use --shallow-file=$GIT_DIR/shallow to
overcome this and reach everything in the current repo.
10. go over the new refs one by one. For each ref, do the reachability
test if it needs a shallow commit on the list from step 7. Remove
it if it's reachable from our refs. Gather all required shallow
commits, run check_everything_connected() with the new ref, then
install them to .git/shallow.
This mode is disabled by default and can be turned on with
receive.shallowupdate.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is the preparation for adding --shallow-file to both
unpack-objects and index-pack. To sum up:
- struct argv_array used instead of const char **
- status/code, ip/child, unpacker/keeper are moved out to function
top level
- successful flow now ends at the end of the function
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If either receive-pack or upload-pack is called on a shallow
repository, shallow commits (*) will be sent after the ref
advertisement (but before the packet flush), so that the receiver has
the full "shape" of the sender's commit graph. This will be needed for
the receiver to update its .git/shallow if necessary.
This breaks the protocol for all clients trying to push to a shallow
repo, or fetch from one, which is basically the same end result as
today's "is_repository_shallow() && die()" in receive-pack and
upload-pack. New clients will be made aware of shallow upstream and
can make use of this information.
The sender must send all shallow commits that are sent in the
following pack. It may send more shallow commits than necessary.
upload-pack for example may choose to advertise no shallow commits if
it knows in advance that the pack it's going to send contains no
shallow commits. But upload-pack is the server, so we choose the
cheaper way: send the full .git/shallow and let the client deal with it.
Smart HTTP is not affected by this patch. Shallow support on
smart-http comes later separately.
(*) A shallow commit is a commit that terminates the revision
walker. It is usually put in .git/shallow in order to keep the
revision walker from going out of bounds because there is no
guarantee that objects behind this commit are available.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Leaving only the function definitions and declarations so that any
new topic in flight can still make use of the old functions, replace
existing uses of prefixcmp() and suffixcmp() with the new API
functions.
The change can be recreated by mechanically applying this:
$ git grep -l -e prefixcmp -e suffixcmp -- \*.c |
grep -v strbuf\\.c |
xargs perl -pi -e '
s|!prefixcmp\(|starts_with\(|g;
s|prefixcmp\(|!starts_with\(|g;
s|!suffixcmp\(|ends_with\(|g;
s|suffixcmp\(|!ends_with\(|g;
'
on the result of preparatory changes in this series.
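Note that the sense of the return value flips with the rename: the old
functions returned 0 on a match (strcmp-style), while the new ones
return true on a match. A representative hunk (sketch):

    /* before */
    if (!prefixcmp(refname, "refs/heads/"))
            branch = refname + strlen("refs/heads/");

    /* after */
    if (starts_with(refname, "refs/heads/"))
            branch = refname + strlen("refs/heads/");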
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Give "update-refs" a "--stdin" option to read multiple update
requests and perform them in an all-or-none fashion.
* bk/refs-multi-update:
update-ref: add test cases covering --stdin signature
update-ref: support multiple simultaneous updates
refs: add update_refs for multiple simultaneous updates
refs: add function to repack without multiple refs
refs: factor delete_ref loose ref step into a helper
refs: factor update_ref steps into helpers
refs: report ref type from lock_any_ref_for_update
reset: rename update_refs to reset_refs
Expose lock_ref_sha1_basic's type_p argument to callers of
lock_any_ref_for_update. Update all call sites to ignore it by passing
NULL for now.
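A converted call site then looks roughly like this (a sketch of the
pattern; refs.h has the exact prototype):

    /* before */
    lock = lock_any_ref_for_update(refname, old_sha1, flags);

    /* after: new type_p argument, ignored by passing NULL */
    lock = lock_any_ref_for_update(refname, old_sha1, flags, NULL);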
Signed-off-by: Brad King <brad.king@kitware.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
- From the beginning of push.c in 755225d, 2006-04-29, the "thin" option
was enabled by default but could be turned off with --no-thin.
- Then Shawn changed the default to 0 in favor of saving server
resources in a4503a1, 2007-09-09. --no-thin worked great.
- One day later, in 9b28851 Daniel extracted some code from push.c to
create transport.c. He (probably accidentally) flipped the default
value from 0 to 1 in transport_get().
From then on --no-thin is effectively a no-op because git-push still
expects the default value to be false and only calls
transport_set_option() when the "thin" variable in push.c is true (which
is unnecessary). Correct the code to respect --no-thin by calling
transport_set_option() in both cases.
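Roughly, push should hand the user's choice to the transport layer
unconditionally (a sketch of the idea, not the literal hunk):

    /* before: only ever turns the option on, so --no-thin is a no-op */
    if (thin)
            transport_set_option(transport, TRANS_OPT_THIN, "yes");

    /* after: pass the choice either way; NULL clears the option */
    transport_set_option(transport, TRANS_OPT_THIN, thin ? "yes" : NULL);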
receive-pack learns about a --reject-thin-pack-for-testing option,
which is only for testing purposes, hence no documentation update.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The definition of "struct ref" in "cache.h", a header file so
central to the system, always confused me. This structure is not
about the local ref used by the sha1-name API to name local objects.
It is what refspecs are expanded into, after finding out what refs
the other side has, to define which refs are to be updated to what
values after the object transfer succeeds. It belongs to "remote.h" together
with "struct refspec".
While we are at it, also move the types and functions related to the
Git transport connection to a new header file connect.h
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When receive-pack detects an error in the pack header it received in
order to decide which of unpack-objects or index-pack to run, it
returned without closing the error stream, which led to a hung
sideband thread.
* jk/receive-pack-deadlocks-with-early-failure:
receive-pack: close sideband fd on early pack errors
Since commit a22e6f8 (receive-pack: send pack-processing
stderr over sideband, 2012-09-21), receive-pack will start
an async sideband thread to copy the stderr from our
index-pack or unpack-objects child to the client. We hand
the thread's input descriptor to unpack(), which puts it in
the "err" member of the "struct child_process".
After unpack() returns, we use finish_async() to reap the
sideband thread. The thread is only ready to die when it
gets EOF on its pipe, which is connected to the err
descriptor. So we expect all of the write ends of that pipe
to be closed as part of unpack().
Normally, this works fine. After start_command forks, it
closes the parent copy of the descriptor. Then once the
child exits (whether it was successful or not), that closes
the only remaining writer.
However, there is one code-path in unpack() that does not
handle this. Before we decide which of unpack-objects or
index-pack to use, we read the pack header ourselves to see
how many objects it contains. If there is an error here, we
exit without running either sub-command, the pipe descriptor
remains open, and we are in a deadlock, waiting for the
sideband thread to die (which is in turn waiting for us to
close the pipe).
We can fix this by making sure that unpack() always closes
the pipe before returning.
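Roughly, the early-return path in unpack() gains a close() (a sketch
based on the description above; the helper name is illustrative):

    hdr_err = parse_pack_header(&hdr);
    if (hdr_err) {
            if (err_fd > 0)
                    close(err_fd);  /* let the sideband thread see EOF */
            return hdr_err;
    }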
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Most of the callers of packet_read_line just read into a
static 1000-byte buffer (callers which handle arbitrary
binary data already use LARGE_PACKET_MAX). This works fine
in practice, because:
1. The only variable-sized data in these lines is a ref
name, and refs tend to be a lot shorter than 1000
characters.
2. When sending ref lines, git-core always limits itself
to 1000 byte packets.
However, the only limit given in the protocol specification
in Documentation/technical/protocol-common.txt is
LARGE_PACKET_MAX; the 1000 byte limit is mentioned only in
pack-protocol.txt, and then only describing what we write,
not as a specific limit for readers.
This patch lets us bump the 1000-byte limit to
LARGE_PACKET_MAX. Even though git-core will never write a
packet where this makes a difference, there are two good
reasons to do this:
1. Other git implementations may have followed
protocol-common.txt and used a larger maximum size. We
don't bump into it in practice because it would involve
very long ref names.
2. We may want to increase the 1000-byte limit one day.
Since packets are transferred before any capabilities,
it's difficult to do this in a backwards-compatible
way. But if we bump the size of the buffer readers can
handle, eventually older versions of git will be
obsolete enough that we can justify bumping the
writers, as well. We don't have plans to do this
anytime soon, but there is no reason not to start the
clock ticking now.
Just bumping all of the reading bufs to LARGE_PACKET_MAX
would waste memory. Instead, since most readers just read
into a temporary buffer anyway, let's provide a single
static buffer that all callers can use. We can further wrap
this detail away by having the packet_read_line wrapper just
use the buffer transparently and return a pointer to the
static storage. That covers most of the cases, and the
remaining ones already read into their own LARGE_PACKET_MAX
buffers.
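A typical line-oriented caller then ends up looking something like
this (a sketch of the resulting interface; process_ref_line() is a
hypothetical caller-side helper):

    for (;;) {
            int len;
            char *line = packet_read_line(fd, &len); /* points into the shared static buffer */
            if (!len)
                    break;          /* flush packet ends the advertisement */
            process_ref_line(line, len);
    }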
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The packets sent during ref negotiation are all terminated
by newline; even though the code to chomp these newlines is
short, we end up doing it in a lot of places.
This patch teaches packet_read_line to auto-chomp the
trailing newline; this lets us get rid of a lot of inline
chomping code.
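The pattern being removed is the familiar one found at many call
sites (a representative sketch of the pre-patch code):

    len = packet_read_line(fd, buffer, sizeof(buffer));
    if (len && buffer[len - 1] == '\n')
            buffer[--len] = 0;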
As a result, some call-sites which are not reading
line-oriented data (e.g., when reading chunks of packfiles
alongside sideband) transition away from packet_read_line to
the generic packet_read interface. This patch converts all
of the existing callsites.
Since the function signature of packet_read_line does not
change (but its behavior does), there is a possibility of
new callsites being introduced in later commits, silently
introducing an incompatibility. However, since a later
patch in this series will change the signature, such a
commit would have to be merged directly into this commit,
not to the tip of the series; we can therefore ignore the
issue.
This is an internal cleanup and should produce no change of
behavior in the normal case. However, there is one corner
case to note. Callers of packet_read_line have never been
able to tell the difference between a flush packet ("0000")
and an empty packet ("0004"), as both cause packet_read_line
to return a length of 0. Readers treat them identically,
even though Documentation/technical/protocol-common.txt says
we must not; it also says that implementations should not
send an empty pkt-line.
By stripping out the newline before the result gets to the
caller, we will now treat the newline-only packet ("0005\n")
the same as an empty packet, which in turn gets treated like
a flush packet. In practice this doesn't matter, as neither
empty nor newline-only packets are part of git's protocols
(at least not for the line-oriented bits, and readers who
are not expecting line-oriented packets will be calling
packet_read directly, anyway). But even if we do decide to
care about the distinction later, it is orthogonal to this
patch. The right place to tighten would be to stop treating
empty packets as flush packets, and this change does not
make doing so any harder.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is just write_or_die by another name. The one
distinction is that write_or_die will treat EPIPE specially
by suppressing error messages. That's fine, as we die by
SIGPIPE anyway (and on the off chance that it is disabled,
write_or_die will simulate it).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Allow the server side to redact the refs/ namespace it shows to the
client.
* jc/hidden-refs:
upload/receive-pack: allow hiding ref hierarchies
upload-pack: simplify request validation
upload-pack: share more code
A repository may have refs that are only used for its internal
bookkeeping purposes that should not be exposed to the others that
come over the network.
Teach upload-pack to omit some refs from its initial advertisement
by paying attention to the uploadpack.hiderefs multi-valued
configuration variable. Do the same to receive-pack via the
receive.hiderefs variable. As a convenient short-hand, allow using
transfer.hiderefs to set the value to both of these variables.
Any ref that is under the hierarchies listed in the value of these
variables is excluded from responses to requests made by "ls-remote",
"fetch", etc. (for upload-pack) and "push" (for receive-pack).
Because these hidden refs do not count as OUR_REF, an attempt to
fetch objects at the tip of them will be rejected, and because these
refs do not get advertised, "git push :" will not treat local branches
whose names match them as "matching" ones to be sent.
An attempt to update/delete these hidden refs with an explicit
refspec, e.g. "git push origin :refs/hidden/22", is rejected. This
is not a new restriction. To the pusher, it would appear that there
is no such ref, so its push request will conclude with "Now that I
sent you all the data, it is time for you to update the refs. I saw
that the ref did not exist when I started pushing, and I want the
result to point at this commit". The receiving end will apply the
compare-and-swap rule to this request and reject the push with
"Well, your update request conflicts with somebody else; I see there
is such a ref.", which is the right thing to do. Otherwise a push to
a hidden ref will always be "the last one wins", which is not a good
default.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Create a find_hook() function to determine if a given hook exists and is
executable. If it is, the path to the script will be returned,
otherwise NULL is returned.
This encapsulates the tests that are used to check for the existence of
a hook in one place, making it easier to modify those checks if that is
found to be necessary. This also makes it simple for places that can
use a hook to check if a hook exists before doing possibly lengthy
setup work which would be pointless if no such hook is present.
The returned value is left as a static value from get_pathname() rather
than a duplicate because it is anticipated that the return value will
either be used as a boolean, immediately added to an argv_array list
which would result in it being duplicated at that point, or used to
actually run the command without much intervening work. Callers which
need to hold onto the returned value for a longer time are expected to
duplicate the return value themselves.
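A typical caller is therefore expected to look something like this (a
sketch of the intended usage, not a specific call site from the patch):

    const char *hook = find_hook("pre-receive");

    if (!hook)
            return 0;       /* no hook installed: skip the setup work */
    /* hook points at get_pathname()'s static buffer; xstrdup() it if
     * it must survive beyond the next path computation */
    argv_array_push(&argv, hook);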
Signed-off-by: Aaron Schrab <aaron@schrab.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Send errors from "unpack-objects" and "index-pack" back to the "git
push" over the git and smart-http protocols, just like it is done
for a push over the ssh protocol.
* jk/receive-pack-unpack-error-to-pusher:
receive-pack: drop "n/a" on unpacker errors
receive-pack: send pack-processing stderr over sideband
receive-pack: redirect unpack-objects stdout to /dev/null
The output from git push currently looks like this:
$ git push dest HEAD
fatal: [some message from index-pack]
error: unpack failed: index-pack abnormal exit
To dest
! [remote rejected] HEAD -> master (n/a (unpacker error))
That n/a is meant to be "the per-ref status is not
available" but the nested parentheses just make it look
ugly. Let's turn the final line into just:
! [remote rejected] HEAD -> master (unpacker error)
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Receive-pack invokes either unpack-objects or index-pack to
handle the incoming pack. However, we do not redirect the
stderr of the sub-processes at all, so it is never seen by
the client. From the initial thread adding sideband support,
which is here:
http://thread.gmane.org/gmane.comp.version-control.git/139471
it is clear that some messages are specifically kept off the
sideband (with the assumption that they are of interest only
to an administrator, not the client). The stderr of the
subprocesses is mentioned in the thread, but it's unclear whether it
is included in that group or was simply forgotten.
However, there are a few good reasons to show them to the
client:
1. In many cases, they are directly about the incoming
packfile (e.g., fsck warnings with --strict, corruption
in the packfile, etc). Without these messages, the
client just gets "unpacker error" with no extra useful
diagnosis.
2. No matter what the cause, we are probably better off
showing the errors to the client. If the client and the
server admin are not the same entity, it is probably
much easier for the client to cut-and-paste the errors
they see than for the admin to try to dig them out of a
log and correlate them with a particular session.
3. Users of the ssh transport typically already see these
stderr messages, as the remote's stderr is copied
literally by ssh. This brings other transports (http,
and push-over-git if you are crazy enough to enable it)
more in line with ssh. As a bonus for ssh users,
because the messages are now fed through the sideband
and printed by the local git, they will have "remote:"
prepended and be properly interleaved with any local
output to stderr.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The unpack-objects command should not generally produce any
output on stdout. However, if it's given extra input after
the packfile, it will spew the remainder to stdout. When
called by receive-pack, this means we will break protocol,
since our stdout is connected to the remote send-pack.
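The fix is to ask the run-command API to send the child's stdout to
/dev/null when setting up the unpacker (a sketch; no_stdout is the
existing child_process knob for this):

    memset(&child, 0, sizeof(child));
    child.argv = unpacker;
    child.no_stdout = 1;    /* redirect the child's stdout to /dev/null */
    child.err = err_fd;
    child.git_cmd = 1;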
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Optimise the "merge-base" computation a bit, and also update its
users that do not need the full merge-base information to call a
cheaper subset.
* jc/merge-bases:
reduce_heads(): reimplement on top of remove_redundant()
merge-base: "--is-ancestor A B"
get_merge_bases_many(): walk from many tips in parallel
in_merge_bases(): use paint_down_to_common()
merge_bases_many(): split out the logic to paint history
in_merge_bases(): omit unnecessary redundant common ancestor reduction
http-push: use in_merge_bases() for fast-forward check
receive-pack: use in_merge_bases() for fast-forward check
in_merge_bases(): support only one "other" commit
Some capabilities were asked by fetch-pack even when upload-pack did
not advertise that they are available. Fix fetch-pack not to do so.
* jc/capabilities:
fetch-pack: mention server version with verbose output
parse_feature_request: make it easier to see feature values
fetch-pack: do not ask for unadvertised capabilities
do not send client agent unless server does first
send-pack: fix capability-sending logic
include agent identifier in capability string
The original code computed the merge bases between the old commit and
the new commit and checked if the old commit was a merge base between
them, in order to make sure we are fast-forwarding.
Instead, call in_merge_bases(old, new) which does the same.
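That is, the fast-forward check reduces to something like this (a
simplified sketch of the before/after shape):

    /* before: compute all merge bases and look for the old tip */
    bases = get_merge_bases(old_commit, new_commit, 1);
    for (ent = bases; ent; ent = ent->next)
            if (ent->item == old_commit)
                    break;
    ff = !!ent;             /* fast-forward iff old is a merge base */
    free_commit_list(bases);

    /* after */
    ff = in_merge_bases(old_commit, new_commit);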
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The standard output channel of receive-pack is a structured protocol
channel, and subprocesses must never be allowed to leak anything
into it by writing to their standard output.
Use RUN_COMMAND_STDOUT_TO_STDERR option to run_command_v_opt() just
like we do when running hooks to prevent output from "gc" leaking to
the standard output.
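For example, the "gc --auto" invocation follows the same pattern as
the hook helpers (a sketch of the described approach):

    const char *argv_gc_auto[] = { "gc", "--auto", "--quiet", NULL };
    run_command_v_opt(argv_gc_auto,
                      RUN_GIT_CMD | RUN_COMMAND_STDOUT_TO_STDERR);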
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Instead of having the client advertise a particular version
number in the git protocol, we have managed extensions and
backwards compatibility by having clients and servers
advertise capabilities that they support. This is far more
robust than having each side consult a table of
known versions, and provides sufficient information for the
protocol interaction to complete.
However, it does not allow servers to keep statistics on
which client versions are being used. This information is
not necessary to complete the network request (the
capabilities provide enough information for that), but it
may be helpful to conduct a general survey of client
versions in use.
We already send the client version in the user-agent header
for http requests; adding it here allows us to gather
similar statistics for non-http requests.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
receive-pack runs rev-list --verify-objects in order to detect missing
objects. However, such errors are ignored and overridden later.
Instead, consistently ignore all update commands for which an error has
already been detected.
Some tests in t5504 are obsoleted by this change, because invalid
objects are detected even if fsck is not enabled. Instead, they now test
for different error messages depending on whether or not fsck is turned
on. A better fix would be to force a corruption that will be detected by
fsck but not by rev-list.
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* cb/push-quiet:
t5541: avoid TAP test miscounting
fix push --quiet: add 'quiet' capability to receive-pack
server_supports(): parse feature list more carefully
* mh/ref-api-less-extra-refs:
write_head_info(): handle "extra refs" locally
show_ref(): remove unused "flag" and "cb_data" arguments
receive-pack: move more work into write_head_info()
Currently, git push --quiet produces some non-error output, e.g.:
$ git push --quiet
Unpacking objects: 100% (3/3), done.
This fixes a bug reported for the fedora git package:
https://bugzilla.redhat.com/show_bug.cgi?id=725593
Reported-by: Jesse Keating <jkeating@redhat.com>
Cc: Todd Zullinger <tmz@pobox.com>
Commit 90a6c7d4 (propagate --quiet to send-pack/receive-pack)
introduced the --quiet option to receive-pack and made send-pack
pass that option. Older versions of receive-pack do not recognize
the option, however, and terminate immediately. The commit was
therefore reverted.
This change instead adds a 'quiet' capability to receive-pack,
which is backwards compatible.
In addition, this fixes push --quiet via http: A verbosity of 0
means quiet for remote helpers.
Reported-by: Tobias Ulmer <tobiasu@tmux.org>
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We have been carefully choosing feature names used in the protocol
extensions so that the vocabulary does not contain a word that is a
substring of another word, so it is not a real problem, but we have
recently added the "quiet" feature word, which would mean we cannot later
add some other word containing "quiet" (e.g. "quiet-push"), which is awkward.
Let's make sure that we will eventually be able to do so by teaching the
clients and servers that feature words consist of non-whitespace
letters. This parser also allows us to later add features with parameters,
e.g. "feature=1.5" (parameter values need to be quoted if they contain
whitespace, but we will worry about the details when we do introduce them).
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The old code basically did:
generate array of SHA1s for alternate refs
for each unique SHA1 in array:
add_extra_ref(".have", sha1)
for each ref (including real refs and extra refs):
show_ref(refname, sha1)
But there is no need to stuff the alternate refs in extra_refs; we can
call show_ref() directly when iterating over the array, then handle
real refs separately. So change the code to:
generate array of SHA1s for alternate refs
for each unique SHA1 in array:
show_ref(".have", sha1)
for each ref (this now only includes real refs):
show_ref(refname, sha1)
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The function is not used as a callback, so it doesn't need these
arguments. Also change its return type to void.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>