This avoids repeating ourselves and avoids the use of magic
numbers.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
After sending the pack, we call receive_status() which gets
both the "unpack" line and the ref status. Let's break these
into two functions so we can call the first part
independently.
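A rough sketch of the split, not the actual patch; it assumes git's
packet_read_line() and error() helpers:

    static int receive_unpack_status(int in)
    {
        const char *line = packet_read_line(in, NULL);
        if (!line)
            return error("unexpected flush while reading unpack status");
        if (strcmp(line, "unpack ok"))
            return error("remote unpack failed: %s", line);
        return 0;
    }

    static int receive_status(int in, struct ref *refs)
    {
        int ret = receive_unpack_status(in);
        /* ... then read the per-ref status lines, as before ... */
        return ret;
    }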
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
d64ea0f83b ("git-compat-util: add xstrdup_or_null helper",
2015-01-12) added a handy wrapper that allows us to get a duplicate
of a string or NULL if the original is NULL, but a handful of
codepaths predate its introduction or simply were not converted to use it.
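For reference, the helper is essentially a one-liner, and a typical
conversion looks like the following (the field used here is just an example):

    static inline char *xstrdup_or_null(const char *str)
    {
        return str ? xstrdup(str) : NULL;
    }

    /* before */
    ref->symref = symref ? xstrdup(symref) : NULL;
    /* after */
    ref->symref = xstrdup_or_null(symref);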
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Replace uses of strbuf_addf() for adding strings with more lightweight
strbuf_addstr() calls.
In http-push.c it becomes easier to see what's going on without having
to verify that the definition of PROPFIND_ALL_REQUEST doesn't contain
any format specifiers.
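An illustrative before/after, using a request body constant that
contains no format specifiers:

    /* before: the reader has to verify the macro contains no '%' */
    strbuf_addf(&buf, PROPFIND_ALL_REQUEST);

    /* after: clearly a plain string append */
    strbuf_addstr(&buf, PROPFIND_ALL_REQUEST);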
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This implements everything that is required on the client side to make use
of push options from the porcelain push command.
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We start a pack-objects process and then write all of the
positive and negative sha1s to it over a pipe. We do so by
formatting each item into a fixed-size buffer and then
writing each individually. This has two drawbacks:
1. There's some manual computation of the buffer size, and it
is not immediately obvious that it is correct (though it
is).
2. We write() once per sha1, which means a lot more system
calls than are necessary.
We can solve both by wrapping the pipe descriptor in a stdio
handle; this is the same technique used by upload-pack when
serving fetches.
Note that we can also simplify and improve the error
handling here. The original detected a single write error
and broke out of the loop (presumably to avoid writing the
error message over and over), but never actually acted on
seeing an error; we just fed truncated input and took
whatever pack-objects returned.
In practice, this probably didn't matter, as the likely
errors would be caused by pack-objects dying (and we'd
probably just die with SIGPIPE anyway). But we can easily
make this simpler and more robust; the stdio handle keeps an
error flag, which we can check at the end.
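A minimal sketch of the technique; the helper and field names reflect
that era of the code and are illustrative:

    FILE *po_in = xfdopen(po.in, "w");  /* wrap the pipe descriptor once */
    for (ref = remote_refs; ref; ref = ref->next) {
        if (!is_null_sha1(ref->new_sha1))
            fprintf(po_in, "%s\n", sha1_to_hex(ref->new_sha1));
        if (!is_null_sha1(ref->old_sha1))
            fprintf(po_in, "^%s\n", sha1_to_hex(ref->old_sha1));
    }
    fflush(po_in);
    if (ferror(po_in))  /* a single error check instead of one per write() */
        error("error writing to pack-objects");
    fclose(po_in);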
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If we get an error from pack-objects, we may exit
send_pack() early, before reading the server's status
response. In such a case, we may racily see SIGPIPE from our
async demuxer (which is trying to write that status back to
us), and we'd prefer to continue pushing the error up the
call stack, rather than taking down the whole process with
signal death.
This is safe to do because our demuxer just calls
recv_sideband, whose data writes are all done with
write_or_die(), which will notice SIGPIPE.
We do also write sideband 2 to stderr, and we would no
longer die on SIGPIPE there (if it were piped in the first
place, and if the piped program went away). But that's
probably a good thing, as it likewise should not abort the
push process at all (neither immediately by signal, nor
eventually by reporting failure back to the main thread).
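The general idea, sketched with plain POSIX calls rather than the actual
patch: keep SIGPIPE from killing the process when a thread writes to a
pipe whose reader has gone away (a real implementation would also drain
any pending signal):

    #include <signal.h>
    #include <pthread.h>

    static void block_sigpipe_in_this_thread(void)
    {
        sigset_t set;
        sigemptyset(&set);
        sigaddset(&set, SIGPIPE);
        /* with SIGPIPE blocked, write() to a dead pipe fails with EPIPE */
        pthread_sigmask(SIG_BLOCK, &set, NULL);
    }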
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This fixes a deadlock on the client side when pushing a
large number of refs from a corrupted repo. There's a
reproduction script below, but let's start with a
human-readable explanation.
The client side of a push goes something like this:
1. Start an async process to demux sideband coming from
the server.
2. Run pack-objects to send the actual pack, and wait for
its status via finish_command().
3. If pack-objects failed, abort immediately.
4. If pack-objects succeeded, read the per-ref status from
the server, which is actually coming over a pipe from
the demux process started in step 1.
We run finish_async() to wait for and clean up the demux
process in two places. In step 3, if we see an error, we
want it to end early. And after step 4, it should be done
writing any data and we are just cleaning it up.
Let's focus on the error case first. We hand the output
descriptor to the server over to pack-objects. So by the
time it has returned an error to us, it has closed the
descriptor and the server has gotten EOF. The server will
mark all refs as failed with "unpacker error" and send us
back the status for each (followed by EOF).
This status goes to the demuxer thread, which relays it over
a pipe to the main thread. But the main thread never even
tries reading the status. It's trying to bail because of the
pack-objects error, and is waiting for the demuxer thread to
finish. If there are a small number of refs, that's OK; the
demuxer thread writes into the pipe buffer, sees EOF from
the server, and quits. But if there are a large number of
refs, it may block on write() back to the main thread,
leading to a deadlock (the main thread is waiting for the
demuxer to finish, the demuxer is waiting for the main
thread to read).
We can break this deadlock by closing the pipe between the
demuxer and the main thread before calling finish_async().
Then the demuxer gets a write() error and exits.
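A sketch of the error path with this fix; "pack_objects_failed" stands
for the error condition from step 3, and the exact code may differ:

    if (pack_objects_failed) {
        /* make the demuxer's write() fail instead of blocking forever */
        close(demux.out);
        demux.out = -1;
        if (finish_async(&demux))
            error("error in sideband demultiplexer");
        return -1;
    }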
The non-error case usually just works, because we will have
read all of the data from the other side. We do close
demux.out already, but we only do so _after_ calling
finish_async(). This is OK because there shouldn't be any
more data coming from the server. But technically we've only
read to a flush packet, and a broken or malicious server
could be sending more cruft. In such a case, we would hit
the same deadlock. Closing the pipe first doesn't affect the
normal case, and means that for a cruft-sending server,
we'll notice a write() error rather than deadlocking.
Note that when write() sees this error, we'll actually
deliver SIGPIPE to the thread, which will take down the
whole process (unless we're compiled with NO_PTHREADS). This
isn't ideal, but it's an improvement over the status quo,
which is deadlocking. And SIGPIPE handling in async threads
is a bigger problem that we can deal with separately.
A simple reproduction for the error case is below. It's
technically racy (we could exit the main process and take
down the async thread with us before it even reads the
status), though in practice it seems to fail pretty
consistently.
git init repo &&
cd repo &&
# make some commits; we need two so we can simulate corruption
# in the history later.
git commit --allow-empty -m one &&
one=$(git rev-parse HEAD) &&
git commit --allow-empty -m two &&
two=$(git rev-parse HEAD) &&
# now make a ton of refs; our goal here is to overflow the pipe buffer
# when reporting the ref status, which will cause the demuxer to block
# on write()
for i in $(seq 20000); do
        echo "create refs/heads/this-is-a-really-long-branch-name-$i $two"
done |
git update-ref --stdin &&
# now make a corruption in the history such that pack-objects will fail
rm -vf .git/objects/$(echo $one | sed 's}..}&/}') &&
# and then push the result
git init --bare dst.git &&
git push --mirror dst.git
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Use struct object_id in three fields in struct ref and convert all the
necessary places that use it.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Jeff King <peff@peff.net>
The client side codepaths in "git push" have been cleaned up
and the user can request to perform an optional "signed push",
i.e. sign only when the other end accepts signed push.
* db/push-sign-if-asked:
push: add a config option push.gpgSign for default signed pushes
push: support signing pushes iff the server supports it
builtin/send-pack.c: use parse_options API
config.c: rename git_config_maybe_bool_text and export it as git_parse_maybe_bool
transport: remove git_transport_options.push_cert
gitremote-helpers.txt: document pushcert option
Documentation/git-send-pack.txt: document --signed
Documentation/git-send-pack.txt: wrap long synopsis line
Documentation/git-push.txt: document when --signed may fail
Add a new flag --sign=true (or --sign=false), which means the same
thing as the original --signed (or --no-signed). Give it a third
value --sign=if-asked to tell push and send-pack to send a push
certificate if and only if the server advertised a push cert nonce.
If not, warn the user that their push may not be as secure as they
thought.
Signed-off-by: Dave Borowitz <dborowitz@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Identify parts of the code that know too much about our use of the
SHA-1 hash to name objects, and use (1) symbolic constants instead
of a hardcoded 20 as the byte count and/or (2) struct object_id
instead of unsigned char [20] for object names.
* bc/object-id:
apply: convert threeway_stage to object_id
patch-id: convert to use struct object_id
commit: convert parts to struct object_id
diff: convert struct combine_diff_path to object_id
bulk-checkin.c: convert to use struct object_id
zip: use GIT_SHA1_HEXSZ for trailers
archive.c: convert to use struct object_id
bisect.c: convert leaf functions to use struct object_id
define utility functions for object IDs
define a structure for object IDs
The "git push --signed" protocol extension did not limit what the
"nonce" that is a server-chosen string can contain or how long it
can be, which was unnecessarily lax. Limit both the length and the
alphabet to a reasonably small space that can still have enough
entropy.
* jc/push-cert:
push --signed: tighten what the receiving end can ask to sign
Instead of blindly trusting the receiving side to give us a sensible
nonce to sign, limit the length (max 256 bytes) and the alphabet
(alnum and a few selected punctuation characters, enough to encode in
base64) that can be used in the nonce.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If --signed is not supported, the error message names the remote
"receiving end". If --atomic is not supported, the error message
names the remote "server". Unify the naming to "receiving end"
as we're in the context of "push".
Signed-off-by: Ralf Thielow <ralf.thielow@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Convert struct commit_graft and necessary local parts of commit.c.
Also, convert several constants based on the hex length of an SHA-1 to
use GIT_SHA1_HEXSZ, and move several magic constants into variables for
readability.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git push" has been taught a "--atomic" option that makes push to
update more than one ref an "all-or-none" affair.
* sb/atomic-push:
Document receive.advertiseatomic
t5543-atomic-push.sh: add basic tests for atomic pushes
push.c: add an --atomic argument
send-pack.c: add --atomic command line argument
send-pack: rename ref_update_to_be_sent to check_to_send_update
receive-pack.c: negotiate atomic push support
receive-pack.c: add execute_commands_atomic function
receive-pack.c: move transaction handling in a central place
receive-pack.c: move iterating over all commands outside execute_commands
receive-pack.c: die instead of error in case of possible future bug
receive-pack.c: shorten the execute_commands loop over all commands
This adds support to send-pack to negotiate and use atomic pushes
iff the server supports it. Atomic pushes are activated by a new command
line flag --atomic.
In order to do this we also need to change the semantics for send_pack()
slightly. The existing send_pack() function actually doesn't send all the
refs back to the server when multiple refs are involved, for example
when using --all. Several of the failure modes for pushes can already be
detected locally in the send_pack client based on the information from the
initial server side list of all the refs as generated by receive-pack.
Any refs that we already know would fail to push are thus pruned from
the list of refs we send to the server to update.
For atomic pushes, we thus have to deal with both failures that are
detected locally and failures that are reported back from the server.
In order to do so we treat all local failures as push failures too.
We introduce a new status code REF_STATUS_ATOMIC_PUSH_FAILED so we can
flag all refs that we would normally have tried to push to the server
but did not, due to local failures. This improves the error message
shown to the end user, flagging that "these refs failed to update since
the atomic push operation failed."
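Roughly, the behavior described above looks like this; the loop shape
and variable names are illustrative, not the actual patch:

    if (args->atomic && saw_local_failure) {
        struct ref *ref;

        for (ref = remote_refs; ref; ref = ref->next) {
            if (ref->status == REF_STATUS_NONE ||
                ref->status == REF_STATUS_OK ||
                ref->status == REF_STATUS_EXPECTING_REPORT)
                ref->status = REF_STATUS_ATOMIC_PUSH_FAILED;
        }
        return error("atomic push failed locally; no refs were sent");
    }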
Signed-off-by: Ronnie Sahlberg <sahlberg@google.com>
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This renames ref_update_to_be_sent to check_to_send_update and inverts
the meaning of the return value. With the return value inverted, we
can have distinct values for the different error codes. This is useful in a
later patch when we want to know if we hit the CHECK_REF_STATUS_REJECTED
case.
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When fetching into or pushing from a shallow repository, we want to
aggressively mark edges as uninteresting, since this decreases the pack
size. However, aggressively marking edges can negatively affect
performance on large non-shallow repositories with lots of refs.
Teach pack-objects a --shallow option to indicate that we're pushing
from or fetching into a shallow repository. Use
--objects-edge-aggressive only for shallow repositories and otherwise
use --objects-edge, which performs better in the general case. Update
the callers to pass the --shallow option when they are dealing with a
shallow repository.
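Conceptually, the callers and pack-objects end up doing something like
the following; the argument handling is sketched, not quoted:

    /* caller side: tell pack-objects we are in the shallow case */
    if (args->shallow)
        argv_array_push(&po_args, "--shallow");

    /* pack-objects side: pick the edge-walking flavor accordingly */
    argv_array_push(&rp, shallow ? "--objects-edge-aggressive"
                                 : "--objects-edge");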
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Allow "git push" request to be signed, so that it can be verified and
audited, using the GPG signature of the person who pushed, that the
tips of branches at a public repository really point the commits
the pusher wanted to, without having to "trust" the server.
* jc/push-cert: (24 commits)
receive-pack::hmac_sha1(): copy the entire SHA-1 hash out
signed push: allow stale nonce in stateless mode
signed push: teach smart-HTTP to pass "git push --signed" around
signed push: fortify against replay attacks
signed push: add "pushee" header to push certificate
signed push: remove duplicated protocol info
send-pack: send feature request on push-cert packet
receive-pack: GPG-validate push certificates
push: the beginning of "git push --signed"
pack-protocol doc: typofix for PKT-LINE
gpg-interface: move parse_signature() to where it should be
gpg-interface: move parse_gpg_output() to where it should be
send-pack: clarify that cmds_sent is a boolean
send-pack: refactor inspecting and resetting status and sending commands
send-pack: rename "new_refs" to "need_pack_data"
receive-pack: factor out capability string generation
send-pack: factor out capability string generation
send-pack: always send capabilities
send-pack: refactor decision to send update per ref
send-pack: move REF_STATUS_REJECT_NODELETE logic a bit higher
...
In order to prevent a valid push certificate for pushing into a
repository from getting replayed in a different push operation, send
a nonce string from the receive-pack process and have the signer
include it in the push certificate. The receiving end uses an HMAC
hash of the path to the repository it serves and the current time
stamp, hashed with a secret seed (the secret seed does not have to
be per-repository but can be defined in /etc/gitconfig), to generate
the nonce, in order to ensure that a random third party cannot forge
a nonce that looks like it originated from that server.
The original nonce is exported as GIT_PUSH_CERT_NONCE for the hooks
to examine and match against the value of the "nonce" header in the
certificate to notice a replay, but the "nonce" header returned in the
push certificate is also examined by receive-pack, and the result is
exported as GIT_PUSH_CERT_NONCE_STATUS, whose value will be "OK"
if the nonce recorded in the certificate matches what we expect, so
that the hooks can check it more easily.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Record the URL of the intended recipient for a push (after
anonymizing it if it has authentication material) on a new "pushee
URL" header. Because the networking configuration (SSH-tunnels,
proxies, etc.) on the pushing user's side varies, the receiving
repository may not know the single canonical URL all the pushing
users would refer to it by (besides, many sites allow pushing over
ssh://host/path and https://host/path protocols to the same
repository but with a different local part of the path). So this
value may not be reliably used for replay-attack prevention
purposes, but it will still serve as a human-readable hint to
identify the repository the certificate refers to.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
With the interim protocol, we used to send the update commands even
though we already send a signed copy of the same information when
a push certificate is in use. Update the send-pack/receive-pack pair
not to do so.
The notable thing on the receive-pack side is that it makes sure
that there is no command sent over the traditional protocol packet
outside the push certificate. Otherwise a pusher can claim to be
pushing one set of ref updates in the signed certificate while
issuing commands to update unrelated refs, and such an update will
evade later audits.
Finally, start documenting the protocol.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We would want to update the interim protocol so that we do not send
the usual update commands when the push certificate feature is in
use, as the same information is in the certificate. Once that
happens, the push-cert packet may become the only protocol command,
but then there would be no packet behind which to put the feature
request, as we always did.
As we have prepared the receiving end that understands the push-cert
feature to accept the feature request on the first protocol packet
(other than "shallow ", which was an unfortunate historical mistake
that has to come before everything else), we can give the feature
request on the push-cert packet instead of the first update protocol
packet, in preparation for the next step to actually update to the
final protocol.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
While signed tags and commits assert that the objects thusly signed
came from you, who signed these objects, there is not a good way to
assert that you wanted to have a particular object at the tip of a
particular branch. My signing v2.0.1 tag only means I want to call
the version v2.0.1, and it does not mean I want to push it out to my
'master' branch---it is likely that I only want it in 'maint', so
the signature on the object alone is insufficient.
The only assurance to you that 'maint' points at what I wanted to
place there comes from your trust in the hosting site and my
authentication with it, which cannot easily be audited later.
Introduce a mechanism that allows you to sign a "push certificate"
(for the lack of better name) every time you push, asserting that
what object you are pushing to update which ref that used to point
at what other object. Think of it as a cryptographic protection for
ref updates, similar to signed tags/commits but working on an
orthogonal axis.
The basic flow based on this mechanism goes like this:
1. You push out your work with "git push --signed".
2. The sending side learns where the remote refs are as usual,
together with what protocol extension the receiving end
supports. If the receiving end does not advertise the protocol
extension "push-cert", an attempt to "git push --signed" fails.
Otherwise, a text file, that looks like the following, is
prepared in core:
    certificate version 0.1
    pusher Junio C Hamano <gitster@pobox.com> 1315427886 -0700

    7339ca65... 21580ecb... refs/heads/master
    3793ac56... 12850bec... refs/heads/next
The file begins with a few header lines, which may grow as we
gain more experience. The 'pusher' header records the name of
the signer (the value of user.signingkey configuration variable,
falling back to GIT_COMMITTER_{NAME|EMAIL}) and the time of the
certificate generation. After the header, a blank line follows,
followed by a copy of the protocol message lines.
Each line shows the old and the new object name at the tip of
the ref this push tries to update, in the way identical to how
the underlying "git push" protocol exchange tells the ref
updates to the receiving end (by recording the "old" object
name, the push certificate also protects against replaying). It
is expected that new command packet types other than the
old-new-refname kind will be included in the push certificate in the
same way as they would appear in the plain vanilla command packets of
unsigned pushes.
The user then is asked to sign this push certificate using GPG,
formatted in a way similar to how signed tag objects are signed,
and the result is sent to the other side (i.e. receive-pack).
In the protocol exchange, this step comes immediately before the
sender tells what the result of the push should be, which in
turn comes before it sends the pack data.
3. When the receiving end sees a push certificate, the certificate
is written out as a blob. The pre-receive hook can learn about
the certificate by checking the GIT_PUSH_CERT environment variable,
which, if present, tells the object name of this blob, and make
the decision to allow or reject this push. Additionally, the
post-receive hook can also look at the certificate, which may be
a good place to log all the received certificates for later
audits.
Because a push certificate carries the same information as the usual
command packets in the protocol exchange, we can omit the latter
when a push certificate is in use and reduce the protocol overhead.
This however is not included in this patch to make it easier to
review (in other words, the series at this step should never be
released without the remainder of the series, as it implements an
interim protocol that will be incompatible with the final one).
As such, the documentation update for the protocol is left out of
this step.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We use it to make sure that the feature request is sent only once on
the very first request packet (ignoring the "shallow " line, which
was an unfortunate mistake we cannot retroactively fix with existing
receive-pack already deployed in the field), and we set it to "true"
with cmds_sent++, not because we care about the actual number of
updates sent but merely because that is an idiomatic way to do it.
Set it explicitly to one to clarify that the code that uses this
variable only cares about its zero-ness.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The main loop over the remote_refs list inspects the ref status
to see if we need to generate pack data (i.e. a delete-only push
does not need to send any additional data), resets it to the
"expecting the status report" state, and formats the actual update
commands to be sent.
Split the former two out of the main loop, as they will become
conditional in later steps.
Besides, we should have code that does the real thing here, before the
"Finally, tell the other end!" part ;-)
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The variable counts how many non-deleting commands are being sent, but
is only checked for zero-ness to decide if we need to send the pack
data.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A run of 'var ? " var" : ""' fed to a long printf string in a deeply
nested block was hard to read. Move it outside the loop and format
it into a strbuf.
As an added bonus, the trick to add "agent=<agent-name>" by using
two conditionals is replaced by a more readable version.
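The shape of the change, illustratively: build the capability string
once into a strbuf, outside the per-ref loop (flag names are examples).

    struct strbuf cap_buf = STRBUF_INIT;

    if (agent_supported)
        strbuf_addf(&cap_buf, " agent=%s", git_user_agent_sanitized());
    if (args->use_thin_pack)
        strbuf_addstr(&cap_buf, " thin-pack");
    if (args->quiet)
        strbuf_addstr(&cap_buf, " quiet");
    /* ... later, append cap_buf.buf once to the first command packet ... */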
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We tried to avoid sending one extra byte, a NUL with nothing behind
it, to signal that no protocol capabilities are being sent on the first
command packet on the wire, but it just made the code look ugly.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A new helper function ref_update_to_be_sent() decides for each ref
if the update is to be sent based on the status previously set by
set_ref_status_for_push() and also if this is a mirrored push.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
20e8b465 (refactor ref status logic for pushing, 2010-01-08)
restructured the code to set status for each ref to be pushed, but
did not quite go far enough. We inspect the status set earlier by
set_refs_status_for_push() and then perform yet another update to
the status of a ref that is otherwise OK but is to be deleted, to
mark it with REF_STATUS_REJECT_NODELETE when the protocol tells us
never to delete.
Split the latter into a separate loop that comes before we enter the
per-ref loop. This way we would have one less condition to check in
the main loop.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Most struct child_process variables are cleared using memset right after
declaration. Provide a macro, CHILD_PROCESS_INIT, that can be used to
initialize them statically instead. That's shorter, doesn't require a
function call and is slightly more readable (especially given that we
already have STRBUF_INIT, ARGV_ARRAY_INIT etc.).
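Before and after, for illustration:

    /* before */
    struct child_process po;
    memset(&po, 0, sizeof(po));

    /* after */
    struct child_process po = CHILD_PROCESS_INIT;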
Helped-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fetching from a shallow-cloned repository used to be forbidden,
primarily because the codepaths involved were not carefully vetted
and we did not bother supporting such usage. This attempts to allow
object transfer out of a shallow-cloned repository in a controlled
way (i.e. the receiver becomes a shallow repository with truncated
history).
* nd/shallow-clone: (31 commits)
t5537: fix incorrect expectation in test case 10
shallow: remove unused code
send-pack.c: mark a file-local function static
git-clone.txt: remove shallow clone limitations
prune: clean .git/shallow after pruning objects
clone: use git protocol for cloning shallow repo locally
send-pack: support pushing from a shallow clone via http
receive-pack: support pushing to a shallow clone via http
smart-http: support shallow fetch/clone
remote-curl: pass ref SHA-1 to fetch-pack as well
send-pack: support pushing to a shallow clone
receive-pack: allow pushes that update .git/shallow
connected.c: add new variant that runs with --shallow-file
add GIT_SHALLOW_FILE to propagate --shallow-file to subprocesses
receive/send-pack: support pushing from a shallow clone
receive-pack: reorder some code in unpack()
fetch: add --update-shallow to accept refs that update .git/shallow
upload-pack: make sure deepening preserves shallow roots
fetch: support fetching from a shallow repository
clone: support remote shallow repository
...
Commit f2c681cf ("send-pack: support pushing from a shallow clone
via http", 05-12-2013) adds the 'advertise_shallow_grafts_buf'
function as an external symbol.
Noticed by sparse. ("'advertise_shallow_grafts_buf' was not declared.
Should it be static?")
Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Remove a few duplicate implementations of prefix/suffix comparison
functions, and rename them to starts_with and ends_with.
* cc/starts-n-ends-with:
replace {pre,suf}fixcmp() with {starts,ends}_with()
strbuf: introduce starts_with() and ends_with()
builtin/remote: remove postfixcmp() and use suffixcmp() instead
environment: normalize use of prefixcmp() by removing " != 0"
The latter can do everything the former can and is used in many more
places.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Leaving only the function definitions and declarations so that any
new topic in flight can still make use of the old functions, replace
existing uses of the prefixcmp() and suffixcmp() with new API
functions.
The change can be recreated by mechanically applying this:
    $ git grep -l -e prefixcmp -e suffixcmp -- \*.c |
      grep -v strbuf\\.c |
      xargs perl -pi -e '
        s|!prefixcmp\(|starts_with\(|g;
        s|prefixcmp\(|!starts_with\(|g;
        s|!suffixcmp\(|ends_with\(|g;
        s|suffixcmp\(|!ends_with\(|g;
      '
on the result of preparatory changes in this series.
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Up to now git has assumed that all servers are able to fix thin
packs. This is however not always the case.
Document the 'no-thin' capability and prevent send-pack from generating
a thin pack if the server advertises it.
Signed-off-by: Carlos Martín Nieto <cmn@elego.de>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The codepath in which send_pack() calls pack_objects() mistakenly
closed the same file descriptor twice, leading to potentially
closing a wrong file descriptor that was opened in the meantime.
* jl/pack-transfer-avoid-double-close:
Clear fd after closing to avoid double-close error
In send_pack(), clear the fd passed to pack_objects() by setting
it to -1, since pack_objects() closes the fd (via a call to
run_command()). Likewise, in get_pack(), clear the fd passed to
run_command().
Not doing so risks having git_transport_push(), caller of
send_pack(), closing the fd again, possibly incorrectly closing
some other open file; or similarly with fetch_refs_from_pack(),
indirect caller of get_pack().
Signed-off-by: Jens Lindström <jl@opera.com>
Acked-by: Jeff King <peff@peff.net>
Acked-by: Duy Nguyen <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This teaches the deepest part of the callchain for "git push" (and
"git send-pack") to enforce "the old value of the ref must be this,
otherwise fail this push" (aka "compare-and-swap" / "--lockref").
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The definition of "struct ref" in "cache.h", a header file so
central to the system, always confused me. This structure is not
about the local ref used by the sha1-name API to name local objects.
It is what refspecs are expanded into, after finding out what refs
the other side has, to define which refs are to be updated to what
values after the object transfer succeeds. It belongs to "remote.h"
together with "struct refspec".
While we are at it, also move the types and functions related to the
Git transport connection to a new header file connect.h
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Most of the callers of packet_read_line just read into a
static 1000-byte buffer (callers which handle arbitrary
binary data already use LARGE_PACKET_MAX). This works fine
in practice, because:
1. The only variable-sized data in these lines is a ref
name, and refs tend to be a lot shorter than 1000
characters.
2. When sending ref lines, git-core always limits itself
to 1000 byte packets.
However, the only limit given in the protocol specification
in Documentation/technical/protocol-common.txt is
LARGE_PACKET_MAX; the 1000 byte limit is mentioned only in
pack-protocol.txt, and then only describing what we write,
not as a specific limit for readers.
This patch lets us bump the 1000-byte limit to
LARGE_PACKET_MAX. Even though git-core will never write a
packet where this makes a difference, there are two good
reasons to do this:
1. Other git implementations may have followed
protocol-common.txt and used a larger maximum size. We
don't bump into it in practice because it would involve
very long ref names.
2. We may want to increase the 1000-byte limit one day.
Since packets are transferred before any capabilities,
it's difficult to do this in a backwards-compatible
way. But if we bump the size of the buffer the readers can
handle, eventually older versions of git will be
obsolete enough that we can justify bumping the
writers, as well. We don't have plans to do this
anytime soon, but there is no reason not to start the
clock ticking now.
Just bumping all of the reading bufs to LARGE_PACKET_MAX
would waste memory. Instead, since most readers just read
into a temporary buffer anyway, let's provide a single
static buffer that all callers can use. We can further wrap
this detail away by having the packet_read_line wrapper just
use the buffer transparently and return a pointer to the
static storage. That covers most of the cases, and the
remaining ones already read into their own LARGE_PACKET_MAX
buffers.
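The shape of the wrapper, with prototypes simplified (the real ones
differ in detail):

    static char packet_buffer[LARGE_PACKET_MAX];

    char *packet_read_line(int fd, int *len_p)
    {
        int len = packet_read(fd, packet_buffer, sizeof(packet_buffer), 0);
        if (len_p)
            *len_p = len;
        return len ? packet_buffer : NULL;
    }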
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The packets sent during ref negotiation are all terminated
by newline; even though the code to chomp these newlines is
short, we end up doing it in a lot of places.
This patch teaches packet_read_line to auto-chomp the
trailing newline; this lets us get rid of a lot of inline
chomping code.
As a result, some call-sites which are not reading
line-oriented data (e.g., when reading chunks of packfiles
alongside sideband) transition away from packet_read_line to
the generic packet_read interface. This patch converts all
of the existing callsites.
Since the function signature of packet_read_line does not
change (but its behavior does), there is a possibility of
new callsites being introduced in later commits, silently
introducing an incompatibility. However, since a later
patch in this series will change the signature, such a
commit would have to be merged directly into this commit,
not to the tip of the series; we can therefore ignore the
issue.
This is an internal cleanup and should produce no change of
behavior in the normal case. However, there is one corner
case to note. Callers of packet_read_line have never been
able to tell the difference between a flush packet ("0000")
and an empty packet ("0004"), as both cause packet_read_line
to return a length of 0. Readers treat them identically,
even though Documentation/technical/protocol-common.txt says
we must not; it also says that implementations should not
send an empty pkt-line.
By stripping out the newline before the result gets to the
caller, we will now treat the newline-only packet ("0005\n")
the same as an empty packet, which in turn gets treated like
a flush packet. In practice this doesn't matter, as neither
empty nor newline-only packets are part of git's protocols
(at least not for the line-oriented bits, and readers who
are not expecting line-oriented packets will be calling
packet_read directly, anyway). But even if we do decide to
care about the distinction later, it is orthogonal to this
patch. The right place to tighten would be to stop treating
empty packets as flush packets, and this change does not
make doing so any harder.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is just write_or_die by another name. The one
distinction is that write_or_die will treat EPIPE specially
by suppressing error messages. That's fine, as we die by
SIGPIPE anyway (and in the off chance that it is disabled,
write_or_die will simulate it).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This code predates prefixcmp, so it used memcmp along with
static sizes. Replacing these memcmps with prefixcmp makes
the code much more readable, and the lack of static sizes
will make refactoring it in future patches simpler.
Note that we used to be unnecessarily liberal in parsing the
"unpack" status line, and would accept "unpack ok\njunk". No
version of git has ever produced that, and it violates the
BNF in Documentation/technical/pack-protocol.txt. Let's take
this opportunity to tighten the check by converting the
prefix comparison into a strcmp.
While we're in the area, let's also fix a vague error
message that does not follow our usual conventions (it
writes directly to stderr and does not use the "error:"
prefix).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we push to update an existing ref, if:
* the object at the tip of the remote is not a commit; or
* the object we are pushing is not a commit,
it won't be correct to suggest to fetch, integrate and push again,
as the old and new objects will not "merge". We should explain that
the push must be forced when a non-committish object is
involved in such a case.
If we do not have the current object at the tip of the remote, we do
not even know whether that object, when fetched, is something that can
be merged. In such a case, suggesting to pull first just like in the
non-fast-forward case may not be technically correct, but in
practice, most such failures are seen when you try to push your work
to a branch without knowing that somebody else already pushed to
update the same branch since you forked, so "pull first" would work
as a suggestion most of the time. And if the object at the tip is
not a commit, "pull first" will fail without doing any permanent
damage. As a side effect, it also makes the error message the user
will get during the next "push" attempt easier to understand, now
that the user is aware that a non-commit object is involved.
In these cases, the current code already rejects such a push on the
client end, but we used the same error and advice messages as the
ones used when rejecting a non-fast-forward push, i.e. pull from
there and integrate before pushing again.
Introduce new rejection reasons and reword the messages
appropriately.
[jc: with help by Peff on message details]
Signed-off-by: Junio C Hamano <gitster@pobox.com>
References are allowed to update from one commit-ish to another if the
former is an ancestor of the latter. This behavior is oriented to
branches which are expected to move with commits. Tag references are
expected to be static in a repository, though, thus an update to
something under refs/tags/ should be rejected unless the update is
forced.
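A sketch of the rule in send-pack terms; the status value is from this
series, the check itself is illustrative:

    if (!prefixcmp(ref->name, "refs/tags/") &&      /* a tag ref... */
        !ref->force && !args->force_update)
        ref->status = REF_STATUS_REJECT_ALREADY_EXISTS;  /* ...must be forced */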
Signed-off-by: Chris Rorvick <chris@rorvick.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
send_pack() is used by transport.c, which is part of libgit.a, while it
stays in builtin/send-pack.c. Move it to send-pack.c so that we won't get
undefined reference if a program that uses libgit.a happens to pull it
in.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Jeff King <peff@peff.net>
Also marks some more things as const, as needed.
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The list of remote refs in struct transport should be const, because
builtin-fetch will get confused if it changes.
The url in git_connect should be const (and work on a copy) instead of
requiring the caller to copy it.
match_refs doesn't modify the refspecs it gets.
get_fetch_map and get_remote_ref don't change the list they get.
Allow transport get_refs_list methods to modify the struct transport.
Add a function to copy a list of refs, when a function needs a mutable
copy of a const list.
Add a function to check the type of a ref, as per the code in connect.c
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* js/forkexec:
Use the asynchronous function infrastructure to run the content filter.
Avoid a dup2(2) in apply_filter() - start_command() can do it for us.
t0021-conversion.sh: Test that the clean filter really cleans content.
upload-pack: Run rev-list in an asynchronous function.
upload-pack: Move the revision walker into a separate function.
Use the asynchronous function infrastructure in builtin-fetch-pack.c.
Add infrastructure to run a function asynchronously.
upload-pack: Use start_command() to run pack-objects in create_pack_file().
Have start_command() create a pipe to read the stderr of the child.
Use start_command() in builtin-fetch-pack.c instead of explicit fork/exec.
Use run_command() to spawn external diff programs instead of fork/exec.
Use start_command() to run content filters instead of explicit fork/exec.
Use start_command() in git_connect() instead of explicit fork/exec.
Change git_connect() to return a struct child_process instead of a pid_t.
Conflicts:
builtin-fetch-pack.c
* db/fetch-pack: (60 commits)
Define compat version of mkdtemp for systems lacking it
Avoid scary errors about tagged trees/blobs during git-fetch
fetch: if not fetching from default remote, ignore default merge
Support 'push --dry-run' for http transport
Support 'push --dry-run' for rsync transport
Fix 'push --all branch...' error handling
Fix compilation when NO_CURL is defined
Added a test for fetching remote tags when there are no tags.
Fix a crash in ls-remote when refspec expands into nothing
Remove duplicate ref matches in fetch
Restore default verbosity for http fetches.
fetch/push: readd rsync support
Introduce remove_dir_recursively()
bundle transport: fix an alloc_ref() call
Allow abbreviations in the first refspec to be merged
Prevent send-pack from segfaulting when a branch doesn't match
Cleanup unnecessary break in remote.c
Cleanup style nit of 'x == NULL' in remote.c
Fix memory leaks when disconnecting transport instances
Ensure builtin-fetch honors {fetch,transfer}.unpackLimit
...
This prepares the API of git_connect() and finish_connect() to operate on
a struct child_process. Currently, we just use that object as a placeholder
for the pid that we used to return. A follow-up patch will change the
implementation of git_connect() and finish_connect() to make full use
of the object.
Old code had early-return-on-error checks at the calling sites of
git_connect(), but since git_connect() dies on errors anyway, these checks
were removed.
[sp: Corrected style nit of "conn == NULL" to "!conn"]
Signed-off-by: Johannes Sixt <johannes.sixt@telecom.at>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Previously, we updated the tracking refs (which match refs we
are pushing) while generating the list of refs to send.
However, at that point we don't know whether the refs were
accepted.
Instead, we now wait until we get a response code from the
server. If an error was indicated, we don't update any local
tracking refs. Technically some refs could have been updated
on the remote, but since the local ref update is just an
optimization to avoid an extra fetch, we are better off
erring on the side of correctness.
The user-visible message is now generated much later in the
program, and has been tweaked to make more sense.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
* maint:
Document additional 1.5.3.5 fixes in release notes
Avoid 'expr index' on Mac OS X as it isn't supported
filter-branch: update current branch when rewritten
fix filter-branch documentation
helpful error message when send-pack finds no refs in common.
Fix setup_git_directory_gently() with relative GIT_DIR & GIT_WORK_TREE
Correct typos in release notes for 1.5.3.5
There's a number of tricky conflicts between master and
this topic right now due to the rewrite of builtin-push.
Junio must have handled these via rerere; I'd rather not
deal with them again so I'm pre-merging master into the
topic. Besides this topic somehow started to depend on
the strbuf series that was in next, but is now in master.
It no longer compiles on its own without the strbuf API.
* master: (184 commits)
Whip post 1.5.3.4 maintenance series into shape.
Minor usage update in setgitperms.perl
manual: use 'URL' instead of 'url'.
manual: add some markup.
manual: Fix example finding commits referencing given content.
Fix wording in push definition.
Fix some typos, punctuation, missing words, minor markup.
manual: Fix or remove em dashes.
Add a --dry-run option to git-push.
Add a --dry-run option to git-send-pack.
Fix in-place editing functions in convert.c
instaweb: support for Ruby's WEBrick server
instaweb: allow for use of auto-generated scripts
Add 'git-p4 commit' as an alias for 'git-p4 submit'
hg-to-git speedup through selectable repack intervals
git-svn: respect Subversion's [auth] section configuration values
gtksourceview2 support for gitview
fix contrib/hooks/post-receive-email hooks.recipients error message
Support cvs via git-shell
rebase -i: use diff plumbing instead of porcelain
...
Conflicts:
Makefile
builtin-push.c
rsh.c
Implement support for --dry-run, so that it can be used
in calls from git-push. With this flag set, git-send-pack
will not send any updates to the server.
Signed-off-by: Brian Ewins <brian.ewins@gmail.com>
Signed-off-by: Lars Hjemli <hjemli@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Anyplace we talk about the address of a remote repository we always
refer to it as a URL, especially in the configuration file and
.git/remotes where we call it "remote.$n.url" or start the first
line with "URL:". Calling this value a uri within the internal C
code just doesn't jive well with our commonly accepted terms.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A function intended to be called from builtins that update refs
by locking them before writing, especially those that came from
scripts using "git update-ref".
[jc: with minor fixups]
Signed-off-by: Carlos Rica <jasampler@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The function for_each_remote() does exactly what the name
suggests.
The function remote_find_tracking() was extended to be able to
search remote refs for a given local ref. The caller sets
either src or dst (but not both) in the refspec parameter, and
remote_find_tracking() will fill in the other and return 0.
Both changes are required for the next step: simplification of
git-branch's --track functionality.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This means that send-pack and http-push will support pattern refspecs,
so builtin-push.c doesn't have to expand them, and also git push can
just turn --tags into "refs/tags/*", further simplifying
builtin-push.c.
check_ref_format() gets a third "conditionally okay" result for
something that's valid as a pattern but not as a particular ref.
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This also adds a --remote option to send-pack, which specifies the
configured remote being used. It is provided automatically by
git-push, and must match the url (which is still needed, since there
could be multiple urls).
Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Make git notify the user about host resolution/connection attempts.
This is useful both as a progress indicator on slow links, and helps
reassure the user there are no firewall problems.
Signed-off-by: Michael S. Tsirkin <mst@dev.mellanox.co.il>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This mechanically converts strncmp() to use prefixcmp(), but only when
the parameters match specific patterns, so that they can be verified
easily. Leftovers from this will be fixed in a separate step, including
idiotic conversions like

    if (!strncmp("foo", arg, 3))
    =>
    if (!(-prefixcmp(arg, "foo")))
This was done by using this script in px.perl
    #!/usr/bin/perl -i.bak -p
    if (/strncmp\(([^,]+), "([^\\"]*)", (\d+)\)/ && (length($2) == $3)) {
            s|strncmp\(([^,]+), "([^\\"]*)", (\d+)\)|prefixcmp($1, "$2")|;
    }
    if (/strncmp\("([^\\"]*)", ([^,]+), (\d+)\)/ && (length($1) == $3)) {
            s|strncmp\("([^\\"]*)", ([^,]+), (\d+)\)|(-prefixcmp($2, "$1"))|;
    }
and running:
$ git grep -l strncmp -- '*.c' | xargs perl px.perl
Signed-off-by: Junio C Hamano <junkio@cox.net>
For now it's just to get a more descriptive name. Later we might update the
push protocol to run more than one program on the other end. Moreover, this
better matches the corresponding config option remote.<name>.receivepack.
--exec continues to work.
Signed-off-by: Uwe Kleine-König <zeisberg@informatik.uni-freiburg.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
add all supported options to Documentation/git-....txt and the usage strings.
Signed-off-by: Uwe Kleine-König <zeisberg@informatik.uni-freiburg.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
We recently introduced a write_in_full() which would either write
the specified object or emit an error message and fail. In order
to fix the read side we now want to introduce a read_in_full()
but without emitting an error. This patch cleans up the naming
of this family of calls:
1) convert the existing write_or_whine() to write_or_whine_pipe()
to better indicate its pipe specific nature,
2) convert the existing write_in_full() calls to write_or_whine()
to better indicate its nature,
3) introduce a write_in_full() providing a write or fail semantic,
and
4) convert write_or_whine() and write_or_whine_pipe() to use
write_in_full().
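Roughly, the write-or-fail semantic of step 3 looks like this, using
git's xwrite() to retry on EINTR/EAGAIN:

    ssize_t write_in_full(int fd, const void *buf, size_t count)
    {
        const char *p = buf;
        ssize_t total = 0;

        while (count > 0) {
            ssize_t written = xwrite(fd, p, count);
            if (written < 0)
                return -1;
            if (!written) {
                errno = ENOSPC;
                return -1;
            }
            count -= written;
            p += written;
            total += written;
        }
        return total;
    }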
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
When passing the revisions list to pack-objects we do not check for
errors or short writes. Introduce a new write_in_full which will
handle short writes and report errors to the caller. Use this to
short-cut the send on failure, allowing us to wait for and report
the child in case the failure is its fault.
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
send-pack builds a pipeline that runs "rev-list | pack-objects"
and sends the output from pack-objects to the other side, while
feeding the input side of that pipe from itself. However, the
file descriptor that is given to this pipeline (so that it can
be dup2(2)'ed into file descriptor 1 of pack-objects) is closed
by the caller before the complex fork+exec dance! Worse yet,
the caller already dup2's it to 1, so the child process did not
even have to.
I do not understand how this code could possibly have been
working, but it somehow was working by accident.
Merging the sliding mmap() code reveals this problem, presumably
because it keeps one extra file descriptor open for a packfile
and changes the way file descriptors are allocated. I am too
tired to diagnose the problem now, but this seems to be a
sensible fix.
Signed-off-by: Junio C Hamano <junkio@cox.net>
"git push $URL HEAD~6" created a bogus ref HEAD~6 immediately
under $GIT_DIR of the remote repository. While we should keep
refspecs that have arbitrary extended SHA-1 expression on the
source side working (e.g. "HEAD~6:refs/tags/yesterday"), we
should not create a bogus ref on the other end.
Signed-off-by: Junio C Hamano <junkio@cox.net>
This allows you to say
git send-pack $URL :refs/heads/$branch
to delete the named remote branch. The refspec $src:$dst means
replace the destination ref with the object known as $src on the
local side, so this is a natural extension to make an empty $src
mean "No object" to delete the target.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Currently git-push displays progress status for the local packing of
objects to send, but nothing once it starts to push it over the
connection. Having progress status in that latter case is especially
nice when pushing lots of objects over a slow network link.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This adds a "int *flag" parameter to resolve_ref() and makes
for_each_ref() family to call callback function with an extra
"int flag" parameter. They are used to give two bits of
information (REF_ISSYMREF and REF_ISPACKED) about the ref.
Signed-off-by: Junio C Hamano <junkio@cox.net>
This is a long overdue fix to the API for for_each_ref() family
of functions. It allows the callers to specify a callback data
pointer, so that the caller does not have to use static
variables to communicate with the callback function.
The updated for_each_ref() family takes a function of type
int (*fn)(const char *, const unsigned char *, void *)
and a void pointer as parameters, and calls the function with
the name of the ref and its SHA-1 with the caller-supplied void
pointer as parameters.
The commit updates two callers, builtin-name-rev.c and
builtin-pack-refs.c as an example.
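The updated callback shape, with a trivial example caller; the typedef
name is illustrative:

    typedef int (*each_ref_fn)(const char *refname,
                               const unsigned char *sha1,
                               void *cb_data);

    int for_each_ref(each_ref_fn fn, void *cb_data);

    /* count refs without any static state */
    static int count_refs(const char *refname, const unsigned char *sha1,
                          void *cb_data)
    {
        unsigned int *count = cb_data;
        (*count)++;
        return 0;
    }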
Signed-off-by: Junio C Hamano <junkio@cox.net>
When we are generating packs to update remote repositories we
want to supply as much information as possible about the revisions
that already exist to rev-list in order to optimise the pack as much
as possible. We need to pass two revisions for each branch we are
updating in the remote repository and one for each additional branch.
Where the remote repository has numerous branches we can run out
of command line space to pass them.
Utilise the git-rev-list --stdin mode to allow unlimited numbers
of revision constraints. This allows us to move back to the much
simpler unordered revision selection code.
[jc: added some comments in the code to describe the pipe flow
a bit.]
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This abstracts away the size of the hash values when copying them
from memory location to memory location, much as the introduction
of hashcmp abstracted away hash value comparison.
A few call sites were using char* rather than unsigned char* so
I added the cast rather than opening hashcpy up to take void*. This is a
reasonable tradeoff as most call sites already use unsigned char*
and the existing hashcmp is also declared to be unsigned char*.
[jc: Split the patch into the "master" part, to be followed by a
patch for merge-recursive.c which is not in "master" yet.
Fixed the cast in the latter hunk to combine-diff.c which was
wrong in the original.
Also converted ones left-over in combine-diff.c, diff-lib.c and
upload-pack.c ]
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Introduces global inline:
hashcmp(const unsigned char *sha1, const unsigned char *sha2)
Uses memcmp for comparison and returns the result based on the length of
the hash name (a future runtime decision).
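The helper is essentially the following; the length is SHA-1's 20 bytes
today, kept in one place so it can become a runtime decision later:

    static inline int hashcmp(const unsigned char *sha1, const unsigned char *sha2)
    {
        return memcmp(sha1, sha2, 20);
    }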
Acked-by: Alex Riesen <raa.lkml@gmail.com>
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
[jc: I needed to hand merge the changes to the updated codebase,
so the result needs to be checked.]
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This updates the type-enumeration constants introduced to reduce
the memory footprint of "struct object" to match the type bits
already used in the packfile format, by removing the former
(i.e. TYPE_* constant macros) and using the latter (i.e. enum
object_type) throughout the code for consistency.
Eventually we can stop passing around the "type strings"
entirely, and this will help - no confusion between two different
integer enumerations.
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This makes git-peek-remote able to basically do everything that
git-ls-remote does (but obviously just for the native protocol, so no
http[s]: or rsync: support).
The default behaviour is the same, but you can now give a mixture of
"--refs", "--tags" and "--heads" flags, where "--refs" forces
git-peek-remote to only show real refs (ie none of the fakey tag lookups,
but also not the special pseudo-refs like HEAD and MERGE_HEAD).
The "--tags" and "--heads" flags respectively limit the output to just
regular tags and heads, of course.
You can still also ask to limit them by name too.
You can combine the flags, so
git peek-remote --refs --tags .
will show all local _true_ tags, without the generated tag lookups
(compare the output without the "--refs" flag).
And "--tags --heads" will show both tags and heads, but will avoid (for
example) any special refs outside of the standard locations.
I'm also planning on adding a "--ignore-local" flag that allows us to ask
it to ignore any refs that we already have in the local tree, but that's
an independent thing.
All this is obviously gearing up to making "git fetch" cheaper.
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This shrinks "struct object" by a small amount, by getting rid of the
"struct type *" pointer and replacing it with a 3-bit bitfield instead.
In addition, we merge the bitfields and the "flags" field, which
incidentally should also remove a useless 4-byte padding from the object
when in 64-bit mode.
Now, our "struct object" is still too damn large, but it's now less
obviously bloated, and of the remaining fields, only the "util" (which is
not used by most things) is clearly something that should be eventually
discarded.
This shrinks the "git-rev-list --all" memory use by about 2.5% on the
kernel archive (and, perhaps more importantly, on the larger mozilla
archive). That may not sound like much, but I suspect it's more on a
64-bit platform.
There are other remaining inefficiencies (the parent lists, for example,
probably have horrible malloc overhead), but this was pretty obvious.
Most of the patch is just changing the comparison of the "type" pointer
from one of the constant string pointers to the appropriate new TYPE_xxx
small integer constant.
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This makes sure that many commands that take refs on the command
line to honor core.warnambiguousrefs configuration. Earlier,
the commands affected by this patch did not read the
configuration file.
Signed-off-by: Junio C Hamano <junkio@cox.net>