Allow "git push" request to be signed, so that it can be verified and
audited, using the GPG signature of the person who pushed, that the
tips of branches at a public repository really point the commits
the pusher wanted to, without having to "trust" the server.
* jc/push-cert: (24 commits)
receive-pack::hmac_sha1(): copy the entire SHA-1 hash out
signed push: allow stale nonce in stateless mode
signed push: teach smart-HTTP to pass "git push --signed" around
signed push: fortify against replay attacks
signed push: add "pushee" header to push certificate
signed push: remove duplicated protocol info
send-pack: send feature request on push-cert packet
receive-pack: GPG-validate push certificates
push: the beginning of "git push --signed"
pack-protocol doc: typofix for PKT-LINE
gpg-interface: move parse_signature() to where it should be
gpg-interface: move parse_gpg_output() to where it should be
send-pack: clarify that cmds_sent is a boolean
send-pack: refactor inspecting and resetting status and sending commands
send-pack: rename "new_refs" to "need_pack_data"
receive-pack: factor out capability string generation
send-pack: factor out capability string generation
send-pack: always send capabilities
send-pack: refactor decision to send update per ref
send-pack: move REF_STATUS_REJECT_NODELETE logic a bit higher
...
Record the URL of the intended recipient of a push (after
anonymizing it if it has authentication material) in a new "pushee
URL" header. Because the networking configuration (SSH tunnels,
proxies, etc.) on the pushing user's side varies, the receiving
repository may not know the single canonical URL all the pushing
users would refer to it by (besides, many sites allow pushing over
ssh://host/path and https://host/path protocols to the same
repository but with a different local part of the path). So this
value may not be reliably used for replay-attack prevention
purposes, but it still serves as a human-readable hint to identify
the repository the certificate refers to.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
While signed tags and commits assert that the objects thus signed
came from you, who signed them, there is not a good way to assert
that you wanted to have a particular object at the tip of a
particular branch. My signing the v2.0.1 tag only means I want to
call that version v2.0.1; it does not mean I want to push it out to
my 'master' branch. It is likely that I only want it in 'maint', so
the signature on the object alone is insufficient.
The only assurance to you that 'maint' points at what I wanted to
place there comes from your trust in the hosting site and my
authentication with it, which cannot easily be audited later.
Introduce a mechanism that allows you to sign a "push certificate"
(for lack of a better name) every time you push, asserting which
objects you are pushing to update which refs, and what those refs
used to point at. Think of it as a cryptographic protection for ref
updates, similar to signed tags/commits but working on an
orthogonal axis.
The basic flow based on this mechanism goes like this:
1. You push out your work with "git push --signed".
2. The sending side learns where the remote refs are as usual,
together with what protocol extension the receiving end
supports. If the receiving end does not advertise the protocol
extension "push-cert", an attempt to "git push --signed" fails.
Otherwise, a text file that looks like the following is
prepared in core:
certificate version 0.1
pusher Junio C Hamano <gitster@pobox.com> 1315427886 -0700
7339ca65... 21580ecb... refs/heads/master
3793ac56... 12850bec... refs/heads/next
The file begins with a few header lines, which may grow as we
gain more experience. The 'pusher' header records the name of
the signer (the value of the user.signingkey configuration
variable, falling back to GIT_COMMITTER_{NAME|EMAIL}) and the
time of the certificate generation. After the header comes a
blank line, followed by a copy of the protocol message lines.
Each line shows the old and the new object name at the tip of
the ref this push tries to update, in the same way the
underlying "git push" protocol exchange tells the ref updates
to the receiving end (by recording the "old" object name, the
push certificate also protects against replaying). It is
expected that new command packet types other than the
old-new-refname kind will be included in the push certificate
in the same way as they would appear in the plain vanilla
command packets of unsigned pushes.
The user then is asked to sign this push certificate using GPG,
formatted in a way similar to how signed tag objects are signed,
and the result is sent to the other side (i.e. receive-pack).
In the protocol exchange, this step comes immediately before the
sender tells what the result of the push should be, which in
turn comes before it sends the pack data.
3. When the receiving end sees a push certificate, the certificate
is written out as a blob. The pre-receive hook can learn about
the certificate by checking the GIT_PUSH_CERT environment
variable, which, if present, tells the object name of this
blob, and can use it to decide whether to allow or reject this
push. Additionally, the post-receive hook can also look at the
certificate, which may be a good place to log all the received
certificates for later audits (see the sketch below).
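As an illustration only (and not part of this patch), a hook
consuming this information could look like the sketch below; it is
a hypothetical pre-receive hook written in C that merely reports
the certificate blob and reads the usual update lines from its
standard input.

    /*
     * Hypothetical pre-receive hook, for illustration only.  It reads
     * the "<old-sha1> SP <new-sha1> SP <refname>" lines every
     * pre-receive hook receives on stdin and reports whether a push
     * certificate was recorded.
     */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
            const char *cert = getenv("GIT_PUSH_CERT");
            char line[1024];

            if (cert)
                    fprintf(stderr, "push certificate stored as blob %s\n", cert);

            while (fgets(line, sizeof(line), stdin))
                    ; /* a real hook would inspect each ref update here */

            return 0; /* exiting non-zero would reject the push */
    }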
Because a push certificate carries the same information as the usual
command packets in the protocol exchange, we can omit the latter
when a push certificate is in use and reduce the protocol overhead.
This however is not included in this patch to make it easier to
review (in other words, the series at this step should never be
released without the remainder of the series, as it implements an
interim protocol that will be incompatible with the final one).
As such, the documentation update for the protocol is left out of
this step.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Most struct child_process variables are cleared using memset right
after declaration. Provide a macro, CHILD_PROCESS_INIT, that can be
used to initialize them statically instead. That's shorter, doesn't
require a function call and is slightly more readable (especially
given that we already have STRBUF_INIT, ARGV_ARRAY_INIT etc.).
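A minimal before/after sketch (not lifted from any particular
converted caller; cp.git_cmd is just an example field):

    /* before: clear with memset() right after declaration */
    struct child_process cp;
    memset(&cp, 0, sizeof(cp));
    cp.git_cmd = 1;

    /* after: static initialization with the new macro */
    struct child_process cp = CHILD_PROCESS_INIT;
    cp.git_cmd = 1;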
Helped-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The function starts by creating a copy of the static buffer
returned by real_path, but forgets to free it in the error
code paths. We can solve this by jumping to the cleanup code
that is already there.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Using memset and then manually setting values of the string-list
members is not future-proof, as the internal representation of
string-list may change at any time.
Use `string_list_init()` or the STRING_LIST_INIT_* macros instead of
memset.
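For illustration, the two preferred forms might look like this
(assuming string_list_init() takes the list and a flag saying
whether to duplicate the strings):

    /* static initialization; STRING_LIST_INIT_NODUP would store
       borrowed pointers instead of duplicated strings */
    struct string_list list = STRING_LIST_INIT_DUP;

    /* or initialize an already-declared structure at runtime */
    string_list_init(&list, 1); /* 1: duplicate strings on insertion */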
Signed-off-by: Tanay Abhra <tanayabh@gmail.com>
Reviewed-by: Matthieu Moy <Matthieu.Moy@imag.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Use the existing argv_array member instead of building the arguments
list using a string array and a strbuf. This way we don't need magic
number constants and allocations are cleaned up for us automatically
by run_command().
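A hedged sketch of the resulting pattern (the actual command run by
the converted code differs), using the CHILD_PROCESS_INIT
initializer described above:

    struct child_process cp = CHILD_PROCESS_INIT;

    cp.git_cmd = 1;
    argv_array_push(&cp.args, "rev-list");
    argv_array_push(&cp.args, "--count");
    argv_array_push(&cp.args, "HEAD");
    if (run_command(&cp))
            die("rev-list failed");
    /* no manual cleanup: run_command() releases cp.args for us */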
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The skip_prefix() function returns a pointer to the content
past the prefix, or NULL if the prefix was not found. While
this is nice and simple, in practice it is hard to use
for two reasons:
1. When you want to conditionally skip or keep the string
as-is, you have to introduce a temporary variable.
For example:
tmp = skip_prefix(buf, "foo");
if (tmp)
buf = tmp;
2. It is verbose to check the outcome in a conditional, as
you need extra parentheses to silence compiler
warnings. For example:
if ((cp = skip_prefix(buf, "foo")))
/* do something with cp */
Both of these make it harder to use for long if-chains, and
we tend to use starts_with() instead. However, the first line
of "do something" is often to then skip forward in buf past
the prefix, either using a magic constant or with an extra
strlen(3) (which is generally computed at compile time, but
means we are repeating ourselves).
This patch refactors skip_prefix() to return a simple boolean,
and to provide the pointer value as an out-parameter. If the
prefix is not found, the out-parameter is untouched. This
lets you write:
if (skip_prefix(arg, "foo ", &arg))
do_foo(arg);
else if (skip_prefix(arg, "bar ", &arg))
do_bar(arg);
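A minimal sketch of an implementation with these semantics (not
necessarily the exact code that lands with this patch):

    static int skip_prefix(const char *str, const char *prefix,
                           const char **out)
    {
            size_t len = strlen(prefix);

            if (strncmp(str, prefix, len))
                    return 0; /* not found; *out is left untouched */
            *out = str + len;
            return 1;
    }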
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When pushing, we do not even look at our push refspecs until
after we have made contact with the remote receive-pack and
gotten its list of refs. This means that we may go to some
work, including asking the user to log in, before realizing
we have simple errors like "git push origin matser".
We cannot catch all refspec problems, since fully evaluating
the refspecs requires knowing what the remote side has. But
we can do a quick sanity check of the local side and catch a
few simple error cases.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fetching from a shallow-cloned repository used to be forbidden,
primarily because the codepaths involved were not carefully vetted
and we did not bother supporting such usage. This attempts to allow
object transfer out of a shallow-cloned repository in a controlled
way (i.e. the receiver becomes a shallow repository with truncated
history).
* nd/shallow-clone: (31 commits)
t5537: fix incorrect expectation in test case 10
shallow: remove unused code
send-pack.c: mark a file-local function static
git-clone.txt: remove shallow clone limitations
prune: clean .git/shallow after pruning objects
clone: use git protocol for cloning shallow repo locally
send-pack: support pushing from a shallow clone via http
receive-pack: support pushing to a shallow clone via http
smart-http: support shallow fetch/clone
remote-curl: pass ref SHA-1 to fetch-pack as well
send-pack: support pushing to a shallow clone
receive-pack: allow pushes that update .git/shallow
connected.c: add new variant that runs with --shallow-file
add GIT_SHALLOW_FILE to propagate --shallow-file to subprocesses
receive/send-pack: support pushing from a shallow clone
receive-pack: reorder some code in unpack()
fetch: add --update-shallow to accept refs that update .git/shallow
upload-pack: make sure deepening preserves shallow roots
fetch: support fetching from a shallow repository
clone: support remote shallow repository
...
Be more careful when parsing remote repository URL given in the
scp-style host:path notation.
* tb/clone-ssh-with-colon-for-port:
git_connect(): use common return point
connect.c: refactor url parsing
git_connect(): refactor the port handling for ssh
git fetch: support host:/~repo
t5500: add test cases for diag-url
git fetch-pack: add --diag-url
git_connect: factor out discovery of the protocol and its parts
git_connect: remove artificial limit of a remote command
t5601: add tests for ssh
t5601: remove clear_ssh, refactor setup_ssh_wrapper
The same steps are done as when --update-shallow is not given. The
only difference is that we now add all shallow commits in "ours" and
"theirs" to .git/shallow (aka "step 8").
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This patch just puts together pieces from the 8 steps patch. We stop
at step 7 and reject refs that require new shallow commits.
Note that, by rejecting refs that require new shallow commits, we
leave dangling objects in the repo, which become "object islands" by
the next "git fetch" of the same source.
If at the first fetch our "ours" set is empty and we do practically
nothing at step 7, "ours" is full at the next fetch and we may need
to walk through commits for the reachability test. Room for
improvement.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Cloning from a shallow repository does not follow the "8 steps for
new .git/shallow" because if it did we would need to get through
step 6 for all refs. That means walking commits all the way down to
the bottom.
Instead the rule to create .git/shallow is simpler and, more
importantly, cheap: if a shallow commit is found in the pack, it's
probably used (i.e. reachable from some refs), so we add it. Others
are dropped.
One may notice this method seems flawed by the word "probably". A
shallow commit may not be reachable from any refs at all if it's
attached to an object island (a group of objects that are not
reachable by any refs).
If that object island is not complete, a new fetch request may send
more objects to connect it to some ref. At that time, because we
incorrectly installed the shallow commit in this island, the user will
not see anything after that commit (fsck is still ok). This is not
desired.
Given that object islands are rare (C Git never sends such islands
for security reasons) and do not really harm repository integrity, a
tradeoff is made to surprise the user occasionally but work faster
every day.
A new option --strict could be added later that follows exactly the 8
steps. "git prune" can also learn to remove dangling objects _and_ the
shallow commits that are attached to them from .git/shallow.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
No callers pass a non-empty pointer as shallow_points at this
stage. As a result, all clients still refuse to talk to a shallow
repository on the other end.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The latter can do everything the former can and is used in many more
places.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Make the function is_local() in transport.c public, rename it to
url_is_local_not_ssh() and use it in both transport.c and connect.c.
Use a protocol "local" for URLs for the local file system.
One note about using file:// under Windows:
The (absolute) path on a Unix-like system typically starts with "/".
When the host is empty, it can be omitted, so that a shell scriptlet
url=file://$pwd
will give a URL like "file:///home/user/repo".
Windows does not have the same concept of a root directory located in "/".
When parsing the URL, allow "file://C:/user/repo"
(even though RFC 1738 indicates that "file:///C:/user/repo" should
be used).
Signed-off-by: Torsten Bögershausen <tboegi@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Leaving only the function definitions and declarations so that any
new topic in flight can still make use of the old functions, replace
existing uses of the prefixcmp() and suffixcmp() with new API
functions.
The change can be recreated by mechanically applying this:
$ git grep -l -e prefixcmp -e suffixcmp -- \*.c |
grep -v strbuf\\.c |
xargs perl -pi -e '
s|!prefixcmp\(|starts_with\(|g;
s|prefixcmp\(|!starts_with\(|g;
s|!suffixcmp\(|ends_with\(|g;
s|suffixcmp\(|!ends_with\(|g;
'
on the result of preparatory changes in this series.
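For example, a single (hypothetical) call site changes like this
under the rules above:

    /* before */
    if (!prefixcmp(refname, "refs/heads/"))
            refname += strlen("refs/heads/");

    /* after */
    if (starts_with(refname, "refs/heads/"))
            refname += strlen("refs/heads/");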
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The auto-tag-following code in "git fetch" tries to reuse the same
transport twice when the serving end does not cooperate and does
not give tags that point to commits that are asked for as part of
the primary transfer. Unfortunately, the Git-aware transport helper
interface is not designed to be used more than once, hence this
does not work over smart-http transfer.
* jc/transport-do-not-use-connect-twice-in-fetch:
builtin/fetch.c: Fix a sparse warning
fetch: work around "transport-take-over" hack
fetch: refactor code that fetches leftover tags
fetch: refactor code that prepares a transport
fetch: rename file-scope global "transport" to "gtransport"
t5802: add test for connect helper
A Git-aware "connect" transport allows the "transport_take_over" to
redirect generic transport requests like fetch(), push_refs() and
get_refs_list() to the native Git transport handling methods. The
take-over process replaces transport->data with fake data that
these method implementations understand.
While this hack works OK for a single request, it breaks when the
transport needs to make more than one request. The transport->data
that used to hold the necessary information for the specific helper
to work correctly is destroyed during the take-over process.
One codepath where this matters is "git fetch" in auto-follow mode;
when it does not get all the tags that ought to point at the history
it got (which can be determined by looking at the peeled tags in the
initial advertisement) from the primary transfer, it internally
makes a second request to complete the fetch. Because the
"take-over" hack has already destroyed the data necessary to talk to
the transport helper by the time this happens, the second request
cannot ask the helper to make another connection to fetch these
additional tags.
Mark such a transport as "cannot_reuse", and use a separate
transport to perform the backfill fetch in order to work around
this breakage.
Note that this problem does not manifest itself when running t5802,
because our upload-pack gives you all the necessary auto-followed
tags during the primary transfer. You would need to step through
"git fetch" in a debugger, stop immediately after the primary
transfer finishes and writes these auto-followed tags, remove the
tag references and repack/prune the repository to convince the
"find-non-local-tags" procedure that the primary transfer failed to
give us all the necessary tags, and then let it continue, in order
to trigger the bug in the secondary transfer this patch fixes.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This teaches the deepest part of the callchain for "git push" (and
"git send-pack") to enforce "the old value of the ref must be this,
otherwise fail this push" (aka "compare-and-swap" / "--lockref").
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This plugs the push_cas_option data collected by the command line
option parser into the transport system with a new function
apply_push_cas(), which is called after match_push_refs() has
already been called.
At this point, we know which remote we are talking to, and what
remote refs we are going to update, so we can fill in the details
that may have been missing from the command line, such as
(1) which actual refname at the remote the abbreviated refname
the user gave us corresponds to; and
(2) which remote-tracking branch in our local repository to read
the value of the object to expect at the remote from,
to populate the old_sha1_expect[] field of each of the remote refs.
As stated in the documentation, the use of remote-tracking branch
as the default is a tentative one, and we may come up with a better
logic as we gain experience.
Still nobody uses this information, which is the topic of the next
patch.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The definition of "struct ref" in "cache.h", a header file so
central to the system, always confused me. This structure is not
about the local refs used by the sha1-name API to name local
objects. It is what refspecs are expanded into, after finding out
what refs the other side has, to define which refs are to be updated
to which values after the object transfer succeeds. It belongs to
"remote.h" together with "struct refspec".
While we are at it, also move the types and functions related to the
Git transport connection to a new header file connect.h
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Documentation and some comments still refer to files in builtin/
as 'builtin-*.[cho]'. Update these to show the correct location.
Signed-off-by: Phil Hord <hordp@cisco.com>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Assisted-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In order to make sure the cloned repository is good, we run "rev-list
--objects --not --all $new_refs" on the repository. This is expensive
on large repositories. This patch attempts to mitigate the impact in
this special case.
In the "good" clone case, we only have one pack. If all of the
following are met, we can be sure that all objects reachable from the
new refs exist, which is the intention of running "rev-list ...":
- all refs point to an object in the pack
- there are no dangling pointers in any object in the pack
- no objects in the pack point to objects outside the pack
The second and third checks can be done with the help of index-pack
as a slight variation of the --strict check (which introduces a new
condition for the shortcut: pack transfer must be used and the
number of objects must be large enough to call index-pack). The
first is checked in check_everything_connected after we get an "ok"
from index-pack.
"index-pack + new checks" is still faster than the current "index-pack
+ rev-list", which is the whole point of this patch. If any of the
conditions fail, we fall back to the good old but expensive "rev-list
..". In that case it's even more expensive because we have to pay for
the new checks in index-pack. But that should only happen when the
other side is either buggy or malicious.
Cloning linux-2.6 over file://

              before        after
    real    3m25.693s    2m53.050s
    user     5m2.037s    4m42.396s
    sys     0m13.750s    0m16.574s

A more realistic test with ssh:// over wireless

              before        after
    real   11m26.629s    10m4.213s
    user    5m43.196s    5m19.444s
    sys     0m35.812s    0m37.630s
This shortcut is not applied to shallow clones, partly because shallow
clones should have no more objects than a usual fetch and the cost of
rev-list is acceptable, partly to avoid dealing with corner cases when
grafting is involved.
This shortcut does not apply to the unpack-objects code path
either, because the number of objects must be small in order to
trigger that code path.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Clean up pkt-line API, implementation and its callers to make them
more robust.
* jk/pkt-line-cleanup:
do not use GIT_TRACE_PACKET=3 in tests
remote-curl: always parse incoming refs
remote-curl: move ref-parsing code up in file
remote-curl: pass buffer straight to get_remote_heads
teach get_remote_heads to read from a memory buffer
pkt-line: share buffer/descriptor reading implementation
pkt-line: provide a LARGE_PACKET_MAX static buffer
pkt-line: move LARGE_PACKET_MAX definition from sideband
pkt-line: teach packet_read_line to chomp newlines
pkt-line: provide a generic reading function with options
pkt-line: drop safe_write function
pkt-line: move a misplaced comment
write_or_die: raise SIGPIPE when we get EPIPE
upload-archive: use argv_array to store client arguments
upload-archive: do not copy repo name
send-pack: prefer prefixcmp over memcmp in receive_status
fetch-pack: fix out-of-bounds buffer offset in get_ack
upload-pack: remove packet debugging harness
upload-pack: do not add duplicate objects to shallow list
upload-pack: use get_sha1_hex to parse "shallow" lines
To a human reader, it is quite obvious that cmp is assigned before
it is used, but gcc 4.6.3 that ships with Ubuntu 12.04 is among
those that do not get this right.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* maint:
diff.c: diff.renamelimit => diff.renameLimit in message
wt-status: fix possible use of uninitialized variable
fast-import: clarify "inline" logic in file_change_m
run-command: always set failed_errno in start_command
transport: drop "int cmp = cmp" hack
drop some obsolete "x = x" compiler warning hacks
fast-import: use pointer-to-pointer to keep list tail
According to 47ec794, this initialization is meant to
squelch an erroneous uninitialized variable warning from gcc
4.0.1. That version is quite old at this point, and gcc 4.1
and up handle it fine, with one exception. There seems to be
a regression in gcc 4.6.3, which produces the warning;
however, gcc versions 4.4.7 and 4.7.2 do not.
Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Allows requests to fetch objects at any tip of refs (including
hidden ones). It seems that there may be use cases even outside
Gerrit (e.g. $gmane/215701).
* jc/fetch-raw-sha1:
fetch: fetch objects by their exact SHA-1 object names
upload-pack: optionally allow fetching from the tips of hidden refs
fetch: use struct ref to represent refs to be fetched
parse_fetch_refspec(): clarify the codeflow a bit
The new option "--follow-tags" tells "git push" to push annotated
tags that are missing from the other side and that can be reached by
the history that is otherwise pushed out.
For example, if you are using the "simple", "current", or "upstream"
push, you would ordinarily push the history leading to the commit at
your current HEAD and nothing else. With this option, you would
also push all annotated tags that can be reached from that commit to
the other side.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Now that we can read packet data from memory as easily as a
descriptor, get_remote_heads can take either one as a
source. This will allow further refactoring in remote-curl.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A failure to push due to non-ff while on an unborn branch
dereferenced a NULL pointer when showing an error message.
* ft/transport-report-segv:
push: fix segfault when HEAD points nowhere
Even though "git fetch" has full infrastructure to parse refspecs to
be fetched and match them against the list of refs to come up with
the final list of refs to be fetched, the list of refs that are
requested to be fetched were internally converted to a plain list of
strings at the transport layer and then passed to the underlying
fetch-pack driver.
Stop this conversion and instead pass around an array of refs.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A failure to push due to non-ff while on an unborn branch
dereferenced a NULL pointer when showing an error message.
* ft/transport-report-segv:
push: fix segfault when HEAD points nowhere
Improve the error and advice messages given locally when "git push"
refuses an update because it cannot compute fast-forwardness, by
separating these cases from the normal "not a fast-forward; merge
first and push again" case.
* jc/push-reject-reasons:
push: finishing touches to explain REJECT_ALREADY_EXISTS better
push: introduce REJECT_FETCH_FIRST and REJECT_NEEDS_FORCE
push: further simplify the logic to assign rejection reason
push: further clean up fields of "struct ref"
After a push of a branch other than the current branch fails with a
non-fast-forward error, and if you are still on an unborn branch,
the code recently added to report the failure dereferenced a NULL
pointer while checking the name of the current branch.
Signed-off-by: Fraser Tweedale <frase@frase.id.au>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we push to update an existing ref, if:
* the object at the tip of the remote is not a commit; or
* the object we are pushing is not a commit,
it won't be correct to suggest to fetch, integrate and push again,
as the old and new objects will not "merge". We should explain that
the push must be forced when a non-committish object is involved in
such a case.
If we do not have the current object at the tip of the remote, we do
not even know whether that object, when fetched, is something that
can be merged. In such a case, suggesting to pull first, just like
in the non-fast-forward case, may not be technically correct, but in
practice most such failures are seen when you try to push your work
to a branch without knowing that somebody else already pushed to
update the same branch since you forked, so "pull first" would work
as a suggestion most of the time. And if the object at the tip is
not a commit, "pull first" will fail without doing any permanent
damage. As a side effect, it also makes the error message the user
will get during the next "push" attempt easier to understand, now
that the user is aware that a non-commit object is involved.
In these cases, the current code already rejects such a push on the
client end, but we used the same error and advice messages as the
ones used when rejecting a non-fast-forward push, i.e. pull from
there and integrate before pushing again.
Introduce new rejection reasons and reword the messages
appropriately.
[jc: with help by Peff on message details]
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The "nonfastforward" and "update" fields are only used while
deciding what value to assign to the "status" locally in a single
function. Remove them from the "struct ref".
The "requires_force" field is not used to decide if the proposed
update requires a --force option to succeed, or to record such a
decision made elsewhere. It is used by status reporting code that
the particular update was "forced". Rename it to "forced_update",
and move the code to assign to it around to further clarify how it
is used and what it is used for.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add support for a pre-push hook which can be used to determine if the
set of refs to be pushed is suitable for the target repository. The
hook is run with two arguments specifying the name and location of the
destination repository.
Information about what is to be pushed is provided by sending lines of
the following form to the hook's standard input:
<local ref> SP <local sha1> SP <remote ref> SP <remote sha1> LF
If the hook exits with a non-zero status, the push will be aborted.
This will allow the script to determine if the push is acceptable based
on the target repository and branch(es), the commits which are to be
pushed, and even the source branches in some cases.
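As an illustration (hooks are usually shell scripts, but any
executable works), a toy pre-push hook that refuses deletions,
consuming the input format above, could be sketched in C like this:

    /*
     * Hypothetical pre-push hook, for illustration only.  argv[1] is
     * the name of the destination repository and argv[2] its
     * location; each stdin line carries the four fields shown above.
     * A ref deletion is pushed with an all-zero local sha1.
     */
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
            char lref[512], lsha1[64], rref[512], rsha1[64];

            if (argc > 2)
                    fprintf(stderr, "pushing to %s (%s)\n", argv[1], argv[2]);

            while (scanf("%511s %63s %511s %63s", lref, lsha1, rref, rsha1) == 4) {
                    if (strspn(lsha1, "0") == strlen(lsha1)) {
                            fprintf(stderr, "refusing to delete %s\n", rref);
                            return 1; /* non-zero status aborts the push */
                    }
            }
            return 0;
    }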
Signed-off-by: Aaron Schrab <aaron@schrab.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
References are allowed to update from one commit-ish to another if the
former is an ancestor of the latter. This behavior is oriented to
branches which are expected to move with commits. Tag references are
expected to be static in a repository, though, thus an update to
something under refs/tags/ should be rejected unless the update is
forced.
Signed-off-by: Chris Rorvick <chris@rorvick.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a flag for indicating an update to a reference requires force.
Currently the `nonfastforward` flag is used for this when generating the
status message. A separate flag insulates dependent logic from the
details of set_ref_status_for_push().
Signed-off-by: Chris Rorvick <chris@rorvick.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Advising the user to fetch and merge only makes sense if the rejected
reference is a branch. If none of the rejections are for branches, just
tell the user the reference already exists.
Signed-off-by: Chris Rorvick <chris@rorvick.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Pass all rejection reasons back from transport_push(). The logic is
simpler and more flexible with regard to providing useful feedback.
Signed-off-by: Chris Rorvick <chris@rorvick.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>