Commit Graph

289 Commits

Author SHA1 Message Date
René Scharfe
248f66ed8e run-command: use strbuf_addstr() for adding a string to a strbuf
Patch generated with Coccinelle and contrib/coccinelle/strbuf.cocci.
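
As a rough illustration of the kind of rewrite such a semantic patch
performs (the variable names below are made up, not taken from the
actual change):

  /* before: formatting a plain string through strbuf_addf() */
  strbuf_addf(&buf, "%s", str);

  /* after: add the string directly */
  strbuf_addstr(&buf, str);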

Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-25 09:49:15 -07:00
Nguyễn Thái Ngọc Duy
090a09272a run-command.c: print new cwd in trace_run_command()
If a command sets a new env variable GIT_DIR=.git, we need more context
to know what that '.git' is relative to.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-01-19 10:49:20 -08:00
Nguyễn Thái Ngọc Duy
c61a975df1 run-command.c: print env vars in trace_run_command()
Occasionally submodule code could execute new commands with GIT_DIR set
to some submodule. GIT_TRACE prints just the command line, which makes
it hard to tell that the command is not really executed on this
repository.

Print the env delta (compared to parent environment) in this case.

Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-01-19 10:49:20 -08:00
Nguyễn Thái Ngọc Duy
21dfc5e08f run-command.c: print program 'git' when tracing git_cmd mode
We normally print the full command line, including the program and its
arguments. When git_cmd is set, we have a special code path to run the
right "git" program, and child_process.argv[0] will not contain the
program name anymore. As a result, we print just the command
arguments.

I thought it was a regression when the code was refactored and git_cmd
added, but apparently it's not. git_cmd mode was introduced before
tracing was added in 8852f5d704 (run_command(): respect GIT_TRACE -
2008-07-07) so it's more like an oversight in 8852f5d704.

Fix it by printing the program name "git" in git_cmd mode. It's nice to
have now, but it will be more important later when we start to print
env variables too, in shell syntax. The lack of a program name would
look confusing then.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-01-19 10:49:20 -08:00
Nguyễn Thái Ngọc Duy
e73dd78699 run-command.c: introduce trace_run_command()
This is the same as the old code that uses trace_argv_printf() in
run-command.c. This function will be improved in later patches to
print more information from struct child_process.
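
A minimal sketch of what such a helper could start out as, assuming it
simply wraps the trace_argv_printf() call it replaces:

  static void trace_run_command(const struct child_process *cp)
  {
          trace_argv_printf(cp->argv, "trace: run_command:");
  }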

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-01-19 10:49:20 -08:00
Damien Marié
f805a00a39 run-command: add hint when a hook is ignored
When a hook is present but the file is not set as executable, git will
ignore the hook.
For now this is silent, which can be confusing.

This commit adds this warning to improve the situation:

  hint: The 'pre-commit' hook was ignored because it's not set as executable.
  hint: You can disable this warning with `git config advice.ignoredHook false`

To allow the old use case of enabling/disabling hooks via the executable
flag, a new setting is introduced: advice.ignoredHook.

Signed-off-by: Damien Marié <damien@dam.io>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-10 13:21:46 +09:00
René Scharfe
0e187d758c run-command: use ALLOC_ARRAY
Use the macro ALLOC_ARRAY to allocate an array.  This is shorter and
easier, as it automatically infers the size of elements.

Patch generated with Coccinelle and contrib/coccinelle/array.cocci.
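
For illustration only (pfd and nfds are invented names, not necessarily
what the patch touches), the transformation looks like this:

  /* before: element size computed by hand */
  pfd = xmalloc(sizeof(*pfd) * nfds);

  /* after: ALLOC_ARRAY infers the element size from the pointer */
  ALLOC_ARRAY(pfd, nfds);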

Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-03 08:42:57 +09:00
Junio C Hamano
0869277033 Merge branch 'js/run-process-parallel-api-fix' into maint
API fix.

* js/run-process-parallel-api-fix:
  run_processes_parallel: change confusing task_cb convention
2017-08-23 14:33:49 -07:00
Johannes Schindelin
c1e860f1dc run_processes_parallel: change confusing task_cb convention
By declaring the task_cb parameter of type `void **`, the signature of
the get_next_task method suggests that the "task-specific cookie" can be
defined in that method, and the signatures of the start_failure and of
the task_finished methods declare that parameter of type `void *`,
suggesting that those methods are mere users of said cookie.

That convention makes a lot of sense, because the tasks are pretty
much dead when one of the latter two methods is called: there would be
little use to reset that cookie at that point because nobody would be
able to see the change afterwards.

However, this is not what the code actually does. For all three methods,
it passes the *address* of pp->children[i].data.

As reasoned above, this behavior makes no sense. So let's change the
implementation to adhere to the convention suggested by the signatures.
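
Roughly, the convention suggested by the signatures looks like this (a
sketch of the callback typedefs, not necessarily matching the header
verbatim):

  /* may allocate a task-specific cookie and store it through task_cb */
  typedef int (*get_next_task_fn)(struct child_process *cp,
                                  struct strbuf *out,
                                  void *pp_cb,
                                  void **task_cb);

  /* the later callbacks only consume the cookie */
  typedef int (*start_failure_fn)(struct strbuf *out,
                                  void *pp_cb,
                                  void *task_cb);

  typedef int (*task_finished_fn)(int result,
                                  struct strbuf *out,
                                  void *pp_cb,
                                  void *task_cb);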

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Acked-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-07-21 11:58:46 -07:00
Brandon Williams
940283101c run-command: restrict PATH search to executable files
In some situations run-command will incorrectly try (and fail) to
execute a directory instead of an executable file.  This was observed by
having a directory called "ssh" in $PATH before the real ssh and trying
to use the ssh protocol, resulting in the following:

	$ git ls-remote ssh://url
	fatal: cannot exec 'ssh': Permission denied

It gets worse: run-command will even try to execute a non-executable
file if it precedes the executable version of a file on the PATH.  For
example, if PATH=~/bin1:~/bin2:~/bin3 and there exists a
directory 'git-hello' in 'bin1', a non-executable file 'git-hello' in
bin2 and an executable file 'git-hello' (which prints "Hello World!") in
bin3 the following will occur:

	$ git hello
	fatal: cannot exec 'git-hello': Permission denied

This is due to only checking 'access()' when locating an executable in
PATH, which doesn't distinguish between files and directories.  Instead
use 'is_executable()', which checks that the path is to a regular,
executable file.  Now run-command won't try to execute the directory or
non-executable file 'git-hello':

	$ git hello
	Hello World!

which matches what execvp(3) would have done when asked to execute
git-hello with such a $PATH.
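
A minimal sketch of such a check, assuming is_executable() boils down
to a stat() plus mode test (the real helper lives in help.c and also
has Windows-specific handling):

  static int is_executable(const char *name)
  {
          struct stat st;

          if (stat(name, &st) || !S_ISREG(st.st_mode))
                  return 0; /* missing, a directory, or not a regular file */
          return st.st_mode & S_IXUSR;
  }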

Reported-by: Brian Hatfield <bhatfield@google.com>
Signed-off-by: Brandon Williams <bmwill@google.com>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-25 23:17:36 -07:00
Brandon Williams
38124a40e4 run-command: expose is_executable function
Move the logic for 'is_executable()' from help.c to run_command.c and
expose it so that callers from outside help.c can access the function.
This is to enable run-command to query whether a file is executable in
a future patch.

Signed-off-by: Brandon Williams <bmwill@google.com>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-25 18:45:29 -07:00
Eric Wong
45afb1ca9c run-command: block signals between fork and execve
Signal handlers of the parent firing in the forked child may
have unintended side effects.  Rather than auditing every signal
handler we have and will ever have, block signals while forking
and restore default signal handlers in the child before execve.

Restoring default signal handlers is required because
execve does not unblock signals, it only restores default
signal handlers.  So we must restore them with sigprocmask
before execve, leaving a window when signal handlers
we control can fire in the child.  Continue ignoring
ignored signals, but reset the rest to defaults.
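
A simplified sketch of that fork/exec window, under the assumption
that the real code also preserves SIG_IGN dispositions and checks for
errors:

  sigset_t all, saved;
  pid_t pid;

  sigfillset(&all);
  sigprocmask(SIG_SETMASK, &all, &saved); /* block everything around fork() */
  pid = fork();
  if (!pid) {
          int sig;
          struct sigaction sa;

          for (sig = 1; sig < NSIG; sig++)
                  if (!sigaction(sig, NULL, &sa) && sa.sa_handler != SIG_IGN)
                          signal(sig, SIG_DFL); /* keep ignored signals ignored */
          sigprocmask(SIG_SETMASK, &saved, NULL); /* execve() would not unblock */
          /* ... execve() ... */
  }
  sigprocmask(SIG_SETMASK, &saved, NULL); /* parent restores its own mask */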

Similarly, disable pthread cancellation to future-proof our code
in case we start using cancellation, as cancellation is
implemented with signals in glibc.

Signed-off-by: Eric Wong <e@80x24.org>
Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-20 17:55:32 -07:00
Brandon Williams
e503cd6ed3 run-command: add note about forking and threading
All non-Async-Signal-Safe functions (e.g. malloc and die) were removed
between 'fork' and 'exec' in start_command in order to avoid potential
deadlocking when forking while multiple threads are running.  This
deadlocking is possible when a thread (other than the one forking) has
acquired a lock and didn't get around to releasing it before the fork.
This leaves the lock in a locked state in the resulting process with no
hope of it ever being released.

Add a note describing this potential pitfall before the call to 'fork()'
so people working in this section of the code know to only use
Async-Signal-Safe functions in the child process.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-20 17:55:32 -07:00
Brandon Williams
53fa6753b3 run-command: handle dup2 and close errors in child
Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-20 17:55:32 -07:00
Brandon Williams
79319b1949 run-command: eliminate calls to error handling functions in child
All of our standard error handling paths have the potential to
call malloc or take stdio locks; so we must avoid them inside
the forked child.

Instead, the child only writes an 8-byte struct atomically to
the parent through the notification pipe to propagate an error.
All user-visible error reporting happens from the parent; the
child even avoids functions like atexit(3) and exit(3).
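
Something along these lines, where the enum, struct and helper names
are assumptions made for illustration rather than the exact code, and
child_notifier stands for the write end of the notification pipe:

  enum child_errcode { CHILD_ERR_CHDIR, CHILD_ERR_DUP2, CHILD_ERR_EXECVE };

  struct child_err {
          enum child_errcode err; /* which step failed in the child */
          int syserr;             /* errno at that point */
  };

  static void child_die(enum child_errcode err)
  {
          struct child_err buf;

          buf.err = err;
          buf.syserr = errno;
          /* one small write to the pipe is atomic; the parent reports it */
          xwrite(child_notifier, &buf, sizeof(buf));
          _exit(1);
  }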

Helped-by: Eric Wong <e@80x24.org>
Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-20 17:55:32 -07:00
Brandon Williams
db015a284e run-command: don't die in child when duping /dev/null
Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-20 17:55:32 -07:00
Brandon Williams
ae25394b4c run-command: prepare child environment before forking
In order to avoid allocation between 'fork()' and 'exec()' prepare the
environment to be used in the child process prior to forking.

Switch to using 'execve()' so that the constructed child environment
can be used in the exec'd process.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-20 17:55:32 -07:00
Brandon Williams
e3a434468f run-command: use the async-signal-safe execv instead of execvp
Convert the function used to exec from 'execvp()' to 'execv()' as the (p)
variant of exec isn't async-signal-safe and has the potential to call malloc
during the path resolution it performs.  Instead we simply do the path
resolution ourselves during the preparation stage prior to forking.  There also
don't exist any portable (p) variants which also take in an environment to use
in the exec'd process.  This allows easy migration to using 'execve()' in a
future patch.

Also, as noted in [1], in the event of an ENOEXEC the (p) variants of
exec will attempt to execute the command by interpreting it with the
'sh' utility.  To maintain this functionality, if 'execv()' fails with
ENOEXEC, start_command will attempt to execute the command by
interpreting it with 'sh'.

[1] http://pubs.opengroup.org/onlinepubs/009695399/functions/exec.html
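
A rough sketch of that fallback, assuming sh_argv (i.e. { "sh",
argv[0], argv[1], ..., NULL }) was prepared before fork() alongside
the regular argv, since building it in the child would not be
async-signal-safe:

  execve(argv[0], (char *const *)argv, (char *const *)envp);
  if (errno == ENOEXEC)
          execve("/bin/sh", (char *const *)sh_argv, (char *const *)envp);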

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-20 17:55:32 -07:00
Brandon Williams
3967e25be1 run-command: prepare command before forking
According to [1] we need to only call async-signal-safe operations between fork
and exec.  Using malloc to build the argv array isn't async-signal-safe.

In order to avoid allocation between 'fork()' and 'exec()' prepare the
argv array used in the exec call prior to forking the process.

[1] http://pubs.opengroup.org/onlinepubs/009695399/functions/fork.html

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-20 17:55:32 -07:00
Junio C Hamano
6756b58ebc Merge branch 'jk/execv-dashed-external'
Fix for NO_PTHREADS build.

* jk/execv-dashed-external:
  run-command: fix segfault when cleaning forked async process
2017-03-24 13:07:34 -07:00
Jeff King
7b91929ba0 run-command: fix segfault when cleaning forked async process
Callers of the run-command API may mark a child as
"clean_on_exit"; it gets added to a list and killed when the
main process dies.  Since commit 46df6906f
(execv_dashed_external: wait for child on signal death,
2017-01-06), we respect an extra "wait_after_clean" flag,
which we expect to find in the child_process struct.

When Git is built with NO_PTHREADS, we start "struct
async" processes by forking rather than spawning a thread.
The resulting processes get added to the cleanup list but
they don't have a child_process struct, and the cleanup
function ends up dereferencing NULL.

We should notice this case and assume that the processes do
not need to be waited for (i.e., the same behavior they had
before 46df6906f).

Reported-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-18 10:29:15 -07:00
Junio C Hamano
cddbda4bc8 Merge branch 'js/mingw-hooks-with-exe-suffix'
Names of the various hook scripts must be spelled exactly, but on
Windows, an .exe binary must be named with .exe suffix; notice
$GIT_DIR/hooks/<hookname>.exe as a valid <hookname> hook.

* js/mingw-hooks-with-exe-suffix:
  mingw: allow hooks to be .exe files
2017-02-02 13:36:57 -08:00
Johannes Schindelin
235be51fbe mingw: allow hooks to be .exe files
Executable files on Windows need to have the extension '.exe', otherwise
they do not work. Extend the hook lookup to consider not just the
hard-coded names, but also the names extended by the custom
STRIP_EXTENSION, which is defined as '.exe' on Windows.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-01-30 08:49:43 -08:00
Jeff King
46df6906f3 execv_dashed_external: wait for child on signal death
When you hit ^C to interrupt a git command going to a pager,
this usually leaves the pager running. But when a dashed
external is in use, the pager ends up in a funny state and
quits (but only after eating one more character from the
terminal!). This fixes it.

Explaining the reason will require a little background.

When git runs a pager, it's important for the git process to
hang around and wait for the pager to finish, even though it
has no more data to feed it. This is because git spawns the
pager as a child, and thus the git process is the session
leader on the terminal. After it dies, the pager will finish
its current read from the terminal (eating the one
character), and then get EIO trying to read again.

When you hit ^C, that sends SIGINT to git and to the pager,
and it's a similar situation.  The pager ignores it, but the
git process needs to hang around until the pager is done. We
addressed that long ago in a3da882120 (pager: do
wait_for_pager on signal death, 2009-01-22).

But when you have a dashed external (or an alias pointing to
a builtin, which will re-exec git for the builtin), there's
an extra process in the mix. For instance, running:

  $ git -c alias.l=log l

will end up with a process tree like:

  git (parent)
    \
     git-log (child)
      \
       less (pager)

If you hit ^C, SIGINT goes to all of them. The pager ignores
it, and the child git process will end up in wait_for_pager().
But the parent git process will die, and the usual EIO
trouble happens.

So we really want the parent git process to wait_for_pager(),
but of course it doesn't know anything about the pager at
all, since it was started by the child.  However, we can
have it wait on the git-log child, which in turn is waiting
on the pager. And that's what this patch does.

There are a few design decisions here worth explaining:

  1. The new feature is attached to run-command's
     clean_on_exit feature. Partly this is convenience,
     since that feature already has a signal handler that
     deals with child cleanup.

     But it's also a meaningful connection. The main reason
     that dashed externals use clean_on_exit is to bind the
     two processes together. If somebody kills the parent
     with a signal, we propagate that to the child (in this
     instance with SIGINT, we do propagate but it doesn't
     matter because the original signal went to the whole
     process group). Likewise, we do not want the parent
     to go away until the child has done so.

     In a traditional Unix world, we'd probably accomplish
     this binding by just having the parent execve() the
     child directly. But since that doesn't work on Windows,
     everything goes through run_command's more spawn-like
     interface.

  2. We do _not_ automatically waitpid() on any
     clean_on_exit children. For dashed externals this makes
     sense; we know that the parent is doing nothing but
     waiting for the child to exit anyway. But with other
     children, it's possible that the child, after getting
     the signal, could be waiting on the parent to do
     something (like closing a descriptor). If we were to
     wait on such a child, we'd end up in a deadlock. So
     this errs on the side of caution, and lets callers
     enable the feature explicitly.

  3. When we send children the cleanup signal, we send all
     the signals first, before waiting on any children. This
     is to avoid the case where one child might be waiting
     on another one to exit, causing a deadlock. We inform
     all of them that it's time to die before reaping any.

     In practice, there is only ever one dashed external run
     from a given process, so this doesn't matter much now.
     But it future-proofs us if other callers start using
     the wait_after_clean mechanism.

There's no automated test here, because it would end up racy
and unportable. But it's easy to reproduce the situation by
running the log command given above and hitting ^C.
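
For callers that do opt in, the knob sits right next to clean_on_exit
(a usage sketch, not a quote from the patch):

  struct child_process cmd = CHILD_PROCESS_INIT;

  cmd.clean_on_exit = 1;
  cmd.wait_after_clean = 1; /* on signal death, also wait for this child */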

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-01-09 13:41:40 -08:00
Lars Schneider
ac2fbaa674 run-command: add clean_on_exit_handler
Some processes might want to perform cleanup tasks before Git kills them
due to the 'clean_on_exit' flag. Let's give them an interface for doing
this. The feature is used in a subsequent patch.

Please note that the cleanup callback is not executed if Git dies of a
signal. The reason is that only "async-signal-safe" functions would be
allowed to be called in that case. Since we cannot control what
functions the callback will use, we do not support that case. See
507d7804 for more details.
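
A hedged usage sketch (the callback body, its name and tmp_path are
made up):

  static void remove_tmpfile(struct child_process *process)
  {
          /* not called when git itself dies of a signal */
          unlink_or_warn(tmp_path);
  }

  /* at the call site */
  cmd.clean_on_exit = 1;
  cmd.clean_on_exit_handler = remove_tmpfile;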

Helped-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Lars Schneider <larsxschneider@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-10-17 11:36:50 -07:00
Lars Schneider
b992fe104e run-command: move check_pipe() from write_or_die to run_command
Move check_pipe() to run_command and make it public. This is necessary
to call the function from pkt-line in a subsequent patch.

While at it, make async_exit() static to run_command.c as it is no
longer used from outside.

Signed-off-by: Lars Schneider <larsxschneider@gmail.com>
Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-10-17 11:36:49 -07:00
Junio C Hamano
d05d0e9966 Merge branch 'ab/hooks'
"git rev-parse --git-path hooks/<hook>" learned to take
core.hooksPath configuration variable (introduced during 2.9 cycle)
into account.

* ab/hooks:
  rev-parse: respect core.hooksPath in --git-path
2016-08-19 15:34:16 -07:00
Johannes Schindelin
9445b4921e rev-parse: respect core.hooksPath in --git-path
The idea of the --git-path option is not only to avoid having to
prefix paths with the output of --git-dir all the time, but also to
respect overrides for specific common paths inside the .git directory
(e.g. `git rev-parse --git-path objects` will report the value of the
environment variable GIT_OBJECT_DIRECTORY, if set).

When introducing the core.hooksPath setting, we forgot to adjust
git_path() accordingly. This patch fixes that.

While at it, revert the special-casing of core.hooksPath in
run-command.c, as it is now no longer needed.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-08-16 12:03:26 -07:00
Jeff King
96335bcf4d run-command: add pipe_command helper
We already have capture_command(), which captures the stdout
of a command in a way that avoids deadlocks. But sometimes
we need to do more I/O, like capturing stderr as well, or
sending data to stdin. It's easy to write code that
deadlocks racily in these situations depending on how fast
the command reads its input, or in which order it writes its
output.

Let's give callers an easy interface for doing this the
right way, similar to what capture_command() did for the
simple case.
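
A hedged usage sketch (the command, the input buffer and its length
are illustrative; the 0s are size hints for the output strbufs):

  struct child_process cmd = CHILD_PROCESS_INIT;
  struct strbuf out = STRBUF_INIT, err = STRBUF_INIT;

  argv_array_push(&cmd.args, "some-filter");
  if (pipe_command(&cmd, input, input_len, &out, 0, &err, 0))
          die("filter failed: %s", err.buf);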

The whole thing is backed by a generic poll() loop that can
feed an arbitrary number of buffers to descriptors, and fill
an arbitrary number of strbufs from other descriptors. This
seems like overkill, but the resulting code is actually a
bit cleaner than just handling the three descriptors
(because the output code for stdout/stderr is effectively
duplicated, so being able to loop is a benefit).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-06-17 17:03:56 -07:00
Junio C Hamano
40cfc95856 Merge branch 'nd/error-errno'
The code for warning_errno/die_errno has been refactored and a new
error_errno() reporting helper is introduced.

* nd/error-errno: (41 commits)
  wrapper.c: use warning_errno()
  vcs-svn: use error_errno()
  upload-pack.c: use error_errno()
  unpack-trees.c: use error_errno()
  transport-helper.c: use error_errno()
  sha1_file.c: use {error,die,warning}_errno()
  server-info.c: use error_errno()
  sequencer.c: use error_errno()
  run-command.c: use error_errno()
  rerere.c: use error_errno() and warning_errno()
  reachable.c: use error_errno()
  mailmap.c: use error_errno()
  ident.c: use warning_errno()
  http.c: use error_errno() and warning_errno()
  grep.c: use error_errno()
  gpg-interface.c: use error_errno()
  fast-import.c: use error_errno()
  entry.c: use error_errno()
  editor.c: use error_errno()
  diff-no-index.c: use error_errno()
  ...
2016-05-17 14:38:28 -07:00
Junio C Hamano
6675f501f6 Merge branch 'ab/hooks'
A new configuration variable core.hooksPath allows customizing
where the hook directory is.

* ab/hooks:
  hooks: allow customizing where the hook directory is
  githooks.txt: minor improvements to the grammar & phrasing
  githooks.txt: amend dangerous advice about 'update' hook ACL
  githooks.txt: improve the intro section
2016-05-17 14:38:17 -07:00
Nguyễn Thái Ngọc Duy
fbcb0e0659 run-command.c: use error_errno()
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-05-09 12:29:08 -07:00
Ævar Arnfjörð Bjarmason
867ad08a26 hooks: allow customizing where the hook directory is
Change the hardcoded lookup for .git/hooks/* to optionally look in
$(git config core.hooksPath)/* instead.

This is essentially a more intrusive version of the git-init ability to
specify hooks at init time via init templates.

The difference between that facility and this feature is that this can
be set up after the fact via e.g. ~/.gitconfig or /etc/gitconfig to
apply for all your personal repositories, or all repositories on the
system.

I plan on using this on a centralized Git server where users can create
arbitrary repositories under /gitroot, but I'd like to manage all the
hooks that should be run centrally via a unified dispatch mechanism.

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-05-04 16:25:13 -07:00
Junio C Hamano
d689301043 Merge branch 'jk/push-client-deadlock-fix'
"git push" from a corrupt repository that attempts to push a large
number of refs deadlocked; the thread to relay rejection notices
for these ref updates blocked on writing them to the main thread,
after the main thread at the receiving end notices that the push
failed and decides not to read these notices and return a failure.

* jk/push-client-deadlock-fix:
  t5504: drop sigpipe=ok from push tests
  fetch-pack: isolate sigpipe in demuxer thread
  send-pack: isolate sigpipe in demuxer thread
  run-command: teach async threads to ignore SIGPIPE
  send-pack: close demux pipe before finishing async process
2016-04-29 12:59:08 -07:00
Jeff King
c792d7b6ce run-command: teach async threads to ignore SIGPIPE
Async processes can be implemented as separate forked
processes, or as threads (depending on the NO_PTHREADS
setting). In the latter case, if an async thread gets
SIGPIPE, it takes down the whole process. This is obviously
bad if the main process was not otherwise going to die, but
even if we were going to die, it means the main process does
not have a chance to report a useful error message.

There's also the small matter that forked async processes
will not take the main process down on a signal, meaning git
will behave differently depending on the NO_PTHREADS
setting.

This patch fixes it by adding a new flag to "struct async"
to block SIGPIPE just in the async thread. In theory, this
should always be on (which makes async threads behave more
like async processes), but we would first want to make sure
that each async process we spawn is careful about checking
return codes from write() and would not spew endlessly into
a dead pipe. So let's start with it as optional, and we can
enable it for specific sites in future patches.

The natural name for this option would be "ignore_sigpipe",
since that's what it does for the threaded case. But since
that name might imply that we are ignoring it in all cases
(including the separate-process one), let's call it
"isolate_sigpipe". What we are really asking for is
isolation. I.e., not to have our main process taken down by
signals spawned by the async process. How that is
implemented is up to the run-command code.
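
A usage sketch, assuming the demuxing callback already exists
(my_demux and its data are made up):

  struct async demux;

  memset(&demux, 0, sizeof(demux));
  demux.proc = my_demux;
  demux.isolate_sigpipe = 1; /* SIGPIPE in the thread must not kill the parent */
  if (start_async(&demux))
          die("demuxer thread failed");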

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-04-20 13:33:53 -07:00
Junio C Hamano
bdebbeb334 Merge branch 'sb/submodule-parallel-update'
A major part of "git submodule update" has been ported to C to take
advantage of the recently added framework to run download tasks in
parallel.

* sb/submodule-parallel-update:
  clone: allow an explicit argument for parallel submodule clones
  submodule update: expose parallelism to the user
  submodule helper: remove double 'fatal: ' prefix
  git submodule update: have a dedicated helper for cloning
  run_processes_parallel: rename parameters for the callbacks
  run_processes_parallel: treat output of children as byte array
  submodule update: direct error message to stderr
  fetching submodules: respect `submodule.fetchJobs` config option
  submodule-config: drop check against NULL
  submodule-config: keep update strategy around
2016-04-06 11:39:01 -07:00
Junio C Hamano
b7a6ec609f Merge branch 'jk/tighten-alloc' into maint
* jk/tighten-alloc: (23 commits)
  compat/mingw: brown paper bag fix for 50a6c8e
  ewah: convert to REALLOC_ARRAY, etc
  convert ewah/bitmap code to use xmalloc
  diff_populate_gitlink: use a strbuf
  transport_anonymize_url: use xstrfmt
  git-compat-util: drop mempcpy compat code
  sequencer: simplify memory allocation of get_message
  test-path-utils: fix normalize_path_copy output buffer size
  fetch-pack: simplify add_sought_entry
  fast-import: simplify allocation in start_packfile
  write_untracked_extension: use FLEX_ALLOC helper
  prepare_{git,shell}_cmd: use argv_array
  use st_add and st_mult for allocation size computation
  convert trivial cases to FLEX_ARRAY macros
  use xmallocz to avoid size arithmetic
  convert trivial cases to ALLOC_ARRAY
  convert manual allocations to argv_array
  argv-array: add detach function
  add helpers for allocating flex-array structs
  harden REALLOC_ARRAY and xcalloc against size_t overflow
  ...
2016-03-10 11:13:43 -08:00
Junio C Hamano
bbe90e7950 Merge branch 'sb/submodule-parallel-fetch'
Simplify the two callback functions that are triggered when the
child process terminates to avoid misuse of the child-process
structure that has already been cleaned up.

* sb/submodule-parallel-fetch:
  run-command: do not pass child process data into callbacks
2016-03-04 13:46:30 -08:00
Stefan Beller
aa71049485 run_processes_parallel: rename parameters for the callbacks
The refs code has a similar pattern of passing around 'struct strbuf *err',
which is strictly used for error reporting. This is not the case here,
as the strbuf is used to accumulate all the output (whether it is error
or not) for the user. Rename it to 'out'.

Suggested-by: Jonathan Nieder <jrnieder@gmail.com>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-03-01 11:57:19 -08:00
Stefan Beller
2dac9b5637 run_processes_parallel: treat output of children as byte array
We do not want the output to be interrupted by a NUL byte, so we
cannot use raw fputs. Introduce strbuf_write to avoid having long
arguments in run-command.c.

Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-03-01 11:57:19 -08:00
Stefan Beller
2a73b3dad0 run-command: do not pass child process data into callbacks
The expected way to pass data into the callback is to pass it via
the customizable callback pointer. The error reporting in
default_{start_failure, task_finished} is not user friendly enough
that we would want to encourage using the child data for such purposes.

Furthermore, the struct child_process data is cleaned up by the
run-command API before we access it in the callbacks, leading to
use-after-free situations.

Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-03-01 09:42:01 -08:00
Junio C Hamano
8ef250c559 Merge branch 'jk/epipe-in-async'
Handling of errors while writing into our internal asynchronous
process has been made more robust, which reduces flakiness in our
tests.

* jk/epipe-in-async:
  t5504: handle expected output from SIGPIPE death
  test_must_fail: report number of unexpected signal
  fetch-pack: ignore SIGPIPE in sideband demuxer
  write_or_die: handle EPIPE in async threads
2016-02-26 13:37:26 -08:00
Junio C Hamano
11529ecec9 Merge branch 'jk/tighten-alloc'
Update various codepaths to avoid manually-counted malloc().

* jk/tighten-alloc: (22 commits)
  ewah: convert to REALLOC_ARRAY, etc
  convert ewah/bitmap code to use xmalloc
  diff_populate_gitlink: use a strbuf
  transport_anonymize_url: use xstrfmt
  git-compat-util: drop mempcpy compat code
  sequencer: simplify memory allocation of get_message
  test-path-utils: fix normalize_path_copy output buffer size
  fetch-pack: simplify add_sought_entry
  fast-import: simplify allocation in start_packfile
  write_untracked_extension: use FLEX_ALLOC helper
  prepare_{git,shell}_cmd: use argv_array
  use st_add and st_mult for allocation size computation
  convert trivial cases to FLEX_ARRAY macros
  use xmallocz to avoid size arithmetic
  convert trivial cases to ALLOC_ARRAY
  convert manual allocations to argv_array
  argv-array: add detach function
  add helpers for allocating flex-array structs
  harden REALLOC_ARRAY and xcalloc against size_t overflow
  tree-diff: catch integer overflow in combine_diff_path allocation
  ...
2016-02-26 13:37:16 -08:00
Jeff King
9658846ce3 write_or_die: handle EPIPE in async threads
When write_or_die() sees EPIPE, it treats it specially by
converting it into a SIGPIPE death. We obviously cannot
ignore it, as the write has failed and the caller expects us
to die. But likewise, we cannot just call die(), because
printing any message at all would be a nuisance during
normal operations.

However, this is a problem if write_or_die() is called from
a thread. Our raised signal ends up killing the whole
process, when logically we just need to kill the thread
(after all, if we are ignoring SIGPIPE, there is good reason
to think that the main thread is expecting to handle it).

Inside an async thread, the die() code already does the
right thing, because we use our custom die_async() routine,
which calls pthread_join(). So ideally we would piggy-back
on that, and simply call:

  die_quietly_with_code(141);

or similar. But refactoring the die code to do this is
surprisingly non-trivial. The die_routines themselves handle
both printing and the decision of the exit code. Every one
of them would have to be modified to take new parameters for
the code, and to tell us to be quiet.

Instead, we can just teach write_or_die() to check for the
async case and handle it specially. We do have to build an
interface to abstract the async exit, but it's simple and
self-contained. If we had many call-sites that wanted to do
this die_quietly_with_code(), this approach wouldn't scale
as well, but we don't. This is the only place where we do this
weird exit trick.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-02-25 13:51:45 -08:00
Jeff King
20574f551b prepare_{git,shell}_cmd: use argv_array
These functions transform an existing argv into one suitable
for exec-ing or spawning via git or a shell. We can use an
argv_array in each to avoid dealing with manual counting and
allocation.

This also makes the memory allocation more clear and fixes
some leaks. In prepare_shell_cmd, we would sometimes
allocate a new string with "$@" in it and sometimes not,
meaning the caller could not correctly free it. On the
non-Windows side, we are in a child process which will
exec() or exit() immediately, so the leak isn't a big deal.
On Windows, though, we use spawn() from the parent process,
and leak a string for each shell command we run. On top of
that, the Windows code did not free the allocated argv array
at all (but does for the prepare_git_cmd case!).

By switching both of these functions to write into an
argv_array, we can consistently free the result as
appropriate.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-02-22 14:51:09 -08:00
Junio C Hamano
5135d1c3d2 Merge branch 'nd/clear-gitenv-upon-use-of-alias'
d95138e6 (setup: set env $GIT_WORK_TREE when work tree is set, like
$GIT_DIR, 2015-06-26) attempted to work around a glitch in alias
handling by overwriting GIT_WORK_TREE environment variable to
affect subprocesses when set_git_work_tree() gets called, which
resulted in a rather unpleasant regression to "clone" and "init".
Try to address the same issue by always restoring the environment
and respawning the real underlying command when handling alias.

* nd/clear-gitenv-upon-use-of-alias:
  run-command: don't warn on SIGPIPE deaths
  git.c: make sure we do not leak GIT_* to alias scripts
  setup.c: re-fix d95138e (setup: set env $GIT_WORK_TREE when ..
  git.c: make it clear save_env() is for alias handling only
2016-01-20 11:43:26 -08:00
Jeff King
ac78663b0d run-command: don't warn on SIGPIPE deaths
When git executes a sub-command, we print a warning if the
command dies due to a signal, but make an exception for
"uninteresting" cases like SIGINT and SIGQUIT (since the
user presumably just hit ^C).

We should make a similar exception for SIGPIPE, because it's
an expected and uninteresting return in most cases; it
generally means the user quit the pager before git had
finished generating all output.  This used to be very hard
to trigger in practice, because:

  1. We only complain if we see a real SIGPIPE death, not
     the shell-induced 141 exit code. This means that
     anything we run via the shell does not trigger the
     warning, which includes most non-trivial aliases.

  2. The common case for SIGPIPE is the user quitting the
     pager before git has finished generating all output.
     But if the user triggers a pager with "-p", we redirect
     the git wrapper's stderr to that pager, too.  Since the
     pager is dead, it means that the message goes nowhere.

  3. You can see it if you run your own pager, like
     "git foo | head". But that only happens if "foo" is a
     non-builtin (so it doesn't work with "log", for
     example).

However, it may become more common after 86d26f2, which
teaches alias to re-exec builtins rather than running them
in the same process. This case doesn't trigger (1), as we
don't need a shell to run a git command. It doesn't trigger
(2), because the pager is not started by the original git,
but by the inner re-exec of git. And it doesn't trigger (3),
because builtins are treated more like non-builtins in this
case.

Given how flaky this message already is (e.g., you cannot
even know whether you will see it, as git optimizes out some
shell invocations behind the scenes based on the contents of
the command!), and that it is unlikely to ever provide
useful information, let's suppress it for all cases of
SIGPIPE.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-12-29 11:05:11 -08:00
Stefan Beller
c553c72eed run-command: add an asynchronous parallel child processor
This allows running external commands in parallel with ordered output
on stderr.

If we run external commands in parallel we cannot pipe the output directly
to our stdout/err as it would get mixed up. So each process's output will
flow through a pipe, which we buffer. One subprocess can be directly
piped to our stdout/err for low-latency feedback to the user.

Example:
Let's assume we have 5 submodules A,B,C,D,E and each fetch takes a
different amount of time as the different submodules vary in size, then
the output of fetches in sequential order might look like this:

 time -->
 output: |---A---| |-B-| |-------C-------| |-D-| |-E-|

When we schedule these submodules into maximal two parallel processes,
a schedule and sample output over time may look like this:

process 1: |---A---| |-D-| |-E-|

process 2: |-B-| |-------C-------|

output:    |---A---|B|---C-------|DE

So A will be perceived as if it ran normally in the single-child
version. As B has finished by the time A is done, we can dump its whole
progress buffer on stderr, such that it looks like it finished in no
time. Once that is done, C is determined to be the visible child and
its progress will be reported in real time.

So this way of output is really good for human consumption, as it only
changes the timing, not the actual output.

For machine consumption the output needs to be prepared in the tasks,
by either having a prefix per line or per block to indicate whose
task's output is displayed, because the output order may not follow the
original sequential ordering:

 |----A----| |--B--| |-C-|

will be scheduled to be all parallel:

process 1: |----A----|
process 2: |--B--|
process 3: |-C-|
output:    |----A----|CB

This happens because C finished before B did, so it will be queued for
output before B.

To detect when a child has finished executing, we check interleaved
with other actions (such as checking the liveliness of children or
starting new processes) whether the stderr pipe still exists. Once a
child closed its stderr stream, we assume it is terminating very soon,
and use `finish_command()` from the single external process execution
interface to collect the exit status.

By maintaining the strong assumption of stderr being open until the
very end of a child process, we can avoid other hassle such as an
implementation using `waitpid(-1)`, which is not available on Windows.
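
A hedged usage sketch of the resulting API (callback and state names
are made up; passing NULL for the failure/finished callbacks is
assumed to fall back to the default handlers):

  static int next_fetch(struct child_process *cp, struct strbuf *err,
                        void *state, void **task_state)
  {
          /* fill cp->args etc. for the next child; return 0 when done */
          return 0;
  }

  /* at the call site: run at most four children at once */
  run_processes_parallel(4, next_fetch, NULL, NULL, &my_state);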

Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-12-16 12:06:08 -08:00
Junio C Hamano
c3c592ef95 Merge branch 'rs/daemon-plug-child-leak'
"git daemon" uses "run_command()" without "finish_command()", so it
needs to release resources itself, which it forgot to do.

* rs/daemon-plug-child-leak:
  daemon: plug memory leak
  run-command: factor out child_process_clear()
2015-11-03 15:13:12 -08:00
René Scharfe
2d71608ec0 run-command: factor out child_process_clear()
Avoid duplication by moving the code to release allocated memory for
arguments and environment to its own function, child_process_clear().
Export it to provide a counterpart to child_process_init().

Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-11-02 15:01:00 -08:00
Junio C Hamano
2b72dbbcf3 Merge branch 'ti/glibc-stdio-mutex-from-signal-handler'
Allocation related functions and stdio are unsafe things to call
inside a signal handler, and indeed killing the pager can cause
glibc to deadlock waiting on allocation mutex as our signal handler
tries to free() some data structures in wait_for_pager().  Reduce
these unsafe calls.

* ti/glibc-stdio-mutex-from-signal-handler:
  pager: don't use unsafe functions in signal handlers
2015-10-07 13:38:16 -07:00
Junio C Hamano
88bad58d38 Merge branch 'jk/async-pkt-line'
The debugging infrastructure for pkt-line based communication has
been improved to mark the side-band communication specifically.

* jk/async-pkt-line:
  pkt-line: show packets in async processes as "sideband"
  run-command: provide in_async query function
2015-10-05 12:30:09 -07:00
Takashi Iwai
507d7804c0 pager: don't use unsafe functions in signal handlers
Since the commit a3da882120 (pager: do wait_for_pager on signal
death), we call wait_for_pager() in the pager's signal handler.  The
recent bug report revealed that this causes a deadlock in glibc at
aborting "git log" [*1*].  When this happens, git process is left
unterminated, and it can't be killed by SIGTERM but only by SIGKILL.

The problem is that the wait_for_pager() function does more than wait
for the pager process's termination; it also does cleanups and prints
errors.  Unfortunately, the functions that may be used in a signal
handler are very limited [*2*].  In particular, malloc(), free() and
their variants can't be used in a signal handler because they take a
mutex internally in glibc.  This was the cause of the deadlock above.
Besides the direct calls of malloc/free, many functions that call
malloc/free can't be used either.  strerror() is one such function.

Also, the usage of fflush() and printf() in a signal handler is bad,
although it seems to work so far.  To be on the safe side, we should
avoid them, too.

This patch tries to reduce the calls of such functions in signal
handlers.  wait_for_signal() takes a flag and avoids the unsafe
calls.  Also, finish_command_in_signal() is introduced for the
same reason.  There the free() calls are removed, and it only waits
for the children without whining about errors.

[*1*] https://bugzilla.opensuse.org/show_bug.cgi?id=942297
[*2*] http://pubs.opengroup.org/onlinepubs/9699919799/functions/V2_chap02.html#tag_15_04_03

Signed-off-by: Takashi Iwai <tiwai@suse.de>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-09-04 14:57:51 -07:00
Jeff King
661a8cf408 run-command: provide in_async query function
It's not easy for arbitrary code to find out whether it is
running in an async process or not. A top-level function
which is fed to start_async() can know (you just pass down
an argument saying "you are async"). But that function may
call other global functions, and we would not want to have
to pass the information all the way through the call stack.

Nor can we simply set a global variable, as those may be
shared between async threads and the main thread (if the
platform supports pthreads). We need pthread tricks _or_ a
global variable, depending on how start_async is
implemented.

The callers don't have enough information to do this right,
so let's provide a simple query function that does.
Fortunately we can reuse the existing infrastructure to make
the pthread case simple (and even simplify die_async() by
using our new function).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-09-01 15:11:53 -07:00
Junio C Hamano
1302c9f514 Merge branch 'jk/long-error-messages'
The codepath to produce error messages had a hard-coded limit to
the size of the message, primarily to avoid memory allocation while
calling die().

* jk/long-error-messages:
  vreportf: avoid intermediate buffer
  vreportf: report to arbitrary filehandles
2015-08-25 14:57:06 -07:00
Jeff King
3b331e9267 vreportf: report to arbitrary filehandles
The vreportf function always goes to stderr, but run-command
wants child errors to go to the parent's original stderr. To
solve this, commit a5487dd duplicates the stderr fd and
installs die and error handlers to direct the output
appropriately (which later turned into the vwritef
function). This has two downsides, though:

  - we make multiple calls to write(), which contradicts the
    "write at once" logic from d048a96 (print
    warning/error/fatal messages in one shot, 2007-11-09).

  - the custom handlers basically duplicate the normal
    handlers.  They're only a few lines of code, but we
    should not have to repeat the magic "exit(128)", for
    example.

We can solve the first by using fdopen() on the duplicated
descriptor. We can't pass this to vreportf, but we could
introduce a new vreportf_to to handle it.

However, to fix the second problem, we instead introduce a
new "set_error_handle" function, which lets the normal
vreportf calls output to a handle besides stderr. Thus we
can get rid of our custom handlers entirely, and just ask
the regular handlers to output to our new descriptor.

And as vwritef has no more callers, it can just go away.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-08-11 14:24:50 -07:00
Jeff King
03f2c7731b find_hook: keep our own static buffer
The find_hook function returns the results of git_path,
which is a static buffer shared by other path-related calls.
Returning such a buffer is slightly dangerous, because it
can be overwritten by seemingly unrelated functions.

Let's at least keep our _own_ static buffer, so you can
only get in trouble by calling find_hook in quick
succession, which is less likely to happen and more obvious
to notice.

While we're at it, let's add some documentation of the
function's limitations.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-08-10 15:37:13 -07:00
Junio C Hamano
68a2e6a2c8 Merge branch 'nd/multiple-work-trees'
A replacement for contrib/workdir/git-new-workdir that does not
rely on symbolic links and makes sharing of objects and refs safer
by making the borrowee and borrowers aware of each other.

* nd/multiple-work-trees: (41 commits)
  prune --worktrees: fix expire vs worktree existence condition
  t1501: fix test with split index
  t2026: fix broken &&-chain
  t2026 needs procondition SANITY
  git-checkout.txt: a note about multiple checkout support for submodules
  checkout: add --ignore-other-wortrees
  checkout: pass whole struct to parse_branchname_arg instead of individual flags
  git-common-dir: make "modules/" per-working-directory directory
  checkout: do not fail if target is an empty directory
  t2025: add a test to make sure grafts is working from a linked checkout
  checkout: don't require a work tree when checking out into a new one
  git_path(): keep "info/sparse-checkout" per work-tree
  count-objects: report unused files in $GIT_DIR/worktrees/...
  gc: support prune --worktrees
  gc: factor out gc.pruneexpire parsing code
  gc: style change -- no SP before closing parenthesis
  checkout: clean up half-prepared directories in --to mode
  checkout: reject if the branch is already checked out elsewhere
  prune: strategies for linked checkouts
  checkout: support checking out into a new working directory
  ...
2015-05-11 14:23:39 -07:00
Junio C Hamano
ea1fd481b4 Merge branch 'jk/run-command-capture'
The run-command interface was easy to abuse by making a pipe for us
to read from the process, waiting for the process to finish and then
attempting to read its output, which is a pattern that led to a
deadlock.  Fix such uses by introducing a helper to do this
correctly (i.e. we need to read first and then wait for the process to
finish) and also add code to prevent such abuse in the run-command
helper.

* jk/run-command-capture:
  run-command: forbid using run_command with piped output
  trailer: use capture_command
  submodule: use capture_command
  wt-status: use capture_command
  run-command: introduce capture_command helper
  wt_status: fix signedness mismatch in strbuf_read call
  wt-status: don't flush before running "submodule status"
2015-03-25 12:54:27 -07:00
Jeff King
c29b3962af run-command: forbid using run_command with piped output
Because run_command both spawns and wait()s for the command
before returning control to the caller, any reads from the
pipes we open must necessarily happen after wait() returns.
This can lead to deadlock, as the child process may block
on writing to us while we are blocked waiting for it to
exit.

Worse, it only happens when the child fills the pipe
buffer, which means that the problem may come and go
depending on the platform and the size of the output
produced by the child.

Let's detect and flag this dangerous construct so that we
can catch potential bugs early in the test suite rather than
having them happen in the field.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-03-22 21:39:22 -07:00
Jeff King
911ec99b68 run-command: introduce capture_command helper
Something as simple as reading the stdout from a command
turns out to be rather hard to do right. Doing:

  cmd.out = -1;
  run_command(&cmd);
  strbuf_read(&buf, cmd.out, 0);

can result in deadlock if the child process produces a large
amount of output. What happens is:

  1. The parent spawns the child with its stdout connected
     to a pipe, of which the parent is the sole reader.

  2. The parent calls wait(), blocking until the child exits.

  3. The child writes to stdout. If it writes more data than
     the OS pipe buffer can hold, the write() call will
     block.

This is a deadlock; the parent is waiting for the child to
exit, and the child is waiting for the parent to call
read().

So we might try instead:

  start_command(&cmd);
  strbuf_read(&buf, cmd.out, 0);
  finish_command(&cmd);

But that is not quite right either. We are examining cmd.out
and running finish_command whether start_command succeeded
or not, which is wrong. Moreover, these snippets do not do
any error handling. If our read() fails, we must make sure
to still call finish_command (to reap the child process).
And both snippets failed to close the cmd.out descriptor,
which they must do (provided start_command succeeded).

Let's introduce a run-command helper that can make this a
bit simpler for callers to get right.
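
A hedged usage sketch of the helper (the command is illustrative; the
final argument is a size hint for the output buffer):

  struct child_process cmd = CHILD_PROCESS_INIT;
  struct strbuf buf = STRBUF_INIT;

  cmd.git_cmd = 1;
  argv_array_pushl(&cmd.args, "status", "--porcelain", NULL);
  if (capture_command(&cmd, &buf, 0))
          die("git status failed");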

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-03-22 21:38:31 -07:00
Kyle J. McKay
1b56cdf901 git-compat-util.h: move SHELL_PATH default into header
If SHELL_PATH is not defined we use "/bin/sh".  However,
run-command.c is not the only file that needs to use
the default value so move it into a common header.
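
In other words, something of this shape moves from run-command.c into
git-compat-util.h (a sketch, not a quote of the patch):

  #ifndef SHELL_PATH
  #define SHELL_PATH "/bin/sh"
  #endif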

Signed-off-by: Kyle J. McKay <mackyle@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-03-10 15:11:24 -07:00
Junio C Hamano
77a801d237 Merge branch 'jc/hook-cleanup'
Remove unused code.

* jc/hook-cleanup:
  run-command.c: retire unused run_hook_with_custom_index()
2014-12-22 12:27:10 -08:00
Nguyễn Thái Ngọc Duy
dcf692625a path.c: make get_pathname() call sites return const char *
Before the previous commit, get_pathname() returned an array of PATH_MAX
length. Even if git_path() and similar functions do not use the
whole array, a git_path() caller can, in theory.

After the commit, get_pathname() may return a buffer that has just
enough room for the returned string, and git_path() callers should never
write beyond that.

Make git_path(), mkpath() and git_path_submodule() return a const
buffer to make sure callers do not write in it at all.

This could have been part of the previous commit, but the "const"
conversion is too much distraction from the core changes in path.c.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-12-01 11:00:10 -08:00
Junio C Hamano
814dd8e078 run-command.c: retire unused run_hook_with_custom_index()
This was originally meant to be used to rewrite run_commit_hook()
that only special cases the GIT_INDEX_FILE environment, but the
run_hook_ve() refactoring done earlier made the implementation of
run_commit_hook() thin and clean enough.

Nobody uses this, so retire it as an unfinished clean-up made
unnecessary.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-12-01 08:39:43 -08:00
René Scharfe
6066a7eac4 run-command: use void to declare that functions take no parameters
Explicitly declare that git_atexit_dispatch() and git_atexit_clear()
take no parameters instead of leaving their parameter list empty and
thus unspecified.
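
The change amounts to the following (declaration details aside):

  /* before: empty, i.e. unspecified, parameter list */
  void git_atexit_dispatch()

  /* after: explicitly takes no parameters */
  void git_atexit_dispatch(void)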

Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-11-10 14:43:19 -08:00
Junio C Hamano
e4da4fbe0e Merge branch 'eb/no-pthreads'
Allow us build with NO_PTHREADS=NoThanks compilation option.

* eb/no-pthreads:
  Handle atexit list internaly for unthreaded builds
  pack-objects: set number of threads before checking and warning
  index-pack: fix compilation with NO_PTHREADS
2014-10-24 14:59:10 -07:00
Etienne Buira
0f4b6db3ba Handle atexit list internaly for unthreaded builds
Wrap atexit() calls on unthreaded builds to handle the callback list
internally.

This is needed because on unthreaded builds, asyncs inherit the parent's
atexit() list, which gets run as soon as the async exit()s (and again at
the end of the async's parent process). That led to removing temporary
files too early.

Also remove a by-atexit-callback guard against this kind of issue in
clone.c, as this patch makes it redundant.

Fixes test 5537 (temporary shallow file vanished before unpack-objects
could open it)

BTW remove an unused variable in shallow.c.

Helped-by: Duy Nguyen <pclouds@gmail.com>
Helped-by: Andreas Schwab <schwab@linux-m68k.org>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Etienne Buira <etienne.buira@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-10-19 15:38:30 -07:00
René Scharfe
19a583dc39 run-command: add env_array, an optional argv_array for env
Similar to args, add a struct argv_array member to struct child_process
that simplifies specifying the environment for children.  It is freed
automatically by finish_command() or if start_command() encounters an
error.
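
A hedged usage sketch (the command and the environment value are made
up):

  struct child_process cmd = CHILD_PROCESS_INIT;

  argv_array_push(&cmd.args, "some-hook");
  argv_array_push(&cmd.env_array, "GIT_DIR=.git");
  run_command(&cmd); /* args and env_array are released for us */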

Suggested-by: Jeff King <peff@peff.net>
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-10-19 15:26:31 -07:00
René Scharfe
1f87293d78 run-command: inline prepare_run_command_v_opt()
Merge prepare_run_command_v_opt() and its only caller.  This removes a
pointer indirection and allows initializing the struct child_process
using CHILD_PROCESS_INIT.

Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-08-20 09:56:12 -07:00
René Scharfe
41e9bad75e run-command: call run_command_v_opt_cd_env() instead of duplicating it
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-08-20 09:55:41 -07:00
René Scharfe
483bbd4e4c run-command: introduce child_process_init()
Add a helper function for initializing those struct child_process
variables for which the macro CHILD_PROCESS_INIT can't be used.

Suggested-by: Jeff King <peff@peff.net>
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-08-20 09:54:58 -07:00
René Scharfe
d318027932 run-command: introduce CHILD_PROCESS_INIT
Most struct child_process variables are cleared using memset first after
declaration.  Provide a macro, CHILD_PROCESS_INIT, that can be used to
initialize them statically instead.  That's shorter, doesn't require a
function call and is slightly more readable (especially given that we
already have STRBUF_INIT, ARGV_ARRAY_INIT etc.).
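
The pattern it replaces, side by side (the variable name is
illustrative):

  /* before */
  struct child_process cmd;
  memset(&cmd, 0, sizeof(cmd));

  /* after */
  struct child_process cmd = CHILD_PROCESS_INIT;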

Helped-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-08-20 09:53:37 -07:00
Junio C Hamano
385e171a5b Merge branch 'sk/mingw-uni-fix-more'
Most of these are battle-tested in msysgit and are needed to
complete what has been merged to 'master' already.

* sk/mingw-uni-fix-more:
  Win32: enable color output in Windows cmd.exe
  Win32: patch Windows environment on startup
  Win32: keep the environment sorted
  Win32: use low-level memory allocation during initialization
  Win32: reduce environment array reallocations
  Win32: don't copy the environment twice when spawning child processes
  Win32: factor out environment block creation
  Win32: unify environment function names
  Win32: unify environment case-sensitivity
  Win32: fix environment memory leaks
  Win32: Unicode environment (incoming)
  Win32: Unicode environment (outgoing)
  Revert "Windows: teach getenv to do a case-sensitive search"
  tests: do not pass iso8859-1 encoded parameter
2014-07-30 14:21:09 -07:00
Karsten Blees
77734da241 Win32: don't copy the environment twice when spawning child processes
When spawning child processes via start_command(), the environment and all
environment entries are copied twice. First by make_augmented_environ /
copy_environ to merge with child_process.env. Then a second time by
make_environment_block to create a sorted environment block string as
required by CreateProcess.

Move the merge logic to make_environment_block so that we only need to copy
the environment once. This changes the semantics of the env parameter: it now
expects a delta (such as child_process.env) rather than a full environment.
This is not a problem as the parameter is only used by start_command()
(all other callers previously passed char **environ, and now pass NULL).

The merge logic no longer xstrdup()s the environment strings, so do_putenv
must not free them. Add a parameter to distinguish this from normal putenv.

Remove the now unused make_augmented_environ / free_environ API.

Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Stepan Kasal <kasal@ucw.cz>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-07-21 09:32:49 -07:00
René Scharfe
d1d094564a run-command: use internal argv_array of struct child_process in run_hook_ve()
Use the existing argv_array member instead of providing our own.  This
way we don't have to initialize or clean it up explicitly.

Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-07-17 15:09:24 -07:00
Jeff King
c460c0ecdc run-command: store an optional argv_array
All child_process structs need to point to an argv. For
flexibility, we do not mandate the use of a dynamic
argv_array. However, because the child_process does not own
the memory, this can make memory management with a
separate argv_array difficult.

For example, if a function calls start_command but not
finish_command, the argv memory must persist. The code needs
to arrange to clean up the argv_array separately after
finish_command runs. As a result, some of our code in this
situation just leaks the memory.

To help such cases, this patch adds a built-in argv_array to
the child_process, which gets cleaned up automatically (both
in finish_command and when start_command fails).  Callers
may use it if they choose, but can continue to use the raw
argv if they wish.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-05-15 09:49:09 -07:00
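
A sketch of the two usage patterns, assuming the run-command and
argv-array APIs of the time (later conveniences such as
CHILD_PROCESS_INIT are used for brevity, and the command run is
illustrative):

    #include "cache.h"
    #include "run-command.h"
    #include "argv-array.h"

    static int old_pattern(void)
    {
        /* Caller-owned argv: must stay alive until finish_command()
         * and be cleaned up separately (easy to leak). */
        struct argv_array av = ARGV_ARRAY_INIT;
        struct child_process cp = CHILD_PROCESS_INIT;
        int ret;

        argv_array_pushl(&av, "rev-list", "--all", NULL);
        cp.argv = av.argv;
        cp.git_cmd = 1;
        ret = run_command(&cp);
        argv_array_clear(&av);
        return ret;
    }

    static int new_pattern(void)
    {
        /* child_process owns the args; they are freed automatically
         * by finish_command() or when start_command() fails. */
        struct child_process cp = CHILD_PROCESS_INIT;

        argv_array_pushl(&cp.args, "rev-list", "--all", NULL);
        cp.git_cmd = 1;
        return run_command(&cp);
    }
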
Benoit Pierre
15048f8a9a commit: fix patch hunk editing with "commit -p -m"
Don't change the git environment: move the GIT_EDITOR=":" override to the
hook command subprocess, as is already done for GIT_INDEX_FILE.

Signed-off-by: Benoit Pierre <benoit.pierre@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-03-18 11:25:12 -07:00
Felipe Contreras
5a50085c6b run-command: trivial style fixes
Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-10-31 13:48:26 -07:00
Junio C Hamano
1d1934caf1 Merge branch 'tr/fd-gotcha-fixes'
In two places we did not correctly check a return value (expected to be
a file descriptor).

* tr/fd-gotcha-fixes:
  run-command: dup_devnull(): guard against syscalls failing
  git_mkstemps: correctly test return value of open()
2013-07-22 11:23:13 -07:00
Thomas Rast
a77f106c78 run-command: dup_devnull(): guard against syscalls failing
dup_devnull() did not check the return values of open() and dup2().
Fix this omission.

Signed-off-by: Thomas Rast <trast@inf.ethz.ch>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-07-12 10:30:09 -07:00
Jonathan Nieder
380395d094 mingw: rename WIN32 cpp macro to GIT_WINDOWS_NATIVE
Throughout git, it is assumed that the WIN32 preprocessor symbol is
defined on native Windows setups (mingw and msvc) and not on Cygwin.
On Cygwin, most of the time git can pretend this is just another Unix
machine, and Windows-specific magic is generally counterproductive.

Unfortunately Cygwin *does* define the WIN32 symbol in some headers.
Best to rely on a new git-specific symbol GIT_WINDOWS_NATIVE instead,
defined as follows:

	#if defined(WIN32) && !defined(__CYGWIN__)
	# define GIT_WINDOWS_NATIVE
	#endif

After this change, it should be possible to drop the
CYGWIN_V15_WIN32API setting without any negative effect.

[rj: %s/WINDOWS_NATIVE/GIT_WINDOWS_NATIVE/g ]

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-05-08 12:14:35 -07:00
Junio C Hamano
9526aa461f Merge branch 'jk/a-thread-only-dies-once'
A regression fix for the logic that detects a die() handler triggering
itself recursively.

* jk/a-thread-only-dies-once:
  run-command: use thread-aware die_is_recursing routine
  usage: allow pluggable die-recursion checks
2013-04-19 13:45:05 -07:00
Jeff King
1ece66bc9e run-command: use thread-aware die_is_recursing routine
If we die from an async thread, we do not actually exit the
program, but just kill the thread. This confuses the static
counter in usage.c's default die_is_recursing function; it
updates the counter once for the thread death, and then when
the main program calls die() itself, it erroneously thinks
we are recursing. The end result is that we print "recursion
detected in die handler" instead of the real error in such a
case (the easiest way to trigger this is having a remote
connection hang up while running a sideband demultiplexer).

This patch solves it by using a per-thread counter when the
async_die function is installed; we detect recursion in each
thread (including the main one), but they do not step on
each other's toes.

Other threaded code does not need to worry about this, as
they do not install specialized die handlers; they just let
a die() from a sub-thread take down the whole program.

Since we are overriding the default recursion-check
function, there is an interesting corner case that is not a
problem, but bears some explanation. Imagine the main thread
calls die(), and then in the die_routine starts an async
call. We will switch to using thread-local storage, which
starts at 0, for the main thread's counter, even though
the original counter was actually at 1. That's OK, though,
for two reasons:

  1. It would miss only the first level of recursion, and
     would still find recursive failures inside the async
     helper.

  2. We do not currently and are not likely to start doing
     anything as heavyweight as starting an async routine
     from within a die routine or helper function.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-16 15:02:48 -07:00
Jeff King
25043d8aea run-command: always set failed_errno in start_command
When we fail to fork, we set the failed_errno variable to
the value of errno so it is not clobbered by later syscalls.
However, we do so in a conditional, and it is hard to see
later under what conditions the variable has a valid value.

Instead of setting it only when fork fails, let's just
always set it after forking. This is more obvious for human
readers (as we are no longer setting it as a side effect of
a strerror call), and it is more obvious to gcc, which no
longer generates a spurious -Wuninitialized warning. It also
happens to match what the WIN32 half of the #ifdef does.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-03-21 14:06:48 -07:00
Junio C Hamano
b5b56ea40c Merge branch 'sb/run-command-fd-error-reporting'
* sb/run-command-fd-error-reporting:
  run-command: be more informative about what failed
2013-02-07 14:41:42 -08:00
Stephen Boyd
939296c4a4 run-command: be more informative about what failed
While debugging an error with verify_signed_buffer() the error
messages from run-command weren't very useful:

 error: cannot create pipe for gpg: Too many open files
 error: could not run gpg.

because they didn't indicate *which* pipe couldn't be created.

Print which pipe failed to be created in the error message so we
can more easily debug similar problems in the future.

For example, the above error now prints:

 error: cannot create standard error pipe for gpg: Too many open files
 error: could not run gpg.

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-01 14:11:50 -08:00
Aaron Schrab
5a7da2dca1 hooks: Add function to check if a hook exists
Create find_hook() function to determine if a given hook exists and is
executable.  If it is, the path to the script will be returned,
otherwise NULL is returned.

This encapsulates the tests that are used to check for the existence of
a hook in one place, making it easier to modify those checks if that is
found to be necessary.  This also makes it simple for places that can
use a hook to check if a hook exists before doing, possibly lengthy,
setup work which would be pointless if no such hook is present.

The returned value is left as a static value from get_pathname() rather
than a duplicate because it is anticipated that the return value will
either be used as a boolean, immediately added to an argv_array list
which would result in it being duplicated at that point, or used to
actually run the command without much intervening work.  Callers which
need to hold onto the returned value for a longer time are expected to
duplicate the return value themselves.

Signed-off-by: Aaron Schrab <aaron@schrab.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-01-14 09:25:40 -08:00
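
A sketch of the calling pattern this enables (the hook name and options
are illustrative, and the child_process conveniences shown were added
later):

    #include "cache.h"
    #include "run-command.h"
    #include "argv-array.h"

    static int maybe_run_post_update_hook(void)
    {
        const char *hook = find_hook("post-update");
        struct child_process cp = CHILD_PROCESS_INIT;

        if (!hook)
            return 0;   /* no executable hook: skip any expensive setup */

        /* The returned path points at a static buffer, so push it into
         * the argv_array (which duplicates it) before anything else. */
        argv_array_push(&cp.args, hook);
        cp.stdout_to_stderr = 1;
        return run_command(&cp);
    }
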
Jeff King
709ca730f8 run-command: encode signal death as a positive integer
When a sub-command dies due to a signal, we encode the
signal number into the numeric exit status as "signal -
128". This is easy to identify (versus a regular positive
error code), and when cast to an unsigned integer (e.g., by
feeding it to exit), matches what a POSIX shell would return
when reporting a signal death in $? or through its own exit
code.

So we have a negative value inside the code, but once it
passes across an exit() barrier, it looks positive (and any
code we receive from a sub-shell will have the positive
form). E.g., death by SIGPIPE (signal 13) will look like
-115 to us inside git, but will end up as 141 when we
call exit() with it. And a program killed by SIGPIPE but run
via the shell will come to us with an exit code of 141.

Unfortunately, this means that when the "use_shell" option
is set, we need to be on the lookout for _both_ forms. We
might or might not have actually invoked the shell (because
we optimize out some useless shell calls). If we didn't invoke
the shell, we will see the sub-process's signal death
directly, and run-command converts it into a negative value.
But if we did invoke the shell, we will see the shell's
128+signal exit status. To be thorough, we would need to
check both, or cast the value to an unsigned char (after
checking that it is not -1, which is a magic error value).

Fortunately, most callsites do not care at all whether the
exit was from a code or from a signal; they merely check for
a non-zero status, and sometimes propagate the error via
exit(). But for the callers that do care, we can make life
slightly easier by just using the consistent positive form.

This actually fixes two minor bugs:

  1. In launch_editor, we check whether the editor died from
     SIGINT or SIGQUIT. But we checked only the negative
     form, meaning that we would fail to notice a signal
     death exit code which was propagated through the shell.

  2. In handle_alias, we assume that a negative return value
     from run_command means that errno tells us something
     interesting (like a fork failure, or ENOENT).
     Otherwise, we simply propagate the exit code. Negative
     signal death codes confuse us, and we print a useless
     "unable to run alias 'foo': Success" message. By
     encoding signal deaths using the positive form, the
     existing code just propagates it as it would a normal
     non-zero exit code.

The downside is that callers of run_command can no longer
differentiate between a signal received directly by the
sub-process, and one propagated. However, no caller
currently cares, and since we already optimize out some
calls to the shell under the hood, that distinction is not
something that should be relied upon by callers.

Fix the same logic in t/test-terminal.perl for consistency [jc:
raised by Jonathan in the discussion].

Signed-off-by: Jeff King <peff@peff.net>
Acked-by: Johannes Sixt <j6t@kdbg.org>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-01-06 11:09:18 -08:00
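
For callers that do care, the resulting convention can be checked along
these lines (a sketch only; the warning messages are illustrative):

    #include "cache.h"
    #include "run-command.h"

    static void report_child_status(struct child_process *cp)
    {
        int code = run_command(cp);

        if (code < 0)
            warning("could not start the child");   /* system call failed */
        else if (code >= 128)
            warning("child died of signal %d", code - 128); /* e.g. 141 -> SIGPIPE */
        else if (code)
            warning("child exited with status %d", code);
    }
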
Jeff King
0398fc3496 fix compilation with NO_PTHREADS
Commit 1327452 cleaned up an unused parameter from
wait_or_whine, but forgot to update a caller that is inside
"#ifdef NO_PTHREADS".

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-01-05 22:47:27 -08:00
Jeff King
a2767c5c91 run-command: do not warn about child death from terminal
SIGINT and SIGQUIT are not generally interesting signals to
the user, since they are typically caused by them hitting "^C"
or otherwise telling their terminal to send the signal.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-12-02 02:06:43 -08:00
Jeff King
13274526c1 run-command: drop silent_exec_failure arg from wait_or_whine
We do not actually use this parameter; instead we complain
from the child itself (for fork/exec) or from start_command
(if we are using spawn on Windows).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-12-02 02:04:50 -08:00
Jeff King
55ff630075 Merge branch 'jk/no-more-pre-exec-callback'
Removes a workaround for buggy versions of less older than version
406.

* jk/no-more-pre-exec-callback:
  pager: drop "wait for output to run less" hack
2012-10-25 06:41:15 -04:00
Junio C Hamano
cc84144d48 Merge branch 'dg/run-command-child-cleanup' into maint
* dg/run-command-child-cleanup:
  run-command.c: fix broken list iteration in clear_child_for_cleanup
2012-09-20 15:55:12 -07:00
Junio C Hamano
5816cc7ca1 Merge branch 'dg/run-command-child-cleanup'
The code to wait for a subprocess and remove it from our internal queue
wasn't quite right.

* dg/run-command-child-cleanup:
  run-command.c: fix broken list iteration in clear_child_for_cleanup
2012-09-14 21:39:37 -07:00
Junio C Hamano
91feb387f2 Merge branch 'jc/maint-sane-execvp-notdir' into maint-1.7.11
* jc/maint-sane-execvp-notdir:
  sane_execvp(): ignore non-directory on $PATH
2012-09-11 11:09:19 -07:00
David Gould
bdee397d7c run-command.c: fix broken list iteration in clear_child_for_cleanup
Iterate through children_to_clean using 'next' fields but with an
extra level of indirection. This allows us to update the chain when
we remove a child and saves us managing several variables around
the loop mechanism.

Signed-off-by: David Gould <david@optimisefitness.com>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-09-11 10:30:31 -07:00
Junio C Hamano
12d858aeb4 Merge branch 'jc/maint-sane-execvp-notdir'
"git foo" errored out with "Not a directory" when the user had a non
directory on $PATH, and worse yet it masked an alias "foo" to run.

* jc/maint-sane-execvp-notdir:
  sane_execvp(): ignore non-directory on $PATH
2012-09-03 15:53:26 -07:00
Junio C Hamano
a78550831a sane_execvp(): ignore non-directory on $PATH
When you have a non-directory on your PATH, a funny thing happens:

	$ PATH=$PATH:/bin/sh git foo
	fatal: cannot exec 'git-foo': Not a directory?

Worse yet, as real commands always take precedence over aliases,
this behaviour interacts rather badly with them:

	$ PATH=$PATH:/bin/sh git -c alias.foo=show git foo -s
	fatal: cannot exec 'git-foo': Not a directory?

This is because an ENOTDIR error from the underlying execvp(2) is
reported back to the caller of our sane_execvp() wrapper as-is.

Translating it to ENOENT, just like the case where we _might_ have
the command in an unreadable directory, fixes it.  Without an alias,
we would get

	git: 'foo' is not a git command. See 'git --help'.

and we use the 'foo' alias when it is available, of course.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-07-31 12:51:30 -07:00
Jeff King
e8320f350f pager: drop "wait for output to run less" hack
Commit 35ce862 (pager: Work around window resizing bug in
'less', 2007-01-24) causes git's pager sub-process to wait
to receive input after forking but before exec-ing the
pager. To handle this, run-command had to grow a "pre-exec
callback" feature. Unfortunately, this feature does not work
at all on Windows (where we do not fork), and interacts
poorly with run-command's parent notification system. Its
use should be discouraged.

The bug in less was fixed in version 406, which was released
in June 2007. It is probably safe at this point to remove
our workaround. That lets us rip out the preexec_cb feature
entirely.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-06-05 09:38:00 -07:00
Junio C Hamano
8cc5223495 Merge branch 'js/spawn-via-shell-path-fix'
Mops up an unfortunate fallout from the bw/spawn-via-shell-path topic.

By Johannes Sixt
* js/spawn-via-shell-path-fix:
  Do not use SHELL_PATH from build system in prepare_shell_cmd on Windows
2012-04-20 15:51:18 -07:00
Junio C Hamano
bd6f71d1fc Merge branch 'jk/run-command-eacces'
When PATH contains an unreadable directory, alias expansion code did not
kick in, and failed with an error that said "git-subcmd" was not found.

By Jeff King (1) and Ramsay Jones (1)
* jk/run-command-eacces:
  run-command: treat inaccessible directories as ENOENT
  compat/mingw.[ch]: Change return type of exec functions to int
2012-04-20 15:50:03 -07:00
Johannes Sixt
776297548e Do not use SHELL_PATH from build system in prepare_shell_cmd on Windows
The recent change to use SHELL_PATH instead of "sh" to spawn shell commands
is not suited for Windows:

- The default setting, "/bin/sh", does not work when git has to run the
  shell because it is a POSIX style path, but not a proper Windows style
  path.

- If it worked, it would hard-code a position in the file system where
  the shell is expected, making git (more precisely, the POSIX toolset that
  is needed alongside git) non-relocatable. But we cannot sacrifice
  relocatability on Windows.

- Apart from that, even though the Makefile leaves SHELL_PATH set to
  "/bin/sh" for the Windows builds, the build system passes a mangled path
  to the compiler, and something like "D:/Src/msysgit/bin/sh" is used,
  which is doubly bad because it points to where /bin/sh resolves to on
  the system where git was built.

- Finally, the system's CreateProcess() function that is used under
  mingw.c's hood does not work with forward slashes and cannot find the
  shell.

Undo the earlier change on Windows.

Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-04-17 08:51:54 -07:00
Jeff King
38f865c27d run-command: treat inaccessible directories as ENOENT
When execvp reports EACCES, it can be one of two things:

  1. We found a file to execute, but did not have
     permissions to do so.

  2. We did not have permissions to look in some directory
     in the $PATH.

In the former case, we want to consider this a
permissions problem and report it to the user as such (since
getting this for something like "git foo" is likely a
configuration error).

In the latter case, there is a good chance that the
inaccessible directory does not contain anything of
interest. Reporting "permission denied" is confusing to the
user (and prevents our usual "did you mean...?" lookup). It
also prevents git from trying alias lookup, since we do so
only when an external command does not exist (not when it
exists but has an error).

This patch detects EACCES from execvp, checks whether we are
in case (2), and if so converts errno to ENOENT. This
behavior matches that of "bash" (but not of simpler shells
that use execvp more directly, like "dash").

Test stolen from Junio.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-04-05 16:24:13 -07:00
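
A much simplified sketch of the idea (the real implementation in
run-command.c inspects $PATH more carefully; the helper name and logic
here are illustrative only):

    #include <errno.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /* After execvp() fails with EACCES, see whether some $PATH component
     * is a directory we may not even search; if so, the caller rewrites
     * errno to ENOENT and reports "command not found" instead. */
    static int path_has_unsearchable_dir(void)
    {
        const char *env = getenv("PATH");
        char *paths = env ? strdup(env) : NULL;
        char *p, *save = NULL;
        int found = 0;

        for (p = paths ? strtok_r(paths, ":", &save) : NULL;
             p; p = strtok_r(NULL, ":", &save))
            if (access(p, X_OK) < 0 && errno == EACCES)
                found = 1;

        free(paths);
        return found;
    }

A caller would then do something like "if (errno == EACCES &&
path_has_unsearchable_dir()) errno = ENOENT;" before reporting the
failure.
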
Ben Walton
b3e34dddc0 Use SHELL_PATH from build system in run_command.c:prepare_shell_cmd
During the testing of the 1.7.10 rc series on Solaris for OpenCSW, it
was discovered that t7006-pager was failing due to finding a bad "sh"
in PATH after a call to execvp("sh", ...).  This call was setup by
run_command.c:prepare_shell_cmd.

The PATH in use at the time saw /opt/csw/bin given precedence to
traditional Solaris paths such as /usr/bin and /usr/xpg4/bin.  A
package named schilyutils (Joerg Schilling's utilities) was installed
on the build system and it delivered a modified version of the
traditional Solaris /usr/bin/sh as /opt/csw/bin/sh.  This version of
sh suffers from many of the same problems as /usr/bin/sh.

The command-specific pager test failed due to the broken "sh" handling
^ as a pipe character.  It tried to fork two processes when it
encountered "sed s/^/foo:/" as the pager command.  This problem was
entirely dependent on the PATH of the user at runtime.

Possible fixes for this issue are:

1. Use the standard system() or popen() which both launch a POSIX
   shell on Solaris as long as _POSIX_SOURCE is defined.

2. The git wrapper could prepend SANE_TOOL_PATH to PATH thus forcing
   all unqualified commands run to use the known good tools on the
   system.

3. The run_command.c:prepare_shell_cmd() could use the same
   SHELL_PATH that is in the #! line of all scripts and not rely
   on PATH to find the sh to run.

Option 1 would preclude opening a bidirectional pipe to a filter
script and would also break git for Windows as cmd.exe is spawned from
system() (cf. v1.7.5-rc0~144^2, "alias: use run_command api to execute
aliases", 2011-01-07).

Option 2 is not friendly to users as it would negate their ability to
use tools of their choice in many cases.  Alternately, injecting
SANE_TOOL_PATH such that it takes precedence over /bin and /usr/bin
(and anything with lower precedence than those paths) as
git-sh-setup.sh does would not solve the problem either as the user
environment could still allow a bad sh to be found.  (Many OpenCSW
users will have /opt/csw/bin leading their PATH and some subset would
have schilyutils installed.)

Option 3 allows us to use a known good shell while still honouring the
users' PATH for the utilities being run.  Thus, it solves the problem
while not negatively impacting either users or git's ability to run
external commands in convenient ways.  Essentially, the shell is a
special case of tool that should not rely on SANE_TOOL_PATH and must
be called explicitly.

With this patch applied, any code path leading to
run_command.c:prepare_shell_cmd can count on using the same sane shell
that all shell scripts in the git suite use.  Both the build system
and run_command.c will default this shell to /bin/sh unless
overridden.

Signed-off-by: Ben Walton <bwalton@artsci.utoronto.ca>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-04-03 17:24:20 -07:00
Clemens Buchacher
10c6cddd92 dashed externals: kill children on exit
Several git commands are so-called dashed externals, that is commands
executed as a child process of the git wrapper command. If the git
wrapper is killed by a signal, the child process will continue to run.
This is different from internal commands, which always die with the git
wrapper command.

Enable the recently introduced cleanup mechanism for child processes in
order to make dashed externals act more in line with internal commands.

Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-01-08 15:07:20 -08:00
Jeff King
afe19ff7b5 run-command: optionally kill children on exit
When we spawn a helper process, it should generally be done
and finish_command called before we exit. However, if we
exit abnormally due to an early return or a signal, the
helper may continue to run in our absence.

In the best case, this may simply be wasted CPU cycles or a
few stray messages on a terminal. But it could also mean a
process that the user thought was aborted continues to run
to completion (e.g., a push's pack-objects helper will
complete the push, even though you killed the push process).

This patch provides infrastructure for run-command to keep
track of PIDs to be killed, and clean them on signal
reception or program exit, just as we do with tempfiles. PIDs can
be added in two ways:

  1. If NO_PTHREADS is defined, async helper processes are
     automatically marked. By definition this code must be
     ready to die when the parent dies, since it may be
     implemented as a thread of the parent process.

  2. If the run-command caller specifies the "clean_on_exit"
     option. This is not the default, as there are cases
     where it is OK for the child to outlive us (e.g., when
     spawning a pager).

PIDs are cleared from the kill-list automatically during
wait_or_whine, which is called from finish_command and
finish_async.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-01-08 15:06:35 -08:00
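
Opting in is a one-field change at the call site; a sketch (the command
and file descriptors shown are illustrative):

    #include "cache.h"
    #include "run-command.h"

    static const char *pack_objects_argv[] = {
        "pack-objects", "--stdout", NULL
    };

    static int start_push_helper(struct child_process *cp)
    {
        cp->argv = pack_objects_argv;
        cp->git_cmd = 1;
        cp->in = -1;            /* writable pipe to the child */
        cp->out = -1;           /* readable pipe from the child */
        cp->clean_on_exit = 1;  /* kill the child if we die or are signalled */
        return start_command(cp);
    }
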
Junio C Hamano
7a95d1be03 Merge branch 'jk/argv-array'
* jk/argv-array:
  run_hook: use argv_array API
  checkout: use argv_array API
  bisect: use argv_array API
  quote: provide sq_dequote_to_argv_array
  refactor argv_array into generic code
  quote.h: fix bogus comment
  add sha1_array API docs
2011-10-05 12:36:24 -07:00
Jeff King
5d40a17985 run_hook: use argv_array API
This was a pretty straightforward use, so it really doesn't
save that many lines. Still, perhaps it's a little bit more
readable.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-09-14 11:57:33 -07:00
Clemens Buchacher
fc1b56f054 notice error exit from pager
If the pager fails to run, git produces no output, e.g.:

 $ GIT_PAGER=not-a-command git log

The error reporting fails for two reasons:

 (1) start_command: There is a mechanism that detects errors during
     execvp introduced in 2b541bf8 (start_command: detect execvp
     failures early). The child writes one byte to a pipe only if
     execvp fails.  The parent waits for either EOF, when the
     successful execvp automatically closes the pipe (see
     FD_CLOEXEC in fcntl(2)), or it reads a single byte, in which
     case it knows that the execvp failed. This mechanism is
     incompatible with the workaround introduced in 35ce8622
     (pager: Work around window resizing bug in 'less'), which
     waits for input from the parent before the exec. Since both
     the parent and the child are waiting for input from each
     other, that would result in a deadlock. In order to avoid
     that, the mechanism is disabled by closing the child_notifier
     file descriptor.

 (2) finish_command: The parent correctly detects the 127 exit
     status from the child, but the error output goes nowhere,
     since by that time it is already being redirected to the
     child.

No simple solution for (1) comes to mind.

Number (2) can be solved by not sending error output to the pager.
Not redirecting error output to the pager can result in the pager
overwriting error output with standard output, however.

Since there is no reliable way to handle error reporting in the
parent, produce the output in the child instead.

Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-08-01 16:21:55 -07:00
Clemens Buchacher
3bc4181fde error_routine: use parent's stderr if exec fails
The new process's error output may be redirected elsewhere, but if
the exec fails, output should still go to the parent's stderr. This
has already been done for the die_routine. Do the same for
error_routine.

Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-07-31 18:27:07 -07:00
Jonathan Nieder
a111eb7808 run-command: handle short writes and EINTR in die_child
If start_command fails after forking and before exec finishes, there
is not much use in noticing an I/O error on top of that.
finish_command will notice that the child exited with nonzero status
anyway.  So as noted in v1.7.0.3~20^2 (run-command.c: fix build
warnings on Ubuntu, 2010-01-30) and v1.7.5-rc0~29^2 (2011-03-16), it
is safe to ignore errors from write in this codepath.

Even so, the result from write contains useful information: it tells
us if the write was cancelled by a signal (EINTR) or was only
partially completed (e.g., when writing to an almost-full pipe).
Let's use write_in_full to loop until the desired number of bytes have
been written (still ignoring errors if that fails).

As a happy side effect, the assignment to a dummy variable to appease
gcc -D_FORTIFY_SOURCE is no longer needed.  xwrite and write_in_full
check the return value from write(2).

Noticed with gcc -Wunused-but-set-variable.

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-04-20 10:09:26 -07:00
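
The looping behaviour being relied on, as a standalone sketch (git's
real write_in_full() lives in wrapper.c; this only shows the pattern):

    #include <errno.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Write all 'count' bytes, retrying after EINTR and short writes;
     * return count on success, -1 on a real error. */
    static ssize_t sketch_write_in_full(int fd, const void *buf, size_t count)
    {
        const char *p = buf;
        size_t remaining = count;
        ssize_t written;

        while (remaining) {
            written = write(fd, p, remaining);
            if (written < 0 && errno == EINTR)
                continue;
            if (written <= 0)
                return -1;
            p += written;
            remaining -= written;
        }
        return count;
    }
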
Junio C Hamano
60e199c4d5 Revert "run-command: prettify -D_FORTIFY_SOURCE workaround"
This reverts commit ebec842773, which
somehow mistakenly thought that any non-zero return from write(2) is
an error.
2011-04-18 14:14:53 -07:00
Jonathan Nieder
ebec842773 run-command: prettify -D_FORTIFY_SOURCE workaround
Current gcc + glibc with -D_FORTIFY_SOURCE try very aggressively to
protect against a programming style which uses write(...) without
checking the return value for errors.  Even the usual hint of casting
to (void) does not suppress the warning.

Sometimes when there is an output error, especially right before exit,
there really is nothing to be done.  The obvious solution, adopted in
v1.7.0.3~20^2 (run-command.c: fix build warnings on Ubuntu,
2010-01-30), is to save the return value to a dummy variable:

	ssize_t dummy;
	dummy = write(...);

But that (1) is ugly and (2) triggers -Wunused-but-set-variable
warnings with gcc-4.6 -Wall, so we are not much better off than when
we started.

Instead, use an "if" statement with an empty body to make the intent
clear.

	if (write(...))
		; /* yes, yes, there was an error. */

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Improved-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-03-17 15:32:43 -07:00
Johannes Sixt
13af8cbd6a start_command: flush buffers in the WIN32 code path as well
The POSIX code path did The Right Thing already, but we have to do the same
on Windows.

This bug caused failures in t5526-fetch-submodules, where the output of
'git fetch --recurse-submodules' was in the wrong order.

Debugged-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-02-07 14:18:56 -08:00
Junio C Hamano
762655010d Merge branch 'js/async-thread'
* js/async-thread:
  fast-import: die_nicely() back to vsnprintf (reverts part of ebaa79f)
  Enable threaded async procedures whenever pthreads is available
  Dying in an async procedure should only exit the thread, not the process.
  Reimplement async procedures using pthreads
  Windows: more pthreads functions
  Fix signature of fcntl() compatibility dummy
  Make report() from usage.c public as vreportf() and use it.
  Modernize t5530-upload-pack-error.

Conflicts:
	http-backend.c
2010-06-21 06:02:45 -07:00
bert Dvornik
fc012c2810 start_command: close cmd->err descriptor when fork/spawn fails
Fix the problem where the cmd->err passed into start_command wasn't
being properly closed when certain types of errors occur.  (Compare
the affected code with the clean shutdown code later in the function.)

On Windows, this problem would be triggered if mingw_spawnvpe()
failed, which would happen if the command to be executed was malformed
(e.g. a text file that didn't start with a #! line).  If cmd->err was
a pipe, the failure to close it could result in a hang while the other
side was waiting (forever) for either input or pipe close, e.g. while
trying to shove the output into the side band.  On msysGit, this
problem was causing a hang in t5516-fetch-push.

[J6t: With a slight adjustment of the test case, the hang is also
observed on Linux.]

Signed-off-by: bert Dvornik <dvornik+git@gmail.com>
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-05-20 16:11:29 -07:00
Junio C Hamano
4553d58f37 Merge branch 'jl/maint-submodule-gitfile-awareness'
* jl/maint-submodule-gitfile-awareness:
  Windows: start_command: Support non-NULL dir in struct child_process
2010-04-11 13:54:28 -07:00
Johannes Sixt
f9a2743c35 Windows: start_command: Support non-NULL dir in struct child_process
A caller of start_command can set the member 'dir' to a directory to
request that the child process starts with that directory as CWD. The first
user of this feature was added recently in eee49b6 (Teach diff --submodule
and status to handle .git files in submodules).

On Windows, we had been lazy and had not implemented support for this
feature yet. This fixes the shortcoming.

Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-04-11 13:48:46 -07:00
Johannes Sixt
f6b6098316 Enable threaded async procedures whenever pthreads is available
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-03-10 14:26:54 -08:00
Junio C Hamano
b7e7f6fb00 Merge branch 'mw/maint-gcc-warns-unused-write'
* mw/maint-gcc-warns-unused-write:
  run-command.c: fix build warnings on Ubuntu
2010-03-07 12:47:18 -08:00
Johannes Sixt
0ea1c89ba6 Dying in an async procedure should only exit the thread, not the process.
Async procedures are intended as helpers that perform a very restricted
task, and the caller usually has to manage them in a larger context.
Conceptually, the async procedure is not concerned with the "bigger
picture" in whose context it is run. When it dies, it is not supposed
to destroy this "bigger picture", but rather only its own limit view
of the world. On POSIX, the async procedure is run in its own process,
and exiting this process naturally had only these limited effects.

On Windows (or when ASYNC_AS_THREAD is set), calling die() exited the
whole process, destroying the caller (the "big picture") as well.
This fixes it to exit only the thread.

Without ASYNC_AS_THREAD, one particular effect of exiting the async
procedure process is that it automatically closes file descriptors, most
notably the writable end of the pipe that the async procedure writes to.

The async API already requires that the async procedure closes the pipe
ends when it exits normally. But for calls to die() no requirements are
imposed. In the non-threaded case the pipe ends are closed implicitly
by the exiting process, but in the threaded case, the die routine must
take care of closing them.

Now t5530-upload-pack-error.sh passes on Windows.

Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-03-07 00:37:36 -08:00
Johannes Sixt
200a76b74d Reimplement async procedures using pthreads
On Windows, async procedures have always been run in threads, and the
implementation used Windows specific APIs. Rewrite the code to use pthreads.

A new configuration option is introduced so that the threaded implementation
can also be used on POSIX systems. Since this option is intended only as
playground on POSIX, but is mandatory on Windows, the option is not
documented.

One detail is that on POSIX it is necessary to set FD_CLOEXEC on the pipe
handles. On Windows, this is not needed because pipe handles are not
inherited to child processes, and the new calls to set_cloexec() are
effectively no-ops.

Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-03-07 00:37:36 -08:00
Michael Wookey
90ff12a860 run-command.c: fix build warnings on Ubuntu
Building git on Ubuntu 9.10 warns that the return value of write(2)
isn't checked. These warnings were introduced in commits:

  2b541bf8 ("start_command: detect execvp failures early")
  a5487ddf ("start_command: report child process setup errors to the
parent's stderr")

GCC details:

  $ gcc --version
  gcc (Ubuntu 4.4.1-4ubuntu9) 4.4.1

Silence the warnings by reading (but not making use of) the return value
of write(2).

Signed-off-by: Michael Wookey <michaelwookey@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-03-03 22:47:24 -08:00
Junio C Hamano
76d44c8cfd Merge branch 'sp/maint-push-sideband' into sp/push-sideband
* sp/maint-push-sideband:
  receive-pack: Send hook output over side band #2
  receive-pack: Wrap status reports inside side-band-64k
  receive-pack: Refactor how capabilities are shown to the client
  send-pack: demultiplex a sideband stream with status data
  run-command: support custom fd-set in async
  run-command: Allow stderr to be a caller supplied pipe
  Update git fsck --full short description to mention packs

Conflicts:
	run-command.c
2010-02-05 21:08:53 -08:00
Erik Faye-Lund
ae6a5609c0 run-command: support custom fd-set in async
This patch adds the possibility to supply a set of non-0 file
descriptors for async process communication instead of the
default-created pipe.

Additionally, we now support bi-directional communication with the
async procedure, by giving the async function both read and write
file descriptors.

To retain compatibility and similar "API feel" with start_command,
we require start_async callers to set .out = -1 to get a readable
file descriptor.  If either of .in or .out is 0, we supply no file
descriptor to the async process.

[sp: Note: Erik started this patch, and a huge bulk of it is
     his work.  All bugs were introduced later by Shawn.]

Signed-off-by: Erik Faye-Lund <kusmabite@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-02-05 20:57:22 -08:00
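
A sketch of the resulting calling convention (assuming the struct async
fields described above; the demultiplexer body is omitted and the names
are illustrative):

    #include "cache.h"
    #include "run-command.h"

    static int demux(int in, int out, void *data)
    {
        /* read the muxed stream from 'in', write the payload to 'out' */
        return 0;
    }

    static int start_sideband_demux(int remote_fd)
    {
        struct async a;

        memset(&a, 0, sizeof(a));
        a.proc = demux;
        a.in = remote_fd;   /* caller-supplied readable end */
        a.out = -1;         /* ask start_async() to create a pipe for us */
        if (start_async(&a))
            return -1;
        /* read demuxed data from a.out, then call finish_async(&a) */
        return a.out;
    }
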
Shawn O. Pearce
4f41b61148 run-command: Allow stderr to be a caller supplied pipe
Like .out, .err may now be set to a file descriptor > 0, which
is a writable pipe/socket/file that the child's stderr will be
redirected into.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-02-05 20:57:16 -08:00
Junio C Hamano
030b1a77f7 Merge branch 'js/exec-error-report'
* js/exec-error-report:
  Improve error message when a transport helper was not found
  start_command: detect execvp failures early
  run-command: move wait_or_whine earlier
  start_command: report child process setup errors to the parent's stderr

Conflicts:
	Makefile
2010-01-20 14:44:12 -08:00
Junio C Hamano
3cd02df46a Merge branch 'js/windows'
* js/windows:
  Do not use date.c:tm_to_time_t() from compat/mingw.c
  MSVC: Windows-native implementation for subset of Pthreads API
  MSVC: Fix an "incompatible pointer types" compiler warning
  Windows: avoid the "dup dance" when spawning a child process
  Windows: simplify the pipe(2) implementation
  Windows: boost startup by avoiding a static dependency on shell32.dll
  Windows: disable Python
2010-01-18 18:12:49 -08:00
Johannes Sixt
75301f9015 Windows: avoid the "dup dance" when spawning a child process
When stdin, stdout, or stderr must be redirected for a child process that
on Windows is spawned using one of the spawn() functions of Microsoft's
C runtime, then there is no choice other than to

1. make a backup copy of fd 0,1,2 with dup
2. dup2 the redirection source fd into 0,1,2
3. spawn
4. dup2 the backup back into 0,1,2
5. close the backup copy and the redirection source

We used this idiom as well -- but we are not using the spawn() functions
anymore!

Instead, we have our own implementation. We had hardcoded that stdin,
stdout, and stderr of the child process were inherited from the parent's
fds 0, 1, and 2. But we can actually specify any fd.

With this patch, the fds to inherit are passed from start_command()'s
WIN32 section to our spawn implementation. This way, we can avoid the
backup copies of the fds.

The backup copies were a bug waiting to surface: The OS handles underlying
the dup()ed fds were inherited by the child process (but were not
associated with a file descriptor in the child). Consequently, the file or
pipe represented by the OS handle remained open even after the backup copy
was closed in the parent process until the child exited.

Since our implementation of pipe() creates non-inheritable OS handles, we
still dup() file descriptors in start_command() because dup() happens to
create inheritable duplicates. (A nice side effect is that the fd cleanup
in start_command is the same for Windows and Unix and remains unchanged.)

Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-01-16 16:43:53 -08:00
Johannes Sixt
2b541bf8be start_command: detect execvp failures early
Previously, failures during execvp could be detected only by
finish_command. However, in some situations it is beneficial for the
parent process to know earlier that the child process will not run.

The idea to use a pipe to signal failures to the parent process and
the test case were lifted from patches by Ilari Liusvaara.

Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-01-10 10:15:03 -08:00
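
The mechanism, as a plain-POSIX sketch (the real code is woven into
start_command(); names and error handling here are illustrative):

    #include <errno.h>
    #include <fcntl.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* The write end of the pipe is close-on-exec in the child: a
     * successful execvp() closes it and the parent reads EOF, while a
     * failed execvp() writes one byte the parent sees immediately. */
    static pid_t spawn_or_fail_early(char *const argv[])
    {
        int notify[2];
        pid_t pid;
        char ch = 1;

        if (pipe(notify) < 0)
            return -1;
        pid = fork();
        if (pid < 0)
            return -1;
        if (!pid) {
            close(notify[0]);
            fcntl(notify[1], F_SETFD, FD_CLOEXEC);
            execvp(argv[0], argv);
            if (write(notify[1], &ch, 1))
                ; /* only reached if exec failed; nothing more to do */
            _exit(127);
        }
        close(notify[1]);
        if (read(notify[0], &ch, 1) == 1) {
            close(notify[0]);
            return -1;          /* child could not exec */
        }
        close(notify[0]);
        return pid;
    }
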
Johannes Sixt
ab0b41daf6 run-command: move wait_or_whine earlier
We want to reuse it from start_command.

Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-01-10 10:05:52 -08:00
Johannes Sixt
a5487ddf0f start_command: report child process setup errors to the parent's stderr
When the child process's environment is set up in start_command(), error
messages were written to wherever the parent redirected the child's stderr
channel. However, even if the parent redirected the child's stderr, errors
during this setup process, including the exec itself, are usually an
indication of a problem in the parent's environment. Therefore, the error
messages should go to the parent's stderr.

Redirection of the child's error messages is usually only used to redirect
hook error messages during client-server exchanges. In these cases, hook
setup errors could be regarded as information leak.

This patch makes a copy of stderr if necessary and uses a special
die routine that is used for all die() calls in the child that sends the
errors messages to the parent's stderr.

The trace call that reported a failed execvp is removed (because it writes
to stderr) and replaced by die_errno() with special treatment of ENOENT.
The improvement in the error message can be seen with this sequence:

   mkdir .git/hooks/pre-commit
   git commit

Previously, the error message was

   error: cannot run .git/hooks/pre-commit: No such file or directory

and now it is

   fatal: cannot exec '.git/hooks/pre-commit': Permission denied

Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-01-10 10:05:34 -08:00
Jeff King
f445644fd2 run-command: optimize out useless shell calls
If there are no metacharacters in the program to be run, we
can just skip running the shell entirely and directly exec
the program.

The metacharacter test is pulled verbatim from
launch_editor, which already implements this optimization.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-01-05 23:41:50 -08:00
Jeff King
8dba1e634a run-command: add "use shell" option
Many callsites run "sh -c $CMD" to run $CMD. We can make it
a little simpler for them by factoring out the munging of
argv.

For simple cases with no arguments, this doesn't help much, but:

  1. For cases with arguments, we save the caller from
     having to build the appropriate shell snippet.

  2. We can later optimize to avoid the shell when
     there are no metacharacters in the program.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-01-01 17:53:46 -08:00
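
A sketch of a converted call site (the command text and surrounding
function are illustrative):

    #include "cache.h"
    #include "run-command.h"

    static int run_filter(const char *cmd)
    {
        struct child_process cp;
        const char *argv[] = { cmd, NULL };

        memset(&cp, 0, sizeof(cp));
        cp.argv = argv;
        /* run-command builds the "sh -c ..." invocation itself; the
         * optimization in the entry just above then skips the shell
         * entirely when 'cmd' contains no shell metacharacters. */
        cp.use_shell = 1;
        return run_command(&cp);
    }
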
Frank Li
71064e3f86 Test for WIN32 instead of __MINGW32__
The code which is conditional on MinGW32 is actually conditional on Windows.
Use the WIN32 symbol, which is defined by the MINGW32 and MSVC environments,
but not by Cygwin.

Define SNPRINTF_SIZE_CORR=1 for MSVC too, as its vsnprintf function does
not add NUL at the end of the buffer if the result fits the buffer size
exactly.

Signed-off-by: Frank Li <lznuaa@gmail.com>
Signed-off-by: Marius Storm-Olsen <mstormo@gmail.com>
Acked-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-09-18 20:00:42 -07:00
Frank Li
d7fa500fb5 Fix __stdcall placement and function prototype
MSVC requires __stdcall to be between the function's return value and the
function name, and that the function pointer type is in the form of

    return_type (WINAPI *function_name)(arguments...)

Signed-off-by: Frank Li <lznuaa@gmail.com>
Signed-off-by: Marius Storm-Olsen <mstormo@gmail.com>
Acked-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-09-18 20:00:42 -07:00
Frank Li
0d30ad71fa Avoid declaration after statement
MSVC does not understand this C99 style.

Signed-off-by: Frank Li <lznuaa@gmail.com>
Signed-off-by: Marius Storm-Olsen <mstormo@gmail.com>
Acked-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-09-18 20:00:41 -07:00
Johannes Sixt
2affea4125 start_command: do not clobber cmd->env on Windows code path
Previously, it would not be possible to call start_command twice for the
same struct child_process that has env set.

The fix is achieved by moving the loop that modifies the environment block
into a helper function. This also allows us to make two other helper
functions static.

Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-09-11 16:33:54 -07:00
Junio C Hamano
08ac69685a Merge branch 'js/run-command-updates'
* js/run-command-updates:
  api-run-command.txt: describe error behavior of run_command functions
  run-command.c: squelch a "use before assignment" warning
  receive-pack: remove unnecessary run_status report
  run_command: report failure to execute the program, but optionally don't
  run_command: encode deadly signal number in the return value
  run_command: report system call errors instead of returning error codes
  run_command: return exit code as positive value
  MinGW: simplify waitpid() emulation macros
2009-08-10 22:14:57 -07:00
David Soria Parra
5a7a3671b7 run-command.c: squelch a "use before assignment" warning
i686-apple-darwin9-gcc-4.0.1 (GCC) 4.0.1 (Apple Inc. build 5490) compiler
(and probably others) mistakenly thinks the variable failed_errno is used
before being assigned.  Work around it by giving it a fake initialization.

Signed-off-by: David Soria Parra <dsp@php.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-08-04 10:04:29 -07:00
Johannes Sixt
c024beb56d run_command: report failure to execute the program, but optionally don't
In the case where a program was not found, it was still the task of the
caller to report an error to the user. Usually, this is an interesting case
but only a few callers actually reported a specific error (though many call
sites report a generic error message regardless of the cause).

With this change the error is reported by run_command, but since there is
one call site in git.c that does not want that, an option is added to
struct child_process, which is used to turn the error off.

Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-07-06 02:45:50 -07:00
Johannes Sixt
b99d5f40d6 run_command: encode deadly signal number in the return value
We now write the signal number in the error message if the program
was terminated by a signal. The negative return value is constructed such that
after truncation to 8 bits it looks like a POSIX shell's $?:

   $ echo 0000 | { git upload-pack .; echo $? >&2; } | :
   error: git-upload-pack died of signal 13
   141

Previously, the exit code was 255 instead of 141.

Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-07-06 02:44:56 -07:00
Johannes Sixt
0ac77ec315 run_command: report system call errors instead of returning error codes
The motivation for this change is that system call failures are serious
errors that should be reported to the user, but only a few callers took the
trouble to decode the error codes that the functions returned into error
messages.

If they did at all, often only an unspecific error message was given. A prominent
example is this:

   $ git upload-pack . | :
   fatal: unable to run 'git-upload-pack'

In this example, git-upload-pack, the external command invoked through the
git wrapper, dies due to SIGPIPE, but the git wrapper does not bother to
report the real cause. In fact, this very error message is copied to the
syslog if git-daemon's client aborts the connection early.

With this change, system call failures are reported immediately after the
failure and only a generic failure code is returned to the caller. In the
above example the error is now to the point:

   $ git upload-pack . | :
   error: git-upload-pack died of signal

Note that there is no error report if the invoked program terminated with
a non-zero exit code, because it is reasonable to expect that the invoked
program has already reported an error. (But many run_command call sites
nevertheless write a generic error message.)

There was one special return code that was used to identify the case where
run_command failed because the requested program could not be exec'd. This
special case is now treated like a system call failure with errno set to
ENOENT. No error is reported in this case, because the call site in git.c
expects this as a normal result. Therefore, the callers that carefully
decoded the return value still check for this condition.

Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-07-06 02:44:49 -07:00
Johannes Sixt
5709e0363a run_command: return exit code as positive value
As a general guideline, functions in git's code return zero to indicate
success and negative values to indicate failure. The run_command family of
functions followed this guideline. But there are actually two different
kinds of failure:

- failures of system calls;

- non-zero exit code of the program that was run.

Usually, a non-zero exit code of the program is a failure and means a
failure to the caller. Except that sometimes it does not. For example, the
exit code of merge programs (e.g. external merge drivers) conveys
information about how the merge failed, and not all exit codes are
actually failures.

Furthermore, the return value of run_command is sometimes used as exit
code by the caller.

This change arranges that the exit code of the program is returned as a
positive value, which can now be regarded as the "result" of the function.
System call failures continue to be reported as negative values.

Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-07-05 12:16:27 -07:00
Thomas Rast
d824cbba02 Convert existing die(..., strerror(errno)) to die_errno()
Change calls to die(..., strerror(errno)) to use the new die_errno().

In the process, also make slight style adjustments: at least state
_something_ about the function that failed (instead of just printing
the pathname), and put paths in single quotes.

Signed-off-by: Thomas Rast <trast@student.ethz.ch>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-06-27 11:14:53 -07:00
Felipe Contreras
4b25d091ba Fix a bunch of pointer declarations (codestyle)
Essentially: s/type* /type */ as per the coding guidelines.

Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-05-01 15:17:31 -07:00
Junio C Hamano
1487eb68f7 Merge branch 'jk/maint-cleanup-after-exec-failure'
* jk/maint-cleanup-after-exec-failure:
  git: use run_command() to execute dashed externals
  run_command(): help callers distinguish errors
  run_command(): handle missing command errors more gracefully
  git: s/run_command/run_builtin/
2009-02-03 00:26:12 -08:00
Jeff King
45c0961c87 run_command(): handle missing command errors more gracefully
When run_command() was asked to run a non-existent command, its behavior
varied depending on the platform:

  - on POSIX systems, we would fork, and then after the execvp call
    failed, we could call die(), which prints a message to stderr and
    exits with code 128.

  - on Windows, we do a PATH lookup, realize the program isn't there, and
    then return ERR_RUN_COMMAND_FORK

The goal of this patch is to make it clear to callers that the specific
error was a missing command. To do this, we will return the error code
ERR_RUN_COMMAND_EXEC, which is already defined in run-command.h, checked
for in several places, but never actually gets set.

The new behavior is:

  - on POSIX systems, we exit the forked process with code 127 (the same
    as the shell uses to report missing commands). The parent process
    recognizes this code and returns an EXEC error. The stderr message is
    silenced, since the caller may be speculatively trying to run a
    command. Instead, we use trace_printf so that somebody interested in
     debugging can see the error that occurred.

  - on Windows, we check errno, which is already set correctly by
    mingw_spawnvpe, and report an EXEC error instead of a FORK error

Thus it is safe to speculatively run a command:

  int r = run_command_v_opt(argv, 0);
  if (r == -ERR_RUN_COMMAND_EXEC)
	  /* oops, it wasn't found; try something else */
  else
	  /* we failed for some other reason, error is in r */

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-01-28 14:08:57 -08:00
Stephan Beyer
14e6298f12 run_hook(): allow more than 9 hook arguments
This is done using the ALLOC_GROW macro.

Signed-off-by: Stephan Beyer <s-beyer@gmx.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-01-17 17:57:15 -08:00
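
A sketch of the growth idiom involved (ALLOC_GROW is git's macro for
keeping an array's allocation at least as large as requested; the
surrounding names are illustrative):

    #include "cache.h"

    /* Collect an arbitrary number of hook arguments instead of using a
     * fixed-size argv[10]; the array is grown on demand. */
    static void push_hook_arg(const char ***argv, int *nr, int *alloc,
                              const char *arg)
    {
        ALLOC_GROW(*argv, *nr + 1, *alloc);
        (*argv)[(*nr)++] = arg;
    }
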