Commit Graph

190 Commits

Author SHA1 Message Date
Stefan Beller
d807c4a01d exec_cmd: rename to use dash in file name
This is more consistent with the project style. The majority of Git's
source files use dashes in preference to underscores in their file names.

Signed-off-by: Stefan Beller <sbeller@google.com>
2018-04-11 18:11:00 +09:00
René Scharfe
248f66ed8e run-command: use strbuf_addstr() for adding a string to a strbuf
Patch generated with Coccinelle and contrib/coccinelle/strbuf.cocci.

Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-25 09:49:15 -07:00
Nguyễn Thái Ngọc Duy
090a09272a run-command.c: print new cwd in trace_run_command()
If a command sets a new env variable GIT_DIR=.git, we need more context
to know what that '.git' is relative to.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-01-19 10:49:20 -08:00
Nguyễn Thái Ngọc Duy
c61a975df1 run-command.c: print env vars in trace_run_command()
Occasionally the submodule code executes new commands with GIT_DIR set
to some submodule. GIT_TRACE prints just the command line, which makes
it hard to tell that the command is not actually run on the current
repository.

Print the env delta (compared to parent environment) in this case.

Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-01-19 10:49:20 -08:00
Nguyễn Thái Ngọc Duy
21dfc5e08f run-command.c: print program 'git' when tracing git_cmd mode
We normally print the full command line, including the program and its
arguments. When git_cmd is set, we have a special code path to run the
right "git" program, and child_process.argv[0] will not contain the
program name anymore. As a result, we print just the command
arguments.

I thought it was a regression when the code was refactored and git_cmd
added, but apparently it's not. git_cmd mode was introduced before
tracing was added in 8852f5d704 (run_command(): respect GIT_TRACE -
2008-07-07) so it's more like an oversight in 8852f5d704.

Fix it by printing the program name "git" in git_cmd mode. It's nice to
have now, but it will be more important later when we start to print env
variables too, in shell syntax; the lack of a program name would look
confusing then.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-01-19 10:49:20 -08:00
Nguyễn Thái Ngọc Duy
e73dd78699 run-command.c: introduce trace_run_command()
This is the same as the old code that uses trace_argv_printf() in
run-command.c. This function will be improved in later patches to
print more information from struct child_process.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-01-19 10:49:20 -08:00
Damien Marié
f805a00a39 run-command: add hint when a hook is ignored
When a hook is present but the file is not set as executable, git will
silently ignore the hook, which can be confusing.

This commit adds this warning to improve the situation:

  hint: The 'pre-commit' hook was ignored because it's not set as executable.
  hint: You can disable this warning with `git config advice.ignoredHook false`

To allow the old use-case of enabling/disabling hooks via the executable
flag, a new setting is introduced: advice.ignoredHook.

Signed-off-by: Damien Marié <damien@dam.io>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-10 13:21:46 +09:00
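
A minimal C sketch of the check described in the commit above (a
hypothetical helper, not git's actual find_hook() code; the hook path
and the advice.ignoredHook value are assumed to be supplied by the
caller):

    #include <stdio.h>
    #include <unistd.h>

    /* Sketch: warn when a hook file exists but lacks the executable bit,
     * unless the user disabled the advice via advice.ignoredHook. */
    static void hint_ignored_hook(const char *path, int advice_ignored_hook)
    {
            if (access(path, F_OK))
                    return;         /* no hook at all: nothing to report */
            if (!access(path, X_OK))
                    return;         /* executable: the hook will run */
            if (advice_ignored_hook)
                    fprintf(stderr,
                            "hint: The '%s' hook was ignored because "
                            "it's not set as executable.\n"
                            "hint: You can disable this warning with "
                            "`git config advice.ignoredHook false`\n", path);
    }
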
René Scharfe
0e187d758c run-command: use ALLOC_ARRAY
Use the macro ALLOC_ARRAY to allocate an array.  This is shorter and
easier, as it automatically infers the size of elements.

Patch generated with Coccinelle and contrib/coccinelle/array.cocci.

Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-03 08:42:57 +09:00
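
For illustration, the conversion replaces a manually counted xmalloc()
with the macro (variable names here are hypothetical; the macro body is
quoted from git-compat-util.h from memory):

    /* before: element size and count multiplied by hand */
    children = xmalloc(sizeof(*children) * n);

    /* after: ALLOC_ARRAY infers the element size and guards the
     * multiplication against overflow, roughly
     *   #define ALLOC_ARRAY(x, alloc) \
     *           (x) = xmalloc(st_mult(sizeof(*(x)), (alloc)))
     */
    ALLOC_ARRAY(children, n);
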
Junio C Hamano
0869277033 Merge branch 'js/run-process-parallel-api-fix' into maint
API fix.

* js/run-process-parallel-api-fix:
  run_processes_parallel: change confusing task_cb convention
2017-08-23 14:33:49 -07:00
Johannes Schindelin
c1e860f1dc run_processes_parallel: change confusing task_cb convention
By declaring the task_cb parameter of type `void **`, the signature of
the get_next_task method suggests that the "task-specific cookie" can be
defined in that method, while the signatures of the start_failure and
task_finished methods declare that parameter of type `void *`,
suggesting that those methods are mere users of said cookie.

That convention makes a lot of sense, because the tasks are pretty much
dead when one of the latter two methods is called: there would be little
use in resetting that cookie at that point, because nobody would be able
to see the change afterwards.

However, this is not what the code actually does. For all three methods,
it passes the *address* of pp->children[i].data.

As reasoned above, this behavior makes no sense. So let's change the
implementation to adhere to the convention suggested by the signatures.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Acked-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-07-21 11:58:46 -07:00
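
The resulting convention, as declared in run-command.h (prototypes
paraphrased; details may differ between versions): only get_next_task
receives a `void **` and may define the per-task cookie, while the later
callbacks receive a plain `void *` and merely read it.

    typedef int (*get_next_task_fn)(struct child_process *cp,
                                    struct strbuf *out,
                                    void *pp_cb,
                                    void **pp_task_cb); /* may set the cookie */

    typedef int (*start_failure_fn)(struct strbuf *out,
                                    void *pp_cb,
                                    void *pp_task_cb);  /* only reads it */

    typedef int (*task_finished_fn)(int result,
                                    struct strbuf *out,
                                    void *pp_cb,
                                    void *pp_task_cb);  /* only reads it */
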
Brandon Williams
940283101c run-command: restrict PATH search to executable files
In some situations run-command will incorrectly try (and fail) to
execute a directory instead of an executable file.  This was observed by
having a directory called "ssh" in $PATH before the real ssh and trying
to use the ssh protocol, resulting in the following:

	$ git ls-remote ssh://url
	fatal: cannot exec 'ssh': Permission denied

It ends up being worse: run-command will even try to execute a
non-executable file if it precedes the executable version of a file on
the PATH.  For example, if PATH=~/bin1:~/bin2:~/bin3 and there exists a
directory 'git-hello' in 'bin1', a non-executable file 'git-hello' in
'bin2', and an executable file 'git-hello' (which prints "Hello World!")
in 'bin3', the following will occur:

	$ git hello
	fatal: cannot exec 'git-hello': Permission denied

This is due to only checking 'access()' when locating an executable in
PATH, which doesn't distinguish between files and directories.  Instead
use 'is_executable()', which checks that the path is a regular,
executable file.  Now run-command won't try to execute the directory or
non-executable file 'git-hello':

	$ git hello
	Hello World!

which matches what execvp(3) would have done when asked to execute
git-hello with such a $PATH.

Reported-by: Brian Hatfield <bhatfield@google.com>
Signed-off-by: Brandon Williams <bmwill@google.com>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-25 23:17:36 -07:00
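
A simplified sketch of the check that makes the difference (modelled on
the is_executable() mentioned above, but not the literal code): require
a regular file with an execute bit, rather than relying on access()
alone, which also accepts directories:

    #include <sys/stat.h>

    /* Sketch: only a regular file with an execute bit counts as a
     * runnable command; a directory or a plain data file does not. */
    static int is_executable_file(const char *name)
    {
            struct stat st;

            if (stat(name, &st) || !S_ISREG(st.st_mode))
                    return 0;
            return st.st_mode & (S_IXUSR | S_IXGRP | S_IXOTH);
    }
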
Brandon Williams
38124a40e4 run-command: expose is_executable function
Move the logic for 'is_executable()' from help.c to run_command.c and
expose it so that callers from outside help.c can access the function.
This is to enable run-command to be able to query if a file is
executable in a future patch.

Signed-off-by: Brandon Williams <bmwill@google.com>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-25 18:45:29 -07:00
Eric Wong
45afb1ca9c run-command: block signals between fork and execve
Signal handlers of the parent firing in the forked child may
have unintended side effects.  Rather than auditing every signal
handler we have and will ever have, block signals while forking
and restore default signal handlers in the child before execve.

Restoring default signal handlers in the child is required because
execve does not unblock signals; it only resets caught signals to their
default handlers.  So we must unblock the signals with sigprocmask
before execve, leaving a window in which signal handlers we control
could fire in the child.  Continue ignoring ignored signals, but reset
the rest to their defaults.

Similarly, disable pthread cancellation to future-proof our code in
case we start using cancellation, as cancellation is implemented with
signals in glibc.

Signed-off-by: Eric Wong <e@80x24.org>
Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-20 17:55:32 -07:00
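
A condensed POSIX sketch of the sequence described above (not the
literal run-command.c code): block all signals around fork(), then have
the child reset caught signals to their defaults and unblock them just
before execve():

    #include <signal.h>
    #include <unistd.h>

    static pid_t fork_then_exec(char *const argv[], char *const envp[])
    {
            sigset_t all, old;
            pid_t pid;

            sigfillset(&all);
            sigprocmask(SIG_SETMASK, &all, &old);   /* block everything */

            pid = fork();
            if (!pid) {
                    int sig;
                    struct sigaction sa;

                    for (sig = 1; sig < 32; sig++) {
                            /* keep SIG_IGN, reset handled signals */
                            if (!sigaction(sig, NULL, &sa) &&
                                sa.sa_handler != SIG_IGN) {
                                    sa.sa_handler = SIG_DFL;
                                    sigaction(sig, &sa, NULL);
                            }
                    }
                    sigprocmask(SIG_SETMASK, &old, NULL); /* unblock */
                    execve(argv[0], argv, envp);
                    _exit(127);
            }
            sigprocmask(SIG_SETMASK, &old, NULL);   /* restore parent mask */
            return pid;
    }
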
Brandon Williams
e503cd6ed3 run-command: add note about forking and threading
All non-Async-Signal-Safe functions (e.g. malloc and die) were removed
between 'fork' and 'exec' in start_command in order to avoid potential
deadlocking when forking while multiple threads are running.  This
deadlocking is possible when a thread (other than the one forking) has
acquired a lock and didn't get around to releasing it before the fork.
This leaves the lock in a locked state in the resulting process with no
hope of it ever being released.

Add a note describing this potential pitfall before the call to 'fork()'
so people working in this section of the code know to only use
Async-Signal-Safe functions in the child process.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-20 17:55:32 -07:00
Brandon Williams
53fa6753b3 run-command: handle dup2 and close errors in child
Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-20 17:55:32 -07:00
Brandon Williams
79319b1949 run-command: eliminate calls to error handling functions in child
All of our standard error handling paths have the potential to
call malloc or take stdio locks; so we must avoid them inside
the forked child.

Instead, the child only writes an 8 byte struct atomically to
the parent through the notification pipe to propagate an error.
All user-visible error reporting happens from the parent;
even avoiding functions like atexit(3) and exit(3).

Helped-by: Eric Wong <e@80x24.org>
Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-20 17:55:32 -07:00
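
A sketch of the mechanism described above (struct layout and names are
illustrative, not git's actual ones): the child records which step
failed plus errno in a small fixed-size record, writes it atomically to
the notification pipe, and exits; the parent formats the message.

    #include <errno.h>
    #include <unistd.h>

    struct child_error {
            int step;       /* which operation failed (dup2, chdir, exec, ...) */
            int syserr;     /* errno at the time of the failure */
    };

    static void child_fail(int notify_fd, int step)
    {
            struct child_error e = { step, errno };

            /* write(2) is async-signal-safe, and a write of at most
             * PIPE_BUF bytes to a pipe is atomic; die()/error() are not
             * safe to call here. */
            while (write(notify_fd, &e, sizeof(e)) < 0 && errno == EINTR)
                    ;
            _exit(127);
    }
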
Brandon Williams
db015a284e run-command: don't die in child when duping /dev/null
Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-20 17:55:32 -07:00
Brandon Williams
ae25394b4c run-command: prepare child environment before forking
In order to avoid allocation between 'fork()' and 'exec()', prepare the
environment to be used in the child process prior to forking.

Switch to using 'execve()' so that the constructed child environment can
be used in the exec'd process.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-20 17:55:32 -07:00
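
A sketch of the shape this gives the code (simplified; git additionally
merges the caller's env deltas with the parent environment while
preparing): everything that allocates happens before fork(), and the
child only makes async-signal-safe calls:

    #include <unistd.h>

    /* envp and argv are fully prepared (and allocated) by the caller
     * before this is called; nothing between fork() and execve()
     * allocates memory. */
    static pid_t spawn_prepared(char *const argv[], char *const envp[])
    {
            pid_t pid = fork();

            if (!pid) {
                    execve(argv[0], argv, envp);
                    _exit(127);     /* exec failed; parent reports it */
            }
            return pid;
    }
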
Brandon Williams
e3a434468f run-command: use the async-signal-safe execv instead of execvp
Convert the function used to exec from 'execvp()' to 'execv()' as the (p)
variant of exec isn't async-signal-safe and has the potential to call malloc
during the path resolution it performs.  Instead we simply do the path
resolution ourselves during the preparation stage prior to forking.  There
are also no portable (p) variants that take an environment to use in the
exec'd process.  This allows easy migration to using 'execve()' in a
future patch.

Also, as noted in [1], in the event of an ENOEXEC the (p) variants of
exec will attempt to execute the command by interpreting it with the
'sh' utility.  To maintain this functionality, if 'execv()' fails with
ENOEXEC, start_command will attempt to execute the command by
interpreting it with 'sh'.

[1] http://pubs.opengroup.org/onlinepubs/009695399/functions/exec.html

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-20 17:55:32 -07:00
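
A sketch of the fallback described above (argument handling simplified;
in git the original arguments are preserved when re-running through the
shell): exec the resolved path directly, and on ENOEXEC hand the file to
'sh', mimicking what execvp() would have done:

    #include <errno.h>
    #include <unistd.h>

    static void exec_or_fall_back_to_sh(const char *path, char *const argv[])
    {
            execv(path, argv);
            if (errno == ENOEXEC) {
                    /* no #! line and not a binary: let sh interpret it */
                    char *shell_argv[] = { "sh", (char *)path, NULL };
                    execv("/bin/sh", shell_argv);
            }
            _exit(127);
    }
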
Brandon Williams
3967e25be1 run-command: prepare command before forking
According to [1], we may only call async-signal-safe operations between
fork and exec.  Using malloc to build the argv array isn't
async-signal-safe.

In order to avoid allocation between 'fork()' and 'exec()', prepare the
argv array used in the exec call prior to forking the process.

[1] http://pubs.opengroup.org/onlinepubs/009695399/functions/fork.html

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-20 17:55:32 -07:00
Junio C Hamano
6756b58ebc Merge branch 'jk/execv-dashed-external'
Fix for NO_PTHREADS build.

* jk/execv-dashed-external:
  run-command: fix segfault when cleaning forked async process
2017-03-24 13:07:34 -07:00
Jeff King
7b91929ba0 run-command: fix segfault when cleaning forked async process
Callers of the run-command API may mark a child as
"clean_on_exit"; it gets added to a list and killed when the
main process dies.  Since commit 46df6906f
(execv_dashed_external: wait for child on signal death,
2017-01-06), we respect an extra "wait_after_clean" flag,
which we expect to find in the child_process struct.

When Git is built with NO_PTHREADS, we start "struct
async" processes by forking rather than spawning a thread.
The resulting processes get added to the cleanup list but
they don't have a child_process struct, and the cleanup
function ends up dereferencing NULL.

We should notice this case and assume that the processes do
not need to be waited for (i.e., the same behavior they had
before 46df6906f).

Reported-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-18 10:29:15 -07:00
Junio C Hamano
cddbda4bc8 Merge branch 'js/mingw-hooks-with-exe-suffix'
Names of the various hook scripts must be spelled exactly, but on
Windows, a binary must be named with the .exe suffix; recognize
$GIT_DIR/hooks/<hookname>.exe as a valid <hookname> hook.

* js/mingw-hooks-with-exe-suffix:
  mingw: allow hooks to be .exe files
2017-02-02 13:36:57 -08:00
Johannes Schindelin
235be51fbe mingw: allow hooks to be .exe files
Executable files on Windows need to have the extension '.exe', otherwise
they do not work. Extend the hooks to look not just at the hard-coded
names, but also at the names extended by the custom STRIP_EXTENSION,
which is defined as '.exe' on Windows.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-01-30 08:49:43 -08:00
Jeff King
46df6906f3 execv_dashed_external: wait for child on signal death
When you hit ^C to interrupt a git command going to a pager,
this usually leaves the pager running. But when a dashed
external is in use, the pager ends up in a funny state and
quits (but only after eating one more character from the
terminal!). This fixes it.

Explaining the reason will require a little background.

When git runs a pager, it's important for the git process to
hang around and wait for the pager to finish, even though it
has no more data to feed it. This is because git spawns the
pager as a child, and thus the git process is the session
leader on the terminal. After it dies, the pager will finish
its current read from the terminal (eating the one
character), and then get EIO trying to read again.

When you hit ^C, that sends SIGINT to git and to the pager,
and it's a similar situation.  The pager ignores it, but the
git process needs to hang around until the pager is done. We
addressed that long ago in a3da882120 (pager: do
wait_for_pager on signal death, 2009-01-22).

But when you have a dashed external (or an alias pointing to
a builtin, which will re-exec git for the builtin), there's
an extra process in the mix. For instance, running:

  $ git -c alias.l=log l

will end up with a process tree like:

  git (parent)
    \
     git-log (child)
      \
       less (pager)

If you hit ^C, SIGINT goes to all of them. The pager ignores
it, and the child git process will end up in wait_for_pager().
But the parent git process will die, and the usual EIO
trouble happens.

So we really want the parent git process to wait_for_pager(),
but of course it doesn't know anything about the pager at
all, since it was started by the child.  However, we can
have it wait on the git-log child, which in turn is waiting
on the pager. And that's what this patch does.

There are a few design decisions here worth explaining:

  1. The new feature is attached to run-command's
     clean_on_exit feature. Partly this is convenience,
     since that feature already has a signal handler that
     deals with child cleanup.

     But it's also a meaningful connection. The main reason
     that dashed externals use clean_on_exit is to bind the
     two processes together. If somebody kills the parent
     with a signal, we propagate that to the child (in this
     instance with SIGINT, we do propagate but it doesn't
     matter because the original signal went to the whole
     process group). Likewise, we do not want the parent
     to go away until the child has done so.

     In a traditional Unix world, we'd probably accomplish
     this binding by just having the parent execve() the
     child directly. But since that doesn't work on Windows,
     everything goes through run_command's more spawn-like
     interface.

  2. We do _not_ automatically waitpid() on any
     clean_on_exit children. For dashed externals this makes
     sense; we know that the parent is doing nothing but
     waiting for the child to exit anyway. But with other
     children, it's possible that the child, after getting
     the signal, could be waiting on the parent to do
     something (like closing a descriptor). If we were to
     wait on such a child, we'd end up in a deadlock. So
     this errs on the side of caution, and lets callers
     enable the feature explicitly.

  3. When we send children the cleanup signal, we send all
     the signals first, before waiting on any children. This
     is to avoid the case where one child might be waiting
     on another one to exit, causing a deadlock. We inform
     all of them that it's time to die before reaping any.

     In practice, there is only ever one dashed external run
     from a given process, so this doesn't matter much now.
     But it future-proofs us if other callers start using
     the wait_after_clean mechanism.

There's no automated test here, because it would end up racy
and unportable. But it's easy to reproduce the situation by
running the log command given above and hitting ^C.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-01-09 13:41:40 -08:00
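
A sketch of how a caller opts in to the binding described above (flag
usage as described in the message; the surrounding setup is
hypothetical):

    #include "cache.h"
    #include "run-command.h"

    static void run_dashed_like_child(void)
    {
            struct child_process cmd = CHILD_PROCESS_INIT;

            cmd.git_cmd = 1;
            argv_array_push(&cmd.args, "log");
            cmd.clean_on_exit = 1;      /* kill the child if we are killed ... */
            cmd.wait_after_clean = 1;   /* ... and wait for it after signalling */

            if (start_command(&cmd))
                    die("unable to start child");
            finish_command(&cmd);
    }
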
Lars Schneider
ac2fbaa674 run-command: add clean_on_exit_handler
Some processes might want to perform cleanup tasks before Git kills them
due to the 'clean_on_exit' flag. Let's give them an interface for doing
this. The feature is used in a subsequent patch.

Please note that the cleanup callback is not executed if Git dies of a
signal. The reason is that only "async-signal-safe" functions would be
allowed to be called in that case. Since we cannot control what functions
the callback will use, we do not support that case. See 507d7804 for
more details.

Helped-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Lars Schneider <larsxschneider@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-10-17 11:36:50 -07:00
Lars Schneider
b992fe104e run-command: move check_pipe() from write_or_die to run_command
Move check_pipe() to run_command and make it public. This is necessary
to call the function from pkt-line in a subsequent patch.

While at it, make async_exit() static to run_command.c as it is no
longer used from outside.

Signed-off-by: Lars Schneider <larsxschneider@gmail.com>
Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-10-17 11:36:49 -07:00
Junio C Hamano
d05d0e9966 Merge branch 'ab/hooks'
"git rev-parse --git-path hooks/<hook>" learned to take
core.hooksPath configuration variable (introduced during 2.9 cycle)
into account.

* ab/hooks:
  rev-parse: respect core.hooksPath in --git-path
2016-08-19 15:34:16 -07:00
Johannes Schindelin
9445b4921e rev-parse: respect core.hooksPath in --git-path
The idea of the --git-path option is not only to avoid having to
prefix paths with the output of --git-dir all the time, but also to
respect overrides for specific common paths inside the .git directory
(e.g. `git rev-parse --git-path objects` will report the value of the
environment variable GIT_OBJECT_DIRECTORY, if set).

When introducing the core.hooksPath setting, we forgot to adjust
git_path() accordingly. This patch fixes that.

While at it, revert the special-casing of core.hooksPath in
run-command.c, as it is now no longer needed.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-08-16 12:03:26 -07:00
Jeff King
96335bcf4d run-command: add pipe_command helper
We already have capture_command(), which captures the stdout
of a command in a way that avoids deadlocks. But sometimes
we need to do more I/O, like capturing stderr as well, or
sending data to stdin. It's easy to write code that
deadlocks racily in these situations depending on how fast
the command reads its input, or in which order it writes its
output.

Let's give callers an easy interface for doing this the
right way, similar to what capture_command() did for the
simple case.

The whole thing is backed by a generic poll() loop that can
feed an arbitrary number of buffers to descriptors, and fill
an arbitrary number of strbufs from other descriptors. This
seems like overkill, but the resulting code is actually a
bit cleaner than just handling the three descriptors
(because the output code for stdout/stderr is effectively
duplicated, so being able to loop is a benefit).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-06-17 17:03:56 -07:00
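
A usage sketch of the new helper (prototype paraphrased from the patch:
command, stdin data and length, then strbuf/hint pairs for stdout and
stderr); the example command is arbitrary:

    #include "cache.h"
    #include "run-command.h"

    static int hash_stdin_data(const char *data, size_t len)
    {
            struct child_process cmd = CHILD_PROCESS_INIT;
            struct strbuf out = STRBUF_INIT, err = STRBUF_INIT;
            int ret;

            cmd.git_cmd = 1;
            argv_array_pushl(&cmd.args, "hash-object", "--stdin", NULL);

            /* feeds `data` to stdin and drains stdout/stderr without
             * risking a pipe deadlock */
            ret = pipe_command(&cmd, data, len, &out, 0, &err, 0);
            if (ret)
                    warning("hash-object failed: %s", err.buf);
            else
                    printf("%s", out.buf);

            strbuf_release(&out);
            strbuf_release(&err);
            return ret;
    }
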
Junio C Hamano
40cfc95856 Merge branch 'nd/error-errno'
The code for warning_errno/die_errno has been refactored and a new
error_errno() reporting helper is introduced.

* nd/error-errno: (41 commits)
  wrapper.c: use warning_errno()
  vcs-svn: use error_errno()
  upload-pack.c: use error_errno()
  unpack-trees.c: use error_errno()
  transport-helper.c: use error_errno()
  sha1_file.c: use {error,die,warning}_errno()
  server-info.c: use error_errno()
  sequencer.c: use error_errno()
  run-command.c: use error_errno()
  rerere.c: use error_errno() and warning_errno()
  reachable.c: use error_errno()
  mailmap.c: use error_errno()
  ident.c: use warning_errno()
  http.c: use error_errno() and warning_errno()
  grep.c: use error_errno()
  gpg-interface.c: use error_errno()
  fast-import.c: use error_errno()
  entry.c: use error_errno()
  editor.c: use error_errno()
  diff-no-index.c: use error_errno()
  ...
2016-05-17 14:38:28 -07:00
Junio C Hamano
6675f501f6 Merge branch 'ab/hooks'
A new configuration variable core.hooksPath allows customizing
where the hook directory is.

* ab/hooks:
  hooks: allow customizing where the hook directory is
  githooks.txt: minor improvements to the grammar & phrasing
  githooks.txt: amend dangerous advice about 'update' hook ACL
  githooks.txt: improve the intro section
2016-05-17 14:38:17 -07:00
Nguyễn Thái Ngọc Duy
fbcb0e0659 run-command.c: use error_errno()
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-05-09 12:29:08 -07:00
Ævar Arnfjörð Bjarmason
867ad08a26 hooks: allow customizing where the hook directory is
Change the hardcoded lookup for .git/hooks/* to optionally look in
$(git config core.hooksPath)/* instead.

This is essentially a more intrusive version of the git-init ability to
specify hooks at init time via init templates.

The difference between that facility and this feature is that this can
be set up after the fact via e.g. ~/.gitconfig or /etc/gitconfig to
apply for all your personal repositories, or all repositories on the
system.

I plan on using this on a centralized Git server where users can create
arbitrary repositories under /gitroot, but I'd like to manage all the
hooks that should be run centrally via a unified dispatch mechanism.

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-05-04 16:25:13 -07:00
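
A sketch of the resulting lookup order (a hypothetical helper; the real
change touches the hook lookup in run-command.c, and a later commit in
this log moves the logic into git_path()):

    #include "cache.h"

    /* Sketch: honor core.hooksPath when set, otherwise fall back to the
     * traditional $GIT_DIR/hooks directory. */
    static void get_hook_dir(struct strbuf *dir)
    {
            const char *hooks_path = NULL;

            if (!git_config_get_value("core.hookspath", &hooks_path) && hooks_path)
                    strbuf_addstr(dir, hooks_path);
            else
                    strbuf_git_path(dir, "hooks");
    }
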
Junio C Hamano
d689301043 Merge branch 'jk/push-client-deadlock-fix'
"git push" from a corrupt repository that attempts to push a large
number of refs deadlocked; the thread to relay rejection notices
for these ref updates blocked on writing them to the main thread,
after the main thread at the receiving end notices that the push
failed and decides not to read these notices and return a failure.

* jk/push-client-deadlock-fix:
  t5504: drop sigpipe=ok from push tests
  fetch-pack: isolate sigpipe in demuxer thread
  send-pack: isolate sigpipe in demuxer thread
  run-command: teach async threads to ignore SIGPIPE
  send-pack: close demux pipe before finishing async process
2016-04-29 12:59:08 -07:00
Jeff King
c792d7b6ce run-command: teach async threads to ignore SIGPIPE
Async processes can be implemented as separate forked
processes, or as threads (depending on the NO_PTHREADS
setting). In the latter case, if an async thread gets
SIGPIPE, it takes down the whole process. This is obviously
bad if the main process was not otherwise going to die, but
even if we were going to die, it means the main process does
not have a chance to report a useful error message.

There's also the small matter that forked async processes
will not take the main process down on a signal, meaning git
will behave differently depending on the NO_PTHREADS
setting.

This patch fixes it by adding a new flag to "struct async"
to block SIGPIPE just in the async thread. In theory, this
should always be on (which makes async threads behave more
like async processes), but we would first want to make sure
that each async process we spawn is careful about checking
return codes from write() and would not spew endlessly into
a dead pipe. So let's start with it as optional, and we can
enable it for specific sites in future patches.

The natural name for this option would be "ignore_sigpipe",
since that's what it does for the threaded case. But since
that name might imply that we are ignoring it in all cases
(including the separate-process one), let's call it
"isolate_sigpipe". What we are really asking for is
isolation. I.e., not to have our main process taken down by
signals spawned by the async process. How that is
implemented is up to the run-command code.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-04-20 13:33:53 -07:00
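
The threaded half of the flag boils down to blocking SIGPIPE in the
async thread, roughly as in this POSIX sketch (not the literal
run-command.c code), so that a dead pipe surfaces as EPIPE from write()
instead of killing the whole process:

    #include <pthread.h>
    #include <signal.h>

    static void isolate_sigpipe_in_this_thread(void)
    {
            sigset_t set;

            sigemptyset(&set);
            sigaddset(&set, SIGPIPE);
            /* affects only the calling thread, not the whole process */
            pthread_sigmask(SIG_BLOCK, &set, NULL);
    }
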
Junio C Hamano
bdebbeb334 Merge branch 'sb/submodule-parallel-update'
A major part of "git submodule update" has been ported to C to take
advantage of the recently added framework to run download tasks in
parallel.

* sb/submodule-parallel-update:
  clone: allow an explicit argument for parallel submodule clones
  submodule update: expose parallelism to the user
  submodule helper: remove double 'fatal: ' prefix
  git submodule update: have a dedicated helper for cloning
  run_processes_parallel: rename parameters for the callbacks
  run_processes_parallel: treat output of children as byte array
  submodule update: direct error message to stderr
  fetching submodules: respect `submodule.fetchJobs` config option
  submodule-config: drop check against NULL
  submodule-config: keep update strategy around
2016-04-06 11:39:01 -07:00
Junio C Hamano
b7a6ec609f Merge branch 'jk/tighten-alloc' into maint
* jk/tighten-alloc: (23 commits)
  compat/mingw: brown paper bag fix for 50a6c8e
  ewah: convert to REALLOC_ARRAY, etc
  convert ewah/bitmap code to use xmalloc
  diff_populate_gitlink: use a strbuf
  transport_anonymize_url: use xstrfmt
  git-compat-util: drop mempcpy compat code
  sequencer: simplify memory allocation of get_message
  test-path-utils: fix normalize_path_copy output buffer size
  fetch-pack: simplify add_sought_entry
  fast-import: simplify allocation in start_packfile
  write_untracked_extension: use FLEX_ALLOC helper
  prepare_{git,shell}_cmd: use argv_array
  use st_add and st_mult for allocation size computation
  convert trivial cases to FLEX_ARRAY macros
  use xmallocz to avoid size arithmetic
  convert trivial cases to ALLOC_ARRAY
  convert manual allocations to argv_array
  argv-array: add detach function
  add helpers for allocating flex-array structs
  harden REALLOC_ARRAY and xcalloc against size_t overflow
  ...
2016-03-10 11:13:43 -08:00
Junio C Hamano
bbe90e7950 Merge branch 'sb/submodule-parallel-fetch'
Simplify the two callback functions that are triggered when the
child process terminates to avoid misuse of the child-process
structure that has already been cleaned up.

* sb/submodule-parallel-fetch:
  run-command: do not pass child process data into callbacks
2016-03-04 13:46:30 -08:00
Stefan Beller
aa71049485 run_processes_parallel: rename parameters for the callbacks
The refs code has a similar pattern of passing around 'struct strbuf *err',
which is strictly used for error reporting. This is not the case here,
as the strbuf is used to accumulate all the output (whether it is error
or not) for the user. Rename it to 'out'.

Suggested-by: Jonathan Nieder <jrnieder@gmail.com>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-03-01 11:57:19 -08:00
Stefan Beller
2dac9b5637 run_processes_parallel: treat output of children as byte array
We do not want the output to be interrupted by a NUL byte, so we
cannot use raw fputs. Introduce strbuf_write to avoid having long
arguments in run-command.c.

Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-03-01 11:57:19 -08:00
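
The helper amounts to writing the buffer by length instead of as a
NUL-terminated string, roughly (a sketch, not the exact strbuf.c code):

    #include <stdio.h>
    #include "strbuf.h"

    /* Sketch: emit sb->len bytes from sb->buf, so a NUL byte in a
     * child's output is passed through instead of cutting the output
     * short the way fputs(sb->buf, f) would. */
    static size_t write_whole_buffer(const struct strbuf *sb, FILE *f)
    {
            return sb->len ? fwrite(sb->buf, 1, sb->len, f) : 0;
    }
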
Stefan Beller
2a73b3dad0 run-command: do not pass child process data into callbacks
The expected way to pass data into the callback is via the customizable
callback pointer. The error reporting in default_{start_failure,
task_finished} is not so user friendly that we would want to encourage
using the child data for such purposes.

Furthermore, the child process data is cleaned up by the run-command
API before we access it in the callbacks, leading to use-after-free
situations.

Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-03-01 09:42:01 -08:00
Junio C Hamano
8ef250c559 Merge branch 'jk/epipe-in-async'
Handling of errors while writing into our internal asynchronous
process has been made more robust, which reduces flakiness in our
tests.

* jk/epipe-in-async:
  t5504: handle expected output from SIGPIPE death
  test_must_fail: report number of unexpected signal
  fetch-pack: ignore SIGPIPE in sideband demuxer
  write_or_die: handle EPIPE in async threads
2016-02-26 13:37:26 -08:00
Junio C Hamano
11529ecec9 Merge branch 'jk/tighten-alloc'
Update various codepaths to avoid manually-counted malloc().

* jk/tighten-alloc: (22 commits)
  ewah: convert to REALLOC_ARRAY, etc
  convert ewah/bitmap code to use xmalloc
  diff_populate_gitlink: use a strbuf
  transport_anonymize_url: use xstrfmt
  git-compat-util: drop mempcpy compat code
  sequencer: simplify memory allocation of get_message
  test-path-utils: fix normalize_path_copy output buffer size
  fetch-pack: simplify add_sought_entry
  fast-import: simplify allocation in start_packfile
  write_untracked_extension: use FLEX_ALLOC helper
  prepare_{git,shell}_cmd: use argv_array
  use st_add and st_mult for allocation size computation
  convert trivial cases to FLEX_ARRAY macros
  use xmallocz to avoid size arithmetic
  convert trivial cases to ALLOC_ARRAY
  convert manual allocations to argv_array
  argv-array: add detach function
  add helpers for allocating flex-array structs
  harden REALLOC_ARRAY and xcalloc against size_t overflow
  tree-diff: catch integer overflow in combine_diff_path allocation
  ...
2016-02-26 13:37:16 -08:00
Jeff King
9658846ce3 write_or_die: handle EPIPE in async threads
When write_or_die() sees EPIPE, it treats it specially by
converting it into a SIGPIPE death. We obviously cannot
ignore it, as the write has failed and the caller expects us
to die. But likewise, we cannot just call die(), because
printing any message at all would be a nuisance during
normal operations.

However, this is a problem if write_or_die() is called from
a thread. Our raised signal ends up killing the whole
process, when logically we just need to kill the thread
(after all, if we are ignoring SIGPIPE, there is good reason
to think that the main thread is expecting to handle it).

Inside an async thread, the die() code already does the
right thing, because we use our custom die_async() routine,
which calls pthread_join(). So ideally we would piggy-back
on that, and simply call:

  die_quietly_with_code(141);

or similar. But refactoring the die code to do this is
surprisingly non-trivial. The die_routines themselves handle
both printing and the decision of the exit code. Every one
of them would have to be modified to take new parameters for
the code, and to tell us to be quiet.

Instead, we can just teach write_or_die() to check for the
async case and handle it specially. We do have to build an
interface to abstract the async exit, but it's simple and
self-contained. If we had many call-sites that wanted to do
this die_quietly_with_code(), this approach wouldn't scale
as well, but we don't. This is the only place where we do this
weird exit trick.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-02-25 13:51:45 -08:00
Jeff King
20574f551b prepare_{git,shell}_cmd: use argv_array
These functions transform an existing argv into one suitable
for exec-ing or spawning via git or a shell. We can use an
argv_array in each to avoid dealing with manual counting and
allocation.

This also makes the memory allocation more clear and fixes
some leaks. In prepare_shell_cmd, we would sometimes
allocate a new string with "$@" in it and sometimes not,
meaning the caller could not correctly free it. On the
non-Windows side, we are in a child process which will
exec() or exit() immediately, so the leak isn't a big deal.
On Windows, though, we use spawn() from the parent process,
and leak a string for each shell command we run. On top of
that, the Windows code did not free the allocated argv array
at all (but does for the prepare_git_cmd case!).

By switching both of these functions to write into an
argv_array, we can consistently free the result as
appropriate.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-02-22 14:51:09 -08:00
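
A sketch of what the shell preparation looks like with an argv_array
(simplified; details such as SHELL_PATH selection are omitted and the
exact transformation is paraphrased from memory):

    #include "cache.h"
    #include "argv-array.h"

    /* Sketch: turn { "echo hi $1", "world" } into
     *   { "sh", "-c", "echo hi $1 \"$@\"", "echo hi $1", "world" }
     * so the shell sees the original words as $0, $1, ... */
    static void build_shell_cmd(struct argv_array *out, const char **argv)
    {
            argv_array_push(out, "sh");
            argv_array_push(out, "-c");
            if (!argv[1])
                    argv_array_push(out, argv[0]);
            else
                    argv_array_pushf(out, "%s \"$@\"", argv[0]);
            argv_array_pushv(out, argv);
    }
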
Junio C Hamano
5135d1c3d2 Merge branch 'nd/clear-gitenv-upon-use-of-alias'
d95138e6 (setup: set env $GIT_WORK_TREE when work tree is set, like
$GIT_DIR, 2015-06-26) attempted to work around a glitch in alias
handling by overwriting GIT_WORK_TREE environment variable to
affect subprocesses when set_git_work_tree() gets called, which
resulted in a rather unpleasant regression to "clone" and "init".
Try to address the same issue by always restoring the environment
and respawning the real underlying command when handling alias.

* nd/clear-gitenv-upon-use-of-alias:
  run-command: don't warn on SIGPIPE deaths
  git.c: make sure we do not leak GIT_* to alias scripts
  setup.c: re-fix d95138e (setup: set env $GIT_WORK_TREE when ..
  git.c: make it clear save_env() is for alias handling only
2016-01-20 11:43:26 -08:00
Jeff King
ac78663b0d run-command: don't warn on SIGPIPE deaths
When git executes a sub-command, we print a warning if the
command dies due to a signal, but make an exception for
"uninteresting" cases like SIGINT and SIGQUIT (since the
user presumably just hit ^C).

We should make a similar exception for SIGPIPE, because it's
an expected and uninteresting return in most cases; it
generally means the user quit the pager before git had
finished generating all output.  This used to be very hard
to trigger in practice, because:

  1. We only complain if we see a real SIGPIPE death, not
     the shell-induced 141 exit code. This means that
     anything we run via the shell does not trigger the
     warning, which includes most non-trivial aliases.

  2. The common case for SIGPIPE is the user quitting the
     pager before git has finished generating all output.
     But if the user triggers a pager with "-p", we redirect
     the git wrapper's stderr to that pager, too.  Since the
     pager is dead, it means that the message goes nowhere.

  3. You can see it if you run your own pager, like
     "git foo | head". But that only happens if "foo" is a
     non-builtin (so it doesn't work with "log", for
     example).

However, it may become more common after 86d26f2, which
teaches alias to re-exec builtins rather than running them
in the same process. This case doesn't trigger (1), as we
don't need a shell to run a git command. It doesn't trigger
(2), because the pager is not started by the original git,
but by the inner re-exec of git. And it doesn't trigger (3),
because builtins are treated more like non-builtins in this
case.

Given how flaky this message already is (e.g., you cannot
even know whether you will see it, as git optimizes out some
shell invocations behind the scenes based on the contents of
the command!), and that it is unlikely to ever provide
useful information, let's suppress it for all cases of
SIGPIPE.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-12-29 11:05:11 -08:00
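
A sketch of the resulting policy (not the literal wait_or_whine() code):
report a child's signal death, but stay quiet for the "uninteresting"
signals, which now include SIGPIPE:

    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>

    static void report_child_death(const char *name, int status)
    {
            if (WIFSIGNALED(status)) {
                    int sig = WTERMSIG(status);

                    if (sig != SIGINT && sig != SIGQUIT && sig != SIGPIPE)
                            fprintf(stderr, "error: %s died of signal %d\n",
                                    name, sig);
            }
    }
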
Stefan Beller
c553c72eed run-command: add an asynchronous parallel child processor
This allows us to run external commands in parallel with ordered output
on stderr.

If we run external commands in parallel we cannot pipe the output
directly to our stdout/err as it would get mixed up. So each process's
output will flow through a pipe, which we buffer. One subprocess can be
directly piped to our stdout/err for low-latency feedback to the user.

Example:
Let's assume we have 5 submodules A,B,C,D,E and each fetch takes a
different amount of time as the different submodules vary in size, then
the output of fetches in sequential order might look like this:

 time -->
 output: |---A---| |-B-| |-------C-------| |-D-| |-E-|

When we schedule these submodules into maximal two parallel processes,
a schedule and sample output over time may look like this:

process 1: |---A---| |-D-| |-E-|

process 2: |-B-| |-------C-------|

output:    |---A---|B|---C-------|DE

So A will be perceived as if it ran normally in the single-child
version. As B has finished by the time A is done, we can dump its whole
progress buffer on stderr, such that it looks like it finished in no
time. Once that is done, C is determined to be the visible child and
its progress will be reported in real time.

This way of presenting output is really good for human consumption, as
it only changes the timing, not the actual output.

For machine consumption the output needs to be prepared in the tasks, by
having a prefix per line or per block to indicate whose task's output is
displayed, because the output order may not follow the original
sequential ordering:

 |----A----| |--B--| |-C-|

will be scheduled to be all parallel:

process 1: |----A----|
process 2: |--B--|
process 3: |-C-|
output:    |----A----|CB

This happens because C finished before B did, so it will be queued for
output before B.

To detect when a child has finished executing, we check, interleaved
with other actions (such as checking the liveness of children or
starting new processes), whether the stderr pipe still exists. Once a
child has closed its stderr stream, we assume it is terminating very
soon, and use `finish_command()` from the single external process
execution interface to collect the exit status.

By maintaining the strong assumption of stderr being open until the
very end of a child process, we can avoid other hassle such as an
implementation using `waitpid(-1)`, which is not available on Windows.

Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-12-16 12:06:08 -08:00
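
A minimal end-to-end sketch of driving the API (callback prototypes as
they look after the later cleanups shown earlier in this log; the exact
signatures changed a few times, and passing NULL for a callback falls
back to the default_{start_failure, task_finished} helpers mentioned
elsewhere in this log): run `git version` three times, at most two at
once:

    #include "cache.h"
    #include "run-command.h"

    static int next_task(struct child_process *cp, struct strbuf *out,
                         void *pp_cb, void **pp_task_cb)
    {
            int *remaining = pp_cb;

            if (!*remaining)
                    return 0;               /* no more work to hand out */
            (*remaining)--;

            cp->git_cmd = 1;
            argv_array_push(&cp->args, "version");
            return 1;                       /* one task started */
    }

    static int task_done(int result, struct strbuf *out,
                         void *pp_cb, void *pp_task_cb)
    {
            return 0;                       /* 0 = keep the machinery going */
    }

    static void run_three_versions(void)
    {
            int remaining = 3;

            /* at most 2 children at a time; their stderr is buffered and
             * replayed in order, as described above */
            run_processes_parallel(2, next_task, NULL, task_done, &remaining);
    }
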
Junio C Hamano
c3c592ef95 Merge branch 'rs/daemon-plug-child-leak'
"git daemon" uses "run_command()" without "finish_command()", so it
needs to release resources itself, which it forgot to do.

* rs/daemon-plug-child-leak:
  daemon: plug memory leak
  run-command: factor out child_process_clear()
2015-11-03 15:13:12 -08:00