Merge branch 'main' of github.com:git/git

* 'main' of github.com:git/git: (45 commits)
  A bit more of remaining topics before -rc1
  t1800: correct test to handle Cygwin
  chainlint: colorize problem annotations and test delimiters
  ls-files: fix black space in error message
  list-objects-filter: convert filter_spec to a strbuf
  list-objects-filter: add and use initializers
  list-objects-filter: handle null default filter spec
  list-objects-filter: don't memset after releasing filter struct
  builtin/mv.c: fix possible segfault in add_slash()
  Documentation/technical: include Scalar technical doc
  t/perf: add 'GIT_PERF_USE_SCALAR' run option
  t/perf: add Scalar performance tests
  scalar-clone: add test coverage
  scalar: add to 'git help -a' command list
  scalar: implement the `help` subcommand
  git help: special-case `scalar`
  scalar: include in standard Git build & installation
  scalar: fix command documentation section header
  t: retire unused chainlint.sed
  t/Makefile: teach `make test` and `make prove` to run chainlint.pl
  ...
This commit is contained in:
Jiang Xin 2022-09-21 08:13:27 +08:00
commit 2e2f4dd1e6
109 changed files with 2188 additions and 749 deletions

.gitignore

@@ -185,6 +185,7 @@
/git-whatchanged
/git-worktree
/git-write-tree
/scalar
/git-core-*/?*
/git.res
/gitweb/GITWEB-BUILD-OPTIONS


@@ -21,6 +21,7 @@ MAN1_TXT += $(filter-out \
MAN1_TXT += git.txt
MAN1_TXT += gitk.txt
MAN1_TXT += gitweb.txt
MAN1_TXT += scalar.txt
# man5 / man7 guides (note: new guides should also be added to command-list.txt)
MAN5_TXT += gitattributes.txt
@@ -116,6 +117,7 @@ TECH_DOCS += technical/parallel-checkout
TECH_DOCS += technical/partial-clone
TECH_DOCS += technical/racy-git
TECH_DOCS += technical/reftable
TECH_DOCS += technical/scalar
TECH_DOCS += technical/send-pack-pipeline
TECH_DOCS += technical/shallow
TECH_DOCS += technical/trivial-merge


@@ -6,7 +6,7 @@ UI, Workflows & Features
* "git remote show [-n] frotz" now pays attention to negative
pathspec.
* "git push" sometimes perform poorly when reachability bitmaps are
* "git push" sometimes performs poorly when reachability bitmaps are
used, even in a repository where other operations are helped by
bitmaps. The push.useBitmaps configuration variable is introduced
to allow disabling use of reachability bitmaps only for "git push".
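As a sketch of the new knob in use (in a throwaway repository; the temporary path is just for illustration):

```shell
# Disable reachability bitmaps for "git push" only, in a scratch repo.
repo=$(mktemp -d)
git init -q "$repo"
git -C "$repo" config push.useBitmaps false
git -C "$repo" config push.useBitmaps    # prints: false
```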
@@ -27,7 +27,7 @@ UI, Workflows & Features
what locale they are in by sending Accept-Language HTTP header, but
this was done only for some requests but not others.
* Introduce a discovery.barerepository configuration variable that
* Introduce a safe.barerepository configuration variable that
allows users to forbid discovery of bare repositories.
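A minimal sketch of opting in (pointing `GIT_CONFIG_GLOBAL` at a scratch file so no persistent configuration is touched):

```shell
# Forbid discovery of bare repositories unless they are named explicitly.
export GIT_CONFIG_GLOBAL=$(mktemp)
git config --global safe.bareRepository explicit
git config --global safe.bareRepository    # prints: explicit
```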
* Various messages that come from the pack-bitmap codepaths have been
@@ -79,12 +79,15 @@ UI, Workflows & Features
* "git format-patch --from=<ident>" can be told to add an in-body
"From:" line even for commits that are authored by the given
<ident> with "--force-in-body-from"option.
<ident> with "--force-in-body-from" option.
* The built-in fsmonitor refuses to work on a network mounted
repositories; a configuration knob for users to override this has
been introduced.
* The "scalar" addition from Microsoft is now part of the core Git
installation.
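With scalar now installed alongside git, typical usage looks like the following sketch (the URL and enlistment name are placeholders):

```shell
scalar clone https://example.com/repo my-enlistment   # partial clone with recommended defaults
scalar list                                           # enlistments registered for maintenance
scalar delete my-enlistment                           # unregister and remove the enlistment
```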
Performance, Internal Implementation, Development Support etc.
@@ -127,7 +130,7 @@ Performance, Internal Implementation, Development Support etc.
* The way "git multi-pack" uses parse-options API has been improved.
* A coccinelle rule (in contrib/) to encourage use of COPY_ARRAY
* A Coccinelle rule (in contrib/) to encourage use of COPY_ARRAY
macro has been improved.
* API tweak to make it easier to run fuzz testing on commit-graph parser.
@@ -172,6 +175,12 @@ Performance, Internal Implementation, Development Support etc.
* Share the text used to explain configuration variables used by "git
<subcmd>" in "git help <subcmd>" with the text from "git help config".
* "git mv A B" in a sparsely populated working tree can be asked to
move a path from a directory that is "in cone" to another directory
that is "out of cone". Handling of such a case has been improved.
* The chainlint script for our tests has been revamped.
Fixes since v2.37
-----------------
@@ -297,7 +306,7 @@ Fixes since v2.37
* "git fsck" reads mode from tree objects but canonicalizes the mode
before passing it to the logic to check object sanity, which has
hid broken tree objects from the checking logic. This has been
corrected, but to help exiting projects with broken tree objects
corrected, but to help existing projects with broken tree objects
that they cannot fix retroactively, the severity of anomalies this
code detects has been demoted to "info" for now.
@@ -306,12 +315,10 @@ Fixes since v2.37
* An earlier optimization discarded a tree-object buffer that is
still in use, which has been corrected.
(merge 1490d7d82d jk/is-promisor-object-keep-tree-in-use later to maint).
* Fix deadlocks between main Git process and subprocess spawned via
the pipe_command() API, that can kill "git add -p" that was
reimplemented in C recently.
(merge 716c1f649e jk/pipe-command-nonblock later to maint).
* The sequencer machinery translated messages left in the reflog by
mistake, which has been corrected.
@@ -319,20 +326,16 @@ Fixes since v2.37
* xcalloc(), imitating calloc(), takes "number of elements of the
array", and "size of a single element", in this order. A call that
does not follow this ordering has been corrected.
(merge c4bbd9bb8f sg/xcalloc-cocci-fix later to maint).
* The preload-index codepath made copies of pathspec to give to
multiple threads, which were left leaked.
(merge 23578904da ad/preload-plug-memleak later to maint).
* Update the version of Ubuntu used for GitHub Actions CI from 18.04
to 22.04.
(merge ef46584831 ds/github-actions-use-newer-ubuntu later to maint).
* The auto-stashed local changes created by "git merge --autostash"
was mixed into a conflicted state left in the working tree, which
has been corrected.
(merge d3a9295ada en/merge-unstash-only-on-clean-merge later to maint).
* Multi-pack index got corrupted when preferred pack changed from one
pack to another in a certain way, which has been corrected.


@@ -10,7 +10,7 @@ sub format_one {
$state = 0;
open I, '<', "$name.txt" or die "No such file $name.txt";
while (<I>) {
if (/^git[a-z0-9-]*\(([0-9])\)$/) {
if (/^(?:git|scalar)[a-z0-9-]*\(([0-9])\)$/) {
$mansection = $1;
next;
}


@@ -161,6 +161,6 @@ SEE ALSO
--------
linkgit:git-clone[1], linkgit:git-maintenance[1].
Scalar
GIT
---
Associated with the linkgit:git[1] suite
Part of the linkgit:git[1] suite


@@ -64,64 +64,3 @@ some "global" `git` options (e.g., `-c` and `-C`).
Because `scalar` is not invoked as a Git subcommand (like `git scalar`), it is
built and installed as its own executable in the `bin/` directory, alongside
`git`, `git-gui`, etc.
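That is, the binary is invoked directly, and accepts the git-style `-c`/`-C` options before its subcommand, as described above. A sketch (the enlistment path is a placeholder):

```shell
# Invoked as its own executable, not as `git scalar`:
scalar -C /path/to/enlistment -c core.fsmonitor=false reconfigure
```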
Roadmap
-------
NOTE: this section will be removed once the remaining tasks outlined in this
roadmap are complete.
Scalar is a large enough project that it is being upstreamed incrementally,
living in `contrib/` until it is feature-complete. So far, the following patch
series have been accepted:
- `scalar-the-beginning`: The initial patch series which sets up
`contrib/scalar/` and populates it with a minimal `scalar` command that
demonstrates the fundamental ideas.
- `scalar-c-and-C`: The `scalar` command learns about two options that can be
specified before the command, `-c <key>=<value>` and `-C <directory>`.
- `scalar-diagnose`: The `scalar` command is taught the `diagnose` subcommand.
- `scalar-generalize-diagnose`: Move the functionality of `scalar diagnose`
into `git diagnose` and `git bugreport --diagnose`.
- `scalar-add-fsmonitor`: Enable the built-in FSMonitor in Scalar
enlistments. At the end of this series, Scalar should be feature-complete
from the perspective of a user.
Roughly speaking (and subject to change), the following series are needed to
"finish" this initial version of Scalar:
- Move Scalar to toplevel: Move Scalar out of `contrib/` and into the root of
`git`. This includes a variety of related updates, including:
- building & installing Scalar in the Git root-level 'make [install]'.
- building & testing Scalar as part of CI.
- moving and expanding test coverage of Scalar (including perf tests).
- implementing 'scalar help'/'git help scalar' to display scalar
documentation.
Finally, there are two additional patch series that exist in Microsoft's fork of
Git, but there is no current plan to upstream them. There are some interesting
ideas there, but the implementation is too specific to Azure Repos and/or VFS
for Git to be of much help in general.
These still exist mainly because the GVFS protocol is what Azure Repos has
instead of partial clone, while Git is focused on improving partial clone:
- `scalar-with-gvfs`: The primary purpose of this patch series is to support
existing Scalar users whose repositories are hosted in Azure Repos (which does
not support Git's partial clones, but supports its predecessor, the GVFS
protocol, which is used by Scalar to emulate the partial clone).
Since the GVFS protocol will never be supported by core Git, this patch series
will remain in Microsoft's fork of Git.
- `run-scalar-functional-tests`: The Scalar project developed a quite
comprehensive set of integration tests (or, "Functional Tests"). They are the
sole remaining part of the original C#-based Scalar project, and this patch
adds a GitHub workflow that runs them all.
Since the tests partially depend on features that are only provided in the
`scalar-with-gvfs` patch series, this patch cannot be upstreamed.


@@ -605,7 +605,9 @@ FUZZ_OBJS =
FUZZ_PROGRAMS =
GIT_OBJS =
LIB_OBJS =
SCALAR_OBJS =
OBJECTS =
OTHER_PROGRAMS =
PROGRAM_OBJS =
PROGRAMS =
EXCLUDED_PROGRAMS =
@@ -819,10 +821,12 @@ BUILT_INS += git-switch$X
BUILT_INS += git-whatchanged$X
# what 'all' will build but not install in gitexecdir
OTHER_PROGRAMS = git$X
OTHER_PROGRAMS += git$X
OTHER_PROGRAMS += scalar$X
# what test wrappers are needed and 'install' will install, in bindir
BINDIR_PROGRAMS_NEED_X += git
BINDIR_PROGRAMS_NEED_X += scalar
BINDIR_PROGRAMS_NEED_X += git-receive-pack
BINDIR_PROGRAMS_NEED_X += git-shell
BINDIR_PROGRAMS_NEED_X += git-upload-archive
@@ -2220,7 +2224,7 @@ profile-fast: profile-clean
all:: $(ALL_COMMANDS_TO_INSTALL) $(SCRIPT_LIB) $(OTHER_PROGRAMS) GIT-BUILD-OPTIONS
ifneq (,$X)
$(QUIET_BUILT_IN)$(foreach p,$(patsubst %$X,%,$(filter %$X,$(ALL_COMMANDS_TO_INSTALL) git$X)), test -d '$p' -o '$p' -ef '$p$X' || $(RM) '$p';)
$(QUIET_BUILT_IN)$(foreach p,$(patsubst %$X,%,$(filter %$X,$(ALL_COMMANDS_TO_INSTALL) $(OTHER_PROGRAMS))), test -d '$p' -o '$p' -ef '$p$X' || $(RM) '$p';)
endif
all::
@@ -2543,7 +2547,12 @@ GIT_OBJS += git.o
.PHONY: git-objs
git-objs: $(GIT_OBJS)
SCALAR_OBJS += scalar.o
.PHONY: scalar-objs
scalar-objs: $(SCALAR_OBJS)
OBJECTS += $(GIT_OBJS)
OBJECTS += $(SCALAR_OBJS)
OBJECTS += $(PROGRAM_OBJS)
OBJECTS += $(TEST_OBJS)
OBJECTS += $(XDIFF_OBJS)
@@ -2554,10 +2563,6 @@ ifndef NO_CURL
OBJECTS += http.o http-walker.o remote-curl.o
endif
SCALAR_SOURCES := contrib/scalar/scalar.c
SCALAR_OBJECTS := $(SCALAR_SOURCES:c=o)
OBJECTS += $(SCALAR_OBJECTS)
.PHONY: objects
objects: $(OBJECTS)
@@ -2683,7 +2688,7 @@ $(REMOTE_CURL_PRIMARY): remote-curl.o http.o http-walker.o GIT-LDFLAGS $(GITLIBS)
$(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) $(filter %.o,$^) \
$(CURL_LIBCURL) $(EXPAT_LIBEXPAT) $(LIBS)
contrib/scalar/scalar$X: $(SCALAR_OBJECTS) GIT-LDFLAGS $(GITLIBS)
scalar$X: scalar.o GIT-LDFLAGS $(GITLIBS)
$(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) \
$(filter %.o,$^) $(LIBS)
@@ -2739,8 +2744,7 @@ XGETTEXT_FLAGS_SH = $(XGETTEXT_FLAGS) --language=Shell \
XGETTEXT_FLAGS_PERL = $(XGETTEXT_FLAGS) --language=Perl \
--keyword=__ --keyword=N__ --keyword="__n:1,2"
MSGMERGE_FLAGS = --add-location --backup=off --update
LOCALIZED_C = $(sort $(FOUND_C_SOURCES) $(FOUND_H_SOURCES) $(SCALAR_SOURCES) \
$(GENERATED_H))
LOCALIZED_C = $(sort $(FOUND_C_SOURCES) $(FOUND_H_SOURCES) $(GENERATED_H))
LOCALIZED_SH = $(sort $(SCRIPT_SH) git-sh-setup.sh)
LOCALIZED_PERL = $(sort $(SCRIPT_PERL))
@@ -3054,7 +3058,7 @@ bin-wrappers/%: wrap-for-bin.sh
$(call mkdir_p_parent_template)
$(QUIET_GEN)sed -e '1s|#!.*/sh|#!$(SHELL_PATH_SQ)|' \
-e 's|@@BUILD_DIR@@|$(shell pwd)|' \
-e 's|@@PROG@@|$(patsubst test-%,t/helper/test-%$(X),$(@F))$(patsubst git%,$(X),$(filter $(@F),$(BINDIR_PROGRAMS_NEED_X)))|' < $< > $@ && \
-e 's|@@PROG@@|$(patsubst test-%,t/helper/test-%,$(@F))$(if $(filter-out $(BINDIR_PROGRAMS_NO_X),$(@F)),$(X),)|' < $< > $@ && \
chmod +x $@
# GNU make supports exporting all variables by "export" without parameters.
@@ -3268,14 +3272,14 @@ ifndef NO_TCLTK
$(MAKE) -C git-gui gitexecdir='$(gitexec_instdir_SQ)' install
endif
ifneq (,$X)
$(foreach p,$(patsubst %$X,%,$(filter %$X,$(ALL_COMMANDS_TO_INSTALL) git$X)), test '$(DESTDIR_SQ)$(gitexec_instdir_SQ)/$p' -ef '$(DESTDIR_SQ)$(gitexec_instdir_SQ)/$p$X' || $(RM) '$(DESTDIR_SQ)$(gitexec_instdir_SQ)/$p';)
$(foreach p,$(patsubst %$X,%,$(filter %$X,$(ALL_COMMANDS_TO_INSTALL) $(OTHER_PROGRAMS))), test '$(DESTDIR_SQ)$(gitexec_instdir_SQ)/$p' -ef '$(DESTDIR_SQ)$(gitexec_instdir_SQ)/$p$X' || $(RM) '$(DESTDIR_SQ)$(gitexec_instdir_SQ)/$p';)
endif
bindir=$$(cd '$(DESTDIR_SQ)$(bindir_SQ)' && pwd) && \
execdir=$$(cd '$(DESTDIR_SQ)$(gitexec_instdir_SQ)' && pwd) && \
destdir_from_execdir_SQ=$$(echo '$(gitexecdir_relative_SQ)' | sed -e 's|[^/][^/]*|..|g') && \
{ test "$$bindir/" = "$$execdir/" || \
for p in git$X $(filter $(install_bindir_programs),$(ALL_PROGRAMS)); do \
for p in $(OTHER_PROGRAMS) $(filter $(install_bindir_programs),$(ALL_PROGRAMS)); do \
$(RM) "$$execdir/$$p" && \
test -n "$(INSTALL_SYMLINKS)" && \
ln -s "$$destdir_from_execdir_SQ/$(bindir_relative_SQ)/$$p" "$$execdir/$$p" || \
@@ -3450,7 +3454,7 @@ clean: profile-clean coverage-clean cocciclean
$(RM) git.res
$(RM) $(OBJECTS)
$(RM) $(LIB_FILE) $(XDIFF_LIB) $(REFTABLE_LIB) $(REFTABLE_TEST_LIB)
$(RM) $(ALL_PROGRAMS) $(SCRIPT_LIB) $(BUILT_INS) git$X
$(RM) $(ALL_PROGRAMS) $(SCRIPT_LIB) $(BUILT_INS) $(OTHER_PROGRAMS)
$(RM) $(TEST_PROGRAMS)
$(RM) $(FUZZ_PROGRAMS)
$(RM) $(SP_OBJ)
@@ -3501,6 +3505,7 @@ ALL_COMMANDS += git-citool
ALL_COMMANDS += git-gui
ALL_COMMANDS += gitk
ALL_COMMANDS += gitweb
ALL_COMMANDS += scalar
.PHONY: check-docs
check-docs::


@@ -261,3 +261,22 @@ void detach_advice(const char *new_name)
fprintf(stderr, fmt, new_name);
}
void advise_on_moving_dirty_path(struct string_list *pathspec_list)
{
struct string_list_item *item;
if (!pathspec_list->nr)
return;
fprintf(stderr, _("The following paths have been moved outside the\n"
"sparse-checkout definition but are not sparse due to local\n"
"modifications.\n"));
for_each_string_list_item(item, pathspec_list)
fprintf(stderr, "%s\n", item->string);
advise_if_enabled(ADVICE_UPDATE_SPARSE_PATH,
_("To correct the sparsity of these paths, do the following:\n"
"* Use \"git add --sparse <paths>\" to update the index\n"
"* Use \"git sparse-checkout reapply\" to apply the sparsity rules"));
}
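In practice, the advice printed by this function corresponds to a two-step fix such as (a sketch; the path is a placeholder):

```shell
git add --sparse moved/out-of-cone/path   # record the moved content in the index
git sparse-checkout reapply               # re-apply sparsity rules to the worktree
```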


@@ -74,5 +74,6 @@ void NORETURN die_conclude_merge(void);
void NORETURN die_ff_impossible(void);
void advise_on_updating_sparse_paths(struct string_list *pathspec_list);
void detach_advice(const char *new_name);
void advise_on_moving_dirty_path(struct string_list *pathspec_list);
#endif /* ADVICE_H */


@@ -73,7 +73,7 @@ static struct string_list option_optional_reference = STRING_LIST_INIT_NODUP;
static int option_dissociate;
static int max_jobs = -1;
static struct string_list option_recurse_submodules = STRING_LIST_INIT_NODUP;
static struct list_objects_filter_options filter_options;
static struct list_objects_filter_options filter_options = LIST_OBJECTS_FILTER_INIT;
static int option_filter_submodules = -1; /* unspecified */
static int config_filter_submodules = -1; /* unspecified */
static struct string_list server_options = STRING_LIST_INIT_NODUP;


@@ -62,6 +62,7 @@ int cmd_fetch_pack(int argc, const char **argv, const char *prefix)
packet_trace_identity("fetch-pack");
memset(&args, 0, sizeof(args));
list_objects_filter_init(&args.filter_options);
args.uploadpack = "git-upload-pack";
for (i = 1; i < argc && *argv[i] == '-'; i++) {


@@ -80,7 +80,7 @@ static int recurse_submodules_cli = RECURSE_SUBMODULES_DEFAULT;
static int recurse_submodules_default = RECURSE_SUBMODULES_ON_DEMAND;
static int shown_url = 0;
static struct refspec refmap = REFSPEC_INIT_FETCH;
static struct list_objects_filter_options filter_options;
static struct list_objects_filter_options filter_options = LIST_OBJECTS_FILTER_INIT;
static struct string_list server_options = STRING_LIST_INIT_DUP;
static struct string_list negotiation_tip = STRING_LIST_INIT_NODUP;
static int fetch_write_commit_graph = -1;


@@ -440,6 +440,8 @@ static const char *cmd_to_page(const char *git_cmd)
return git_cmd;
else if (is_git_command(git_cmd))
return xstrfmt("git-%s", git_cmd);
else if (!strcmp("scalar", git_cmd))
return xstrdup(git_cmd);
else
return xstrfmt("git%s", git_cmd);
}


@@ -257,7 +257,7 @@ static size_t expand_show_index(struct strbuf *sb, const char *start,
end = strchr(start + 1, ')');
if (!end)
die(_("bad ls-files format: element '%s'"
die(_("bad ls-files format: element '%s' "
"does not end in ')'"), start);
len = end - start + 1;


@@ -21,7 +21,6 @@ static const char * const builtin_mv_usage[] = {
};
enum update_mode {
BOTH = 0,
WORKING_DIRECTORY = (1 << 1),
INDEX = (1 << 2),
SPARSE = (1 << 3),
@@ -72,7 +71,7 @@ static const char **internal_prefix_pathspec(const char *prefix,
static const char *add_slash(const char *path)
{
size_t len = strlen(path);
if (path[len - 1] != '/') {
if (len && path[len - 1] != '/') {
char *with_slash = xmalloc(st_add(len, 2));
memcpy(with_slash, path, len);
with_slash[len++] = '/';
@@ -125,16 +124,15 @@ static int index_range_of_same_dir(const char *src, int length,
}
/*
* Check if an out-of-cone directory should be in the index. Imagine this case
* that all the files under a directory are marked with 'CE_SKIP_WORKTREE' bit
* and thus the directory is sparsified.
*
* Return 0 if such directory exist (i.e. with any of its contained files not
* marked with CE_SKIP_WORKTREE, the directory would be present in working tree).
* Return 1 otherwise.
* Given the path of a directory that does not exist on-disk, check whether the
* directory contains any entries in the index with the SKIP_WORKTREE flag
* enabled.
* Return 1 if such index entries exist.
* Return 0 otherwise.
*/
static int check_dir_in_index(const char *name)
static int empty_dir_has_sparse_contents(const char *name)
{
int ret = 0;
const char *with_slash = add_slash(name);
int length = strlen(with_slash);
@@ -144,14 +142,18 @@ static int check_dir_in_index(const char *name)
if (pos < 0) {
pos = -pos - 1;
if (pos >= the_index.cache_nr)
return 1;
goto free_return;
ce = active_cache[pos];
if (strncmp(with_slash, ce->name, length))
return 1;
goto free_return;
if (ce_skip_worktree(ce))
return 0;
ret = 1;
}
return 1;
free_return:
if (with_slash != name)
free((char *)with_slash);
return ret;
}
int cmd_mv(int argc, const char **argv, const char *prefix)
@@ -168,12 +170,17 @@ int cmd_mv(int argc, const char **argv, const char *prefix)
OPT_END(),
};
const char **source, **destination, **dest_path, **submodule_gitfile;
enum update_mode *modes;
const char *dst_w_slash;
const char **src_dir = NULL;
int src_dir_nr = 0, src_dir_alloc = 0;
struct strbuf a_src_dir = STRBUF_INIT;
enum update_mode *modes, dst_mode = 0;
struct stat st;
struct string_list src_for_dst = STRING_LIST_INIT_NODUP;
struct lock_file lock_file = LOCK_INIT;
struct cache_entry *ce;
struct string_list only_match_skip_worktree = STRING_LIST_INIT_NODUP;
struct string_list dirty_paths = STRING_LIST_INIT_NODUP;
git_config(git_default_config, NULL);
@@ -198,6 +205,7 @@ int cmd_mv(int argc, const char **argv, const char *prefix)
if (argc == 1 && is_directory(argv[0]) && !is_directory(argv[1]))
flags = 0;
dest_path = internal_prefix_pathspec(prefix, argv + argc, 1, flags);
dst_w_slash = add_slash(dest_path[0]);
submodule_gitfile = xcalloc(argc, sizeof(char *));
if (dest_path[0][0] == '\0')
@@ -205,12 +213,31 @@ int cmd_mv(int argc, const char **argv, const char *prefix)
destination = internal_prefix_pathspec(dest_path[0], argv, argc, DUP_BASENAME);
else if (!lstat(dest_path[0], &st) &&
S_ISDIR(st.st_mode)) {
dest_path[0] = add_slash(dest_path[0]);
destination = internal_prefix_pathspec(dest_path[0], argv, argc, DUP_BASENAME);
destination = internal_prefix_pathspec(dst_w_slash, argv, argc, DUP_BASENAME);
} else {
if (argc != 1)
if (!path_in_sparse_checkout(dst_w_slash, &the_index) &&
empty_dir_has_sparse_contents(dst_w_slash)) {
destination = internal_prefix_pathspec(dst_w_slash, argv, argc, DUP_BASENAME);
dst_mode = SKIP_WORKTREE_DIR;
} else if (argc != 1) {
die(_("destination '%s' is not a directory"), dest_path[0]);
destination = dest_path;
} else {
destination = dest_path;
/*
* <destination> is a file outside of sparse-checkout
* cone. Insist on cone mode here for backward
* compatibility. We don't want dst_mode to be assigned
* for a file when the repo is using no-cone mode (which
* is deprecated at this point) sparse-checkout. As
* SPARSE here is only considering cone-mode situation.
*/
if (!path_in_cone_mode_sparse_checkout(destination[0], &the_index))
dst_mode = SPARSE;
}
}
if (dst_w_slash != dest_path[0]) {
free((char *)dst_w_slash);
dst_w_slash = NULL;
}
/* Checking */
@@ -232,7 +259,7 @@ int cmd_mv(int argc, const char **argv, const char *prefix)
if (pos < 0) {
const char *src_w_slash = add_slash(src);
if (!path_in_sparse_checkout(src_w_slash, &the_index) &&
!check_dir_in_index(src)) {
empty_dir_has_sparse_contents(src)) {
modes[i] |= SKIP_WORKTREE_DIR;
goto dir_check;
}
@@ -290,6 +317,10 @@ dir_check:
/* last - first >= 1 */
modes[i] |= WORKING_DIRECTORY;
ALLOC_GROW(src_dir, src_dir_nr + 1, src_dir_alloc);
src_dir[src_dir_nr++] = src;
n = argc + last - first;
REALLOC_ARRAY(source, n);
REALLOC_ARRAY(destination, n);
@@ -346,6 +377,18 @@ dir_check:
goto act_on_entry;
}
if (ignore_sparse &&
(dst_mode & (SKIP_WORKTREE_DIR | SPARSE)) &&
index_entry_exists(&the_index, dst, strlen(dst))) {
bad = _("destination exists in the index");
if (force) {
if (verbose)
warning(_("overwriting '%s'"), dst);
bad = NULL;
} else {
goto act_on_entry;
}
}
/*
* We check if the paths are in the sparse-checkout
* definition as a very final check, since that
@@ -396,6 +439,7 @@ remove_entry:
const char *src = source[i], *dst = destination[i];
enum update_mode mode = modes[i];
int pos;
int sparse_and_dirty = 0;
struct checkout state = CHECKOUT_INIT;
state.istate = &the_index;
@@ -406,6 +450,7 @@ remove_entry:
if (show_only)
continue;
if (!(mode & (INDEX | SPARSE | SKIP_WORKTREE_DIR)) &&
!(dst_mode & (SKIP_WORKTREE_DIR | SPARSE)) &&
rename(src, dst) < 0) {
if (ignore_errors)
continue;
@@ -425,20 +470,81 @@ remove_entry:
pos = cache_name_pos(src, strlen(src));
assert(pos >= 0);
if (!(mode & SPARSE) && !lstat(src, &st))
sparse_and_dirty = ce_modified(active_cache[pos], &st, 0);
rename_cache_entry_at(pos, dst);
if ((mode & SPARSE) &&
(path_in_sparse_checkout(dst, &the_index))) {
int dst_pos;
if (ignore_sparse &&
core_apply_sparse_checkout &&
core_sparse_checkout_cone) {
/*
* NEEDSWORK: we are *not* paying attention to
* "out-to-out" move (<source> is out-of-cone and
* <destination> is out-of-cone) at this point. It
* should be added in a future patch.
*/
if ((mode & SPARSE) &&
path_in_sparse_checkout(dst, &the_index)) {
/* from out-of-cone to in-cone */
int dst_pos = cache_name_pos(dst, strlen(dst));
struct cache_entry *dst_ce = active_cache[dst_pos];
dst_pos = cache_name_pos(dst, strlen(dst));
active_cache[dst_pos]->ce_flags &= ~CE_SKIP_WORKTREE;
dst_ce->ce_flags &= ~CE_SKIP_WORKTREE;
if (checkout_entry(active_cache[dst_pos], &state, NULL, NULL))
die(_("cannot checkout %s"), active_cache[dst_pos]->name);
if (checkout_entry(dst_ce, &state, NULL, NULL))
die(_("cannot checkout %s"), dst_ce->name);
} else if ((dst_mode & (SKIP_WORKTREE_DIR | SPARSE)) &&
!(mode & SPARSE) &&
!path_in_sparse_checkout(dst, &the_index)) {
/* from in-cone to out-of-cone */
int dst_pos = cache_name_pos(dst, strlen(dst));
struct cache_entry *dst_ce = active_cache[dst_pos];
/*
* if src is clean, it will suffice to remove it
*/
if (!sparse_and_dirty) {
dst_ce->ce_flags |= CE_SKIP_WORKTREE;
unlink_or_warn(src);
} else {
/*
* if src is dirty, move it to the
* destination and create leading
* dirs if necessary
*/
char *dst_dup = xstrdup(dst);
string_list_append(&dirty_paths, dst);
safe_create_leading_directories(dst_dup);
FREE_AND_NULL(dst_dup);
rename(src, dst);
}
}
}
}
/*
* cleanup the empty src_dirs
*/
for (i = 0; i < src_dir_nr; i++) {
int dummy;
strbuf_addstr(&a_src_dir, src_dir[i]);
/*
* if entries under a_src_dir are all moved away,
* recursively remove a_src_dir to cleanup
*/
if (index_range_of_same_dir(a_src_dir.buf, a_src_dir.len,
&dummy, &dummy) < 1) {
remove_dir_recursively(&a_src_dir, 0);
}
strbuf_reset(&a_src_dir);
}
strbuf_release(&a_src_dir);
free(src_dir);
if (dirty_paths.nr)
advise_on_moving_dirty_path(&dirty_paths);
if (gitmodules_modified)
stage_updated_gitmodules(&the_index);
@@ -447,6 +553,7 @@ remove_entry:
die(_("Unable to write new index file"));
string_list_clear(&src_for_dst, 0);
string_list_clear(&dirty_paths, 0);
UNLEAK(source);
UNLEAK(dest_path);
free(submodule_gitfile);


@@ -1746,8 +1746,10 @@ static int module_clone(int argc, const char **argv, const char *prefix)
{
int dissociate = 0, quiet = 0, progress = 0, require_init = 0;
struct module_clone_data clone_data = MODULE_CLONE_DATA_INIT;
struct list_objects_filter_options filter_options = { 0 };
struct string_list reference = STRING_LIST_INIT_NODUP;
struct list_objects_filter_options filter_options =
LIST_OBJECTS_FILTER_INIT;
struct option module_clone_options[] = {
OPT_STRING(0, "prefix", &clone_data.prefix,
N_("path"),
@@ -2620,7 +2622,8 @@ static int module_update(int argc, const char **argv, const char *prefix)
struct pathspec pathspec = { 0 };
struct pathspec pathspec2 = { 0 };
struct update_data opt = UPDATE_DATA_INIT;
struct list_objects_filter_options filter_options = { 0 };
struct list_objects_filter_options filter_options =
LIST_OBJECTS_FILTER_INIT;
int ret;
struct option module_update_options[] = {
OPT__FORCE(&opt.force, N_("force checkout updates"), 0),


@@ -18,6 +18,7 @@ struct bundle_header {
{ \
.prerequisites = STRING_LIST_INIT_DUP, \
.references = STRING_LIST_INIT_DUP, \
.filter = LIST_OBJECTS_FILTER_INIT, \
}
void bundle_header_init(struct bundle_header *header);
void bundle_header_release(struct bundle_header *header);


@@ -235,3 +235,4 @@ gittutorial guide
gittutorial-2 guide
gitweb ancillaryinterrogators
gitworkflows guide
scalar mainporcelain


@@ -610,7 +610,7 @@ unset(CMAKE_REQUIRED_INCLUDES)
#programs
set(PROGRAMS_BUILT
git git-daemon git-http-backend git-sh-i18n--envsubst
git-shell)
git-shell scalar)
if(NOT CURL_FOUND)
list(APPEND excluded_progs git-http-fetch git-http-push)
@@ -757,6 +757,9 @@ target_link_libraries(git-sh-i18n--envsubst common-main)
add_executable(git-shell ${CMAKE_SOURCE_DIR}/shell.c)
target_link_libraries(git-shell common-main)
add_executable(scalar ${CMAKE_SOURCE_DIR}/scalar.c)
target_link_libraries(scalar common-main)
if(CURL_FOUND)
add_library(http_obj OBJECT ${CMAKE_SOURCE_DIR}/http.c)
@@ -903,7 +906,7 @@ list(TRANSFORM git_perl_scripts PREPEND "${CMAKE_BINARY_DIR}/")
#install
foreach(program ${PROGRAMS_BUILT})
if(program STREQUAL "git" OR program STREQUAL "git-shell")
if(program MATCHES "^(git|git-shell|scalar)$")
install(TARGETS ${program}
RUNTIME DESTINATION bin)
else()
@@ -977,7 +980,7 @@ endif()
#wrapper scripts
set(wrapper_scripts
git git-upload-pack git-receive-pack git-upload-archive git-shell git-remote-ext)
git git-upload-pack git-receive-pack git-upload-archive git-shell git-remote-ext scalar)
set(wrapper_test_scripts
test-fake-ssh test-tool)
@@ -1076,7 +1079,7 @@ if(NOT ${CMAKE_BINARY_DIR}/CMakeCache.txt STREQUAL ${CACHE_PATH})
"string(REPLACE \"\${GIT_BUILD_DIR_REPL}\" \"GIT_BUILD_DIR=\\\"$TEST_DIRECTORY/../${BUILD_DIR_RELATIVE}\\\"\" content \"\${content}\")\n"
"file(WRITE ${CMAKE_SOURCE_DIR}/t/test-lib.sh \${content})")
#misc copies
file(COPY ${CMAKE_SOURCE_DIR}/t/chainlint.sed DESTINATION ${CMAKE_BINARY_DIR}/t/)
file(COPY ${CMAKE_SOURCE_DIR}/t/chainlint.pl DESTINATION ${CMAKE_BINARY_DIR}/t/)
file(COPY ${CMAKE_SOURCE_DIR}/po/is.po DESTINATION ${CMAKE_BINARY_DIR}/po/)
file(COPY ${CMAKE_SOURCE_DIR}/mergetools/tkdiff DESTINATION ${CMAKE_BINARY_DIR}/mergetools/)
file(COPY ${CMAKE_SOURCE_DIR}/contrib/completion/git-prompt.sh DESTINATION ${CMAKE_BINARY_DIR}/contrib/completion/)


@@ -1,2 +0,0 @@
/*.exe
/scalar


@@ -1,35 +0,0 @@
# The default target of this Makefile is...
all::
# Import tree-wide shared Makefile behavior and libraries
include ../../shared.mak
include ../../config.mak.uname
-include ../../config.mak.autogen
-include ../../config.mak
TARGETS = scalar$(X) scalar.o
GITLIBS = ../../common-main.o ../../libgit.a ../../xdiff/lib.a
all:: scalar$(X) ../../bin-wrappers/scalar
$(GITLIBS):
$(QUIET_SUBDIR0)../.. $(QUIET_SUBDIR1) $(subst ../../,,$@)
$(TARGETS): $(GITLIBS) scalar.c
$(QUIET_SUBDIR0)../.. $(QUIET_SUBDIR1) $(patsubst %,contrib/scalar/%,$@)
clean:
$(RM) $(TARGETS) ../../bin-wrappers/scalar
../../bin-wrappers/scalar: ../../wrap-for-bin.sh Makefile
@mkdir -p ../../bin-wrappers
$(QUIET_GEN)sed -e '1s|#!.*/sh|#!$(SHELL_PATH_SQ)|' \
-e 's|@@BUILD_DIR@@|$(shell cd ../.. && pwd)|' \
-e 's|@@PROG@@|contrib/scalar/scalar$(X)|' < $< > $@ && \
chmod +x $@
test: all
$(MAKE) -C t
.PHONY: $(GITLIBS) all clean test FORCE


@@ -1,81 +0,0 @@
# Import tree-wide shared Makefile behavior and libraries
include ../../../shared.mak
# Run scalar tests
#
# Copyright (c) 2005,2021 Junio C Hamano, Johannes Schindelin
#
-include ../../../config.mak.autogen
-include ../../../config.mak
SHELL_PATH ?= $(SHELL)
PERL_PATH ?= /usr/bin/perl
RM ?= rm -f
PROVE ?= prove
DEFAULT_TEST_TARGET ?= test
TEST_LINT ?= test-lint
ifdef TEST_OUTPUT_DIRECTORY
TEST_RESULTS_DIRECTORY = $(TEST_OUTPUT_DIRECTORY)/test-results
else
TEST_RESULTS_DIRECTORY = ../../../t/test-results
endif
# Shell quote;
SHELL_PATH_SQ = $(subst ','\'',$(SHELL_PATH))
PERL_PATH_SQ = $(subst ','\'',$(PERL_PATH))
TEST_RESULTS_DIRECTORY_SQ = $(subst ','\'',$(TEST_RESULTS_DIRECTORY))
T = $(sort $(wildcard t[0-9][0-9][0-9][0-9]-*.sh))
all: $(DEFAULT_TEST_TARGET)
test: $(TEST_LINT)
$(MAKE) aggregate-results-and-cleanup
prove: $(TEST_LINT)
@echo "*** prove ***"; GIT_CONFIG=.git/config $(PROVE) --exec '$(SHELL_PATH_SQ)' $(GIT_PROVE_OPTS) $(T) :: $(GIT_TEST_OPTS)
$(MAKE) clean-except-prove-cache
$(T):
@echo "*** $@ ***"; GIT_CONFIG=.git/config '$(SHELL_PATH_SQ)' $@ $(GIT_TEST_OPTS)
clean-except-prove-cache:
$(RM) -r 'trash directory'.*
$(RM) -r valgrind/bin
clean: clean-except-prove-cache
$(RM) .prove
test-lint: test-lint-duplicates test-lint-executable test-lint-shell-syntax
test-lint-duplicates:
@dups=`echo $(T) | tr ' ' '\n' | sed 's/-.*//' | sort | uniq -d` && \
test -z "$$dups" || { \
echo >&2 "duplicate test numbers:" $$dups; exit 1; }
test-lint-executable:
@bad=`for i in $(T); do test -x "$$i" || echo $$i; done` && \
test -z "$$bad" || { \
echo >&2 "non-executable tests:" $$bad; exit 1; }
test-lint-shell-syntax:
@'$(PERL_PATH_SQ)' ../../../t/check-non-portable-shell.pl $(T)
aggregate-results-and-cleanup: $(T)
$(MAKE) aggregate-results
$(MAKE) clean
aggregate-results:
for f in '$(TEST_RESULTS_DIRECTORY_SQ)'/t*-*.counts; do \
echo "$$f"; \
done | '$(SHELL_PATH_SQ)' ../../../t/aggregate-results.sh
valgrind:
$(MAKE) GIT_TEST_OPTS="$(GIT_TEST_OPTS) --valgrind"
test-results:
mkdir -p test-results
.PHONY: $(T) aggregate-results clean valgrind


@@ -108,7 +108,7 @@ int gently_parse_list_objects_filter(
strbuf_addf(errbuf, _("invalid filter-spec '%s'"), arg);
memset(filter_options, 0, sizeof(*filter_options));
list_objects_filter_init(filter_options);
return 1;
}
@ -187,10 +187,8 @@ static int parse_combine_filter(
cleanup:
strbuf_list_free(subspecs);
if (result) {
if (result)
list_objects_filter_release(filter_options);
memset(filter_options, 0, sizeof(*filter_options));
}
return result;
}
@ -204,10 +202,10 @@ static int allow_unencoded(char ch)
static void filter_spec_append_urlencode(
struct list_objects_filter_options *filter, const char *raw)
{
struct strbuf buf = STRBUF_INIT;
strbuf_addstr_urlencode(&buf, raw, allow_unencoded);
trace_printf("Add to combine filter-spec: %s\n", buf.buf);
string_list_append_nodup(&filter->filter_spec, strbuf_detach(&buf, NULL));
size_t orig_len = filter->filter_spec.len;
strbuf_addstr_urlencode(&filter->filter_spec, raw, allow_unencoded);
trace_printf("Add to combine filter-spec: %s\n",
filter->filter_spec.buf + orig_len);
}
/*
@ -225,14 +223,13 @@ static void transform_to_combine_type(
struct list_objects_filter_options *sub_array =
xcalloc(initial_sub_alloc, sizeof(*sub_array));
sub_array[0] = *filter_options;
memset(filter_options, 0, sizeof(*filter_options));
string_list_init_dup(&filter_options->filter_spec);
list_objects_filter_init(filter_options);
filter_options->sub = sub_array;
filter_options->sub_alloc = initial_sub_alloc;
}
filter_options->sub_nr = 1;
filter_options->choice = LOFC_COMBINE;
string_list_append(&filter_options->filter_spec, "combine:");
strbuf_addstr(&filter_options->filter_spec, "combine:");
filter_spec_append_urlencode(
filter_options,
list_objects_filter_spec(&filter_options->sub[0]));
@ -240,7 +237,7 @@ static void transform_to_combine_type(
* We don't need the filter_spec strings for subfilter specs, only the
* top level.
*/
string_list_clear(&filter_options->sub[0].filter_spec, /*free_util=*/0);
strbuf_release(&filter_options->sub[0].filter_spec);
}
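The two functions above cooperate to build a `combine:` spec: each sub-filter's spec is URL-encoded (escaping every character the `allow_unencoded()` predicate rejects) and the encoded sub-specs are joined with `+`. A rough Python sketch of that encoding scheme; the allow-list here is an illustrative assumption, not git's exact character set:

```python
def urlencode(spec, allowed=lambda ch: ch.isalnum() or ch in "-_.:="):
    # Percent-encode every character the allow-list rejects.
    return "".join(ch if allowed(ch) else f"%{ord(ch):02x}" for ch in spec)

def combine_spec(subspecs):
    # "combine:" followed by url-encoded sub-specs joined with "+",
    # so "+" and "%" inside a sub-spec cannot be confused with separators.
    return "combine:" + "+".join(urlencode(s) for s in subspecs)

print(combine_spec(["blob:none", "tree:3"]))  # -> combine:blob:none+tree:3
```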
void list_objects_filter_die_if_populated(
@ -257,14 +254,11 @@ void parse_list_objects_filter(
struct strbuf errbuf = STRBUF_INIT;
int parse_error;
if (!filter_options->filter_spec.strdup_strings) {
if (filter_options->filter_spec.nr)
BUG("unexpected non-allocated string in filter_spec");
filter_options->filter_spec.strdup_strings = 1;
}
if (!filter_options->filter_spec.buf)
BUG("filter_options not properly initialized");
if (!filter_options->choice) {
string_list_append(&filter_options->filter_spec, arg);
strbuf_addstr(&filter_options->filter_spec, arg);
parse_error = gently_parse_list_objects_filter(
filter_options, arg, &errbuf);
@ -275,7 +269,7 @@ void parse_list_objects_filter(
*/
transform_to_combine_type(filter_options);
string_list_append(&filter_options->filter_spec, "+");
strbuf_addch(&filter_options->filter_spec, '+');
filter_spec_append_urlencode(filter_options, arg);
ALLOC_GROW_BY(filter_options->sub, filter_options->sub_nr, 1,
filter_options->sub_alloc);
@ -306,31 +300,18 @@ int opt_parse_list_objects_filter(const struct option *opt,
const char *list_objects_filter_spec(struct list_objects_filter_options *filter)
{
if (!filter->filter_spec.nr)
if (!filter->filter_spec.len)
BUG("no filter_spec available for this filter");
if (filter->filter_spec.nr != 1) {
struct strbuf concatted = STRBUF_INIT;
strbuf_add_separated_string_list(
&concatted, "", &filter->filter_spec);
string_list_clear(&filter->filter_spec, /*free_util=*/0);
string_list_append_nodup(
&filter->filter_spec, strbuf_detach(&concatted, NULL));
}
return filter->filter_spec.items[0].string;
return filter->filter_spec.buf;
}
const char *expand_list_objects_filter_spec(
struct list_objects_filter_options *filter)
{
if (filter->choice == LOFC_BLOB_LIMIT) {
struct strbuf expanded_spec = STRBUF_INIT;
strbuf_addf(&expanded_spec, "blob:limit=%lu",
strbuf_release(&filter->filter_spec);
strbuf_addf(&filter->filter_spec, "blob:limit=%lu",
filter->blob_limit_value);
string_list_clear(&filter->filter_spec, /*free_util=*/0);
string_list_append_nodup(
&filter->filter_spec,
strbuf_detach(&expanded_spec, NULL));
}
return list_objects_filter_spec(filter);
@ -343,12 +324,12 @@ void list_objects_filter_release(
if (!filter_options)
return;
string_list_clear(&filter_options->filter_spec, /*free_util=*/0);
strbuf_release(&filter_options->filter_spec);
free(filter_options->sparse_oid_name);
for (sub = 0; sub < filter_options->sub_nr; sub++)
list_objects_filter_release(&filter_options->sub[sub]);
free(filter_options->sub);
memset(filter_options, 0, sizeof(*filter_options));
list_objects_filter_init(filter_options);
}
void partial_clone_register(
@ -401,11 +382,11 @@ void partial_clone_get_default_filter_spec(
/*
* Parse default value, but silently ignore it if it is invalid.
*/
if (!promisor)
if (!promisor || !promisor->partial_clone_filter)
return;
string_list_append(&filter_options->filter_spec,
promisor->partial_clone_filter);
strbuf_addstr(&filter_options->filter_spec,
promisor->partial_clone_filter);
gently_parse_list_objects_filter(filter_options,
promisor->partial_clone_filter,
&errbuf);
@ -417,17 +398,21 @@ void list_objects_filter_copy(
const struct list_objects_filter_options *src)
{
int i;
struct string_list_item *item;
/* Copy everything. We will overwrite the pointers shortly. */
memcpy(dest, src, sizeof(struct list_objects_filter_options));
string_list_init_dup(&dest->filter_spec);
for_each_string_list_item(item, &src->filter_spec)
string_list_append(&dest->filter_spec, item->string);
strbuf_init(&dest->filter_spec, 0);
strbuf_addbuf(&dest->filter_spec, &src->filter_spec);
dest->sparse_oid_name = xstrdup_or_null(src->sparse_oid_name);
ALLOC_ARRAY(dest->sub, dest->sub_alloc);
for (i = 0; i < src->sub_nr; i++)
list_objects_filter_copy(&dest->sub[i], &src->sub[i]);
}
void list_objects_filter_init(struct list_objects_filter_options *filter_options)
{
struct list_objects_filter_options blank = LIST_OBJECTS_FILTER_INIT;
memcpy(filter_options, &blank, sizeof(*filter_options));
}


@ -35,7 +35,7 @@ struct list_objects_filter_options {
* To get the raw filter spec given by the user, use the result of
* list_objects_filter_spec().
*/
struct string_list filter_spec;
struct strbuf filter_spec;
/*
* 'choice' is determined by parsing the filter-spec. This indicates
@ -69,6 +69,9 @@ struct list_objects_filter_options {
*/
};
#define LIST_OBJECTS_FILTER_INIT { .filter_spec = STRBUF_INIT }
void list_objects_filter_init(struct list_objects_filter_options *filter_options);
/*
* Parse value of the argument to the "filter" keyword.
* On the command line this looks like:


@ -1907,6 +1907,7 @@ void repo_init_revisions(struct repository *r,
}
init_display_notes(&revs->notes_opt);
list_objects_filter_init(&revs->filter);
}
static void add_pending_commit_list(struct rev_info *revs,


@ -819,6 +819,25 @@ static int cmd_delete(int argc, const char **argv)
return res;
}
static int cmd_help(int argc, const char **argv)
{
struct option options[] = {
OPT_END(),
};
const char * const usage[] = {
"scalar help",
NULL
};
argc = parse_options(argc, argv, NULL, options,
usage, 0);
if (argc != 0)
usage_with_options(usage, options);
return run_git("help", "scalar", NULL);
}
static int cmd_version(int argc, const char **argv)
{
int verbose = 0, build_options = 0;
@ -858,6 +877,7 @@ static struct {
{ "run", cmd_run },
{ "reconfigure", cmd_reconfigure },
{ "delete", cmd_delete },
{ "help", cmd_help },
{ "version", cmd_version },
{ "diagnose", cmd_diagnose },
{ NULL, NULL},


@ -36,14 +36,21 @@ CHAINLINTTMP_SQ = $(subst ','\'',$(CHAINLINTTMP))
T = $(sort $(wildcard t[0-9][0-9][0-9][0-9]-*.sh))
THELPERS = $(sort $(filter-out $(T),$(wildcard *.sh)))
TLIBS = $(sort $(wildcard lib-*.sh)) annotate-tests.sh
TPERF = $(sort $(wildcard perf/p[0-9][0-9][0-9][0-9]-*.sh))
TINTEROP = $(sort $(wildcard interop/i[0-9][0-9][0-9][0-9]-*.sh))
CHAINLINTTESTS = $(sort $(patsubst chainlint/%.test,%,$(wildcard chainlint/*.test)))
CHAINLINT = sed -f chainlint.sed
CHAINLINT = '$(PERL_PATH_SQ)' chainlint.pl
# `test-chainlint` (which is a dependency of `test-lint`, `test` and `prove`)
# checks all tests in all scripts via a single invocation, so tell individual
# scripts not to "chainlint" themselves
CHAINLINTSUPPRESS = GIT_TEST_CHAIN_LINT=0 && export GIT_TEST_CHAIN_LINT &&
all: $(DEFAULT_TEST_TARGET)
test: pre-clean check-chainlint $(TEST_LINT)
$(MAKE) aggregate-results-and-cleanup
$(CHAINLINTSUPPRESS) $(MAKE) aggregate-results-and-cleanup
failed:
@failed=$$(cd '$(TEST_RESULTS_DIRECTORY_SQ)' && \
@ -52,7 +59,7 @@ failed:
test -z "$$failed" || $(MAKE) $$failed
prove: pre-clean check-chainlint $(TEST_LINT)
@echo "*** prove ***"; $(PROVE) --exec '$(TEST_SHELL_PATH_SQ)' $(GIT_PROVE_OPTS) $(T) :: $(GIT_TEST_OPTS)
@echo "*** prove ***"; $(CHAINLINTSUPPRESS) $(PROVE) --exec '$(TEST_SHELL_PATH_SQ)' $(GIT_PROVE_OPTS) $(T) :: $(GIT_TEST_OPTS)
$(MAKE) clean-except-prove-cache
$(T):
@ -73,13 +80,35 @@ clean-chainlint:
check-chainlint:
@mkdir -p '$(CHAINLINTTMP_SQ)' && \
sed -e '/^# LINT: /d' $(patsubst %,chainlint/%.test,$(CHAINLINTTESTS)) >'$(CHAINLINTTMP_SQ)'/tests && \
sed -e '/^[ ]*$$/d' $(patsubst %,chainlint/%.expect,$(CHAINLINTTESTS)) >'$(CHAINLINTTMP_SQ)'/expect && \
$(CHAINLINT) '$(CHAINLINTTMP_SQ)'/tests | grep -v '^[ ]*$$' >'$(CHAINLINTTMP_SQ)'/actual && \
diff -u '$(CHAINLINTTMP_SQ)'/expect '$(CHAINLINTTMP_SQ)'/actual
for i in $(CHAINLINTTESTS); do \
echo "test_expect_success '$$i' '" && \
sed -e '/^# LINT: /d' chainlint/$$i.test && \
echo "'"; \
done >'$(CHAINLINTTMP_SQ)'/tests && \
{ \
echo "# chainlint: $(CHAINLINTTMP_SQ)/tests" && \
for i in $(CHAINLINTTESTS); do \
echo "# chainlint: $$i" && \
sed -e '/^[ ]*$$/d' chainlint/$$i.expect; \
done \
} >'$(CHAINLINTTMP_SQ)'/expect && \
$(CHAINLINT) --emit-all '$(CHAINLINTTMP_SQ)'/tests | \
grep -v '^[ ]*$$' >'$(CHAINLINTTMP_SQ)'/actual && \
if test -f ../GIT-BUILD-OPTIONS; then \
. ../GIT-BUILD-OPTIONS; \
fi && \
if test -x ../git$$X; then \
DIFFW="../git$$X --no-pager diff -w --no-index"; \
else \
DIFFW="diff -w -u"; \
fi && \
$$DIFFW '$(CHAINLINTTMP_SQ)'/expect '$(CHAINLINTTMP_SQ)'/actual
test-lint: test-lint-duplicates test-lint-executable test-lint-shell-syntax \
test-lint-filenames
ifneq ($(GIT_TEST_CHAIN_LINT),0)
test-lint: test-chainlint
endif
test-lint-duplicates:
@dups=`echo $(T) $(TPERF) | tr ' ' '\n' | sed 's/-.*//' | sort | uniq -d` && \
@ -102,6 +131,9 @@ test-lint-filenames:
test -z "$$bad" || { \
echo >&2 "non-portable file name(s): $$bad"; exit 1; }
test-chainlint:
@$(CHAINLINT) $(T) $(TLIBS) $(TPERF) $(TINTEROP)
aggregate-results-and-cleanup: $(T)
$(MAKE) aggregate-results
$(MAKE) clean
@ -117,4 +149,5 @@ valgrind:
perf:
$(MAKE) -C perf/ all
.PHONY: pre-clean $(T) aggregate-results clean valgrind perf check-chainlint clean-chainlint
.PHONY: pre-clean $(T) aggregate-results clean valgrind perf \
check-chainlint clean-chainlint test-chainlint


@ -196,11 +196,6 @@ appropriately before running "make". Short options can be bundled, i.e.
this feature by setting the GIT_TEST_CHAIN_LINT environment
variable to "1" or "0", respectively.
A few test scripts disable some of the more advanced
chain-linting detection in the name of efficiency. You can
override this by setting the GIT_TEST_CHAIN_LINT_HARDER
environment variable to "1".
--stress::
Run the test script repeatedly in multiple parallel jobs until
one of them fails. Useful for reproducing rare failures in

t/chainlint.pl Executable file

@ -0,0 +1,770 @@
#!/usr/bin/env perl
#
# Copyright (c) 2021-2022 Eric Sunshine <sunshine@sunshineco.com>
#
# This tool scans shell scripts for test definitions and checks those tests for
# problems, such as broken &&-chains, which might hide bugs in the tests
# themselves or in behaviors being exercised by the tests.
#
# Input arguments are pathnames of shell scripts containing test definitions,
# or globs referencing a collection of scripts. For each problem discovered,
# the pathname of the script containing the test is printed along with the test
# name and the test body with a `?!FOO?!` annotation at the location of each
# detected problem, where "FOO" is a tag such as "AMP" which indicates a broken
# &&-chain. Returns zero if no problems are discovered, otherwise non-zero.
use warnings;
use strict;
use Config;
use File::Glob;
use Getopt::Long;
my $jobs = -1;
my $show_stats;
my $emit_all;
# Lexer tokenizes POSIX shell scripts. It is roughly modeled after section 2.3
# "Token Recognition" of POSIX chapter 2 "Shell Command Language". Although
# similar to lexical analyzers for other languages, this one differs in a few
# substantial ways due to quirks of the shell command language.
#
# For instance, in many languages, newline is just whitespace like space or
# TAB, but in shell a newline is a command separator, thus a distinct lexical
# token. A newline is significant and returned as a distinct token even at the
# end of a shell comment.
#
# In other languages, `1+2` would typically be scanned as three tokens
# (`1`, `+`, and `2`), but in shell it is a single token. However, the similar
# `1 + 2`, which embeds whitespace, is scanned as three tokens in shell, as well.
# In shell, several characters with special meaning lose that meaning when not
# surrounded by whitespace. For instance, the negation operator `!` is special
# when standing alone surrounded by whitespace; whereas in `foo!uucp` it is
# just a plain character in the longer token "foo!uucp". In many other
# languages, `"string"/foo:'string'` might be scanned as five tokens ("string",
# `/`, `foo`, `:`, and 'string'), but in shell, it is just a single token.
#
# The lexical analyzer for the shell command language is also somewhat unusual
# in that it recursively invokes the parser to handle the body of `$(...)`
# expressions which can contain arbitrary shell code. Such expressions may be
# encountered both inside and outside of double-quoted strings.
#
# The lexical analyzer is responsible for consuming shell here-doc bodies which
# extend from the line following a `<<TAG` operator until a line consisting
# solely of `TAG`. Here-doc consumption begins when a newline is encountered.
# It is legal for multiple here-doc `<<TAG` operators to be present on a single
# line, in which case their bodies must be present one following the next, and
# are consumed in the (left-to-right) order the `<<TAG` operators appear on the
# line. A special complication is that the bodies of all here-docs must be
# consumed when the newline is encountered even if the parse context depth has
# changed. For instance, in `cat <<A && x=$(cat <<B &&\n`, bodies of here-docs
# "A" and "B" must be consumed even though "A" was introduced outside the
# recursive parse context in which "B" was introduced and in which the newline
# is encountered.
package Lexer;
sub new {
my ($class, $parser, $s) = @_;
bless {
parser => $parser,
buff => $s,
heretags => []
} => $class;
}
sub scan_heredoc_tag {
my $self = shift @_;
${$self->{buff}} =~ /\G(-?)/gc;
my $indented = $1;
my $tag = $self->scan_token();
$tag =~ s/['"\\]//g;
push(@{$self->{heretags}}, $indented ? "\t$tag" : "$tag");
return "<<$indented$tag";
}
sub scan_op {
my ($self, $c) = @_;
my $b = $self->{buff};
return $c unless $$b =~ /\G(.)/sgc;
my $cc = $c . $1;
return scan_heredoc_tag($self) if $cc eq '<<';
return $cc if $cc =~ /^(?:&&|\|\||>>|;;|<&|>&|<>|>\|)$/;
pos($$b)--;
return $c;
}
sub scan_sqstring {
my $self = shift @_;
${$self->{buff}} =~ /\G([^']*'|.*\z)/sgc;
return "'" . $1;
}
sub scan_dqstring {
my $self = shift @_;
my $b = $self->{buff};
my $s = '"';
while (1) {
# slurp up non-special characters
$s .= $1 if $$b =~ /\G([^"\$\\]+)/gc;
# handle special characters
last unless $$b =~ /\G(.)/sgc;
my $c = $1;
$s .= '"', last if $c eq '"';
$s .= '$' . $self->scan_dollar(), next if $c eq '$';
if ($c eq '\\') {
$s .= '\\', last unless $$b =~ /\G(.)/sgc;
$c = $1;
next if $c eq "\n"; # line splice
# backslash escapes only $, `, ", \ in dq-string
$s .= '\\' unless $c =~ /^[\$`"\\]$/;
$s .= $c;
next;
}
die("internal error scanning dq-string '$c'\n");
}
return $s;
}
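scan_dqstring() implements the POSIX rule that inside double quotes a backslash is special only before `$`, backtick, `"`, and `\` (and a backslash-newline splices lines); before any other character the backslash is kept literally. A small Python sketch of just that folding rule, operating on the raw characters of a dq-string body:

```python
def dq_fold_escapes(s):
    """Fold backslash escapes the way POSIX double quotes do: drop the
    backslash before $, `, ", \\ and a spliced newline; keep it otherwise."""
    out, i = [], 0
    while i < len(s):
        ch = s[i]
        if ch == '\\' and i + 1 < len(s):
            nxt = s[i + 1]
            if nxt == '\n':           # line splice: both characters vanish
                i += 2
                continue
            if nxt not in '$`"\\':    # backslash is literal here
                out.append('\\')
            out.append(nxt)
            i += 2
        else:
            out.append(ch)
            i += 1
    return ''.join(out)

print(dq_fold_escapes('a\\$b\\nc'))  # backslash dropped before $, kept before n
```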
sub scan_balanced {
my ($self, $c1, $c2) = @_;
my $b = $self->{buff};
my $depth = 1;
my $s = $c1;
while ($$b =~ /\G([^\Q$c1$c2\E]*(?:[\Q$c1$c2\E]|\z))/gc) {
$s .= $1;
$depth++, next if $s =~ /\Q$c1\E$/;
$depth--;
last if $depth == 0;
}
return $s;
}
sub scan_subst {
my $self = shift @_;
my @tokens = $self->{parser}->parse(qr/^\)$/);
$self->{parser}->next_token(); # closing ")"
return @tokens;
}
sub scan_dollar {
my $self = shift @_;
my $b = $self->{buff};
return $self->scan_balanced('(', ')') if $$b =~ /\G\((?=\()/gc; # $((...))
return '(' . join(' ', $self->scan_subst()) . ')' if $$b =~ /\G\(/gc; # $(...)
return $self->scan_balanced('{', '}') if $$b =~ /\G\{/gc; # ${...}
return $1 if $$b =~ /\G(\w+)/gc; # $var
return $1 if $$b =~ /\G([@*#?$!0-9-])/gc; # $*, $1, $$, etc.
return '';
}
sub swallow_heredocs {
my $self = shift @_;
my $b = $self->{buff};
my $tags = $self->{heretags};
while (my $tag = shift @$tags) {
my $indent = $tag =~ s/^\t// ? '\\s*' : '';
$$b =~ /(?:\G|\n)$indent\Q$tag\E(?:\n|\z)/gc;
}
}
sub scan_token {
my $self = shift @_;
my $b = $self->{buff};
my $token = '';
RESTART:
$$b =~ /\G[ \t]+/gc; # skip whitespace (but not newline)
return "\n" if $$b =~ /\G#[^\n]*(?:\n|\z)/gc; # comment
while (1) {
# slurp up non-special characters
$token .= $1 if $$b =~ /\G([^\\;&|<>(){}'"\$\s]+)/gc;
# handle special characters
last unless $$b =~ /\G(.)/sgc;
my $c = $1;
last if $c =~ /^[ \t]$/; # whitespace ends token
pos($$b)--, last if length($token) && $c =~ /^[;&|<>(){}\n]$/;
$token .= $self->scan_sqstring(), next if $c eq "'";
$token .= $self->scan_dqstring(), next if $c eq '"';
$token .= $c . $self->scan_dollar(), next if $c eq '$';
$self->swallow_heredocs(), $token = $c, last if $c eq "\n";
$token = $self->scan_op($c), last if $c =~ /^[;&|<>]$/;
$token = $c, last if $c =~ /^[(){}]$/;
if ($c eq '\\') {
$token .= '\\', last unless $$b =~ /\G(.)/sgc;
$c = $1;
next if $c eq "\n" && length($token); # line splice
goto RESTART if $c eq "\n"; # line splice
$token .= '\\' . $c;
next;
}
die("internal error scanning character '$c'\n");
}
return length($token) ? $token : undef;
}
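The lexer above leans heavily on Perl's `\G` anchor with `/gc` matches, which resume scanning exactly where the previous match on the same string left off. Python's `re` can approximate this with `pattern.match(s, pos)` and an explicitly advanced position, as in this rough sketch of the scanning technique (a toy token set, not chainlint's grammar):

```python
import re

# Whitespace (no group), shell connectives, or a run of plain word characters.
TOKEN = re.compile(r'[ \t]+|(?P<op>&&|\|\||[;&|])|(?P<word>[^\s;&|]+)')

def scan(s):
    tokens, pos = [], 0
    while pos < len(s):
        m = TOKEN.match(s, pos)   # like Perl's \G...\/gc: anchored at pos
        if not m:
            break
        pos = m.end()             # advance the scan position
        if m.lastgroup:           # whitespace matched no named group: skip it
            tokens.append(m.group())
    return tokens

print(scan("cat foo && grep bar | wc -l"))
```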
# ShellParser parses POSIX shell scripts (with minor extensions for Bash). It
# is a recursive descent parser very roughly modeled after section 2.10 "Shell
# Grammar" of POSIX chapter 2 "Shell Command Language".
package ShellParser;
sub new {
my ($class, $s) = @_;
my $self = bless {
buff => [],
stop => [],
output => []
} => $class;
$self->{lexer} = Lexer->new($self, $s);
return $self;
}
sub next_token {
my $self = shift @_;
return pop(@{$self->{buff}}) if @{$self->{buff}};
return $self->{lexer}->scan_token();
}
sub untoken {
my $self = shift @_;
push(@{$self->{buff}}, @_);
}
sub peek {
my $self = shift @_;
my $token = $self->next_token();
return undef unless defined($token);
$self->untoken($token);
return $token;
}
sub stop_at {
my ($self, $token) = @_;
return 1 unless defined($token);
my $stop = ${$self->{stop}}[-1] if @{$self->{stop}};
return defined($stop) && $token =~ $stop;
}
sub expect {
my ($self, $expect) = @_;
my $token = $self->next_token();
return $token if defined($token) && $token eq $expect;
push(@{$self->{output}}, "?!ERR?! expected '$expect' but found '" . (defined($token) ? $token : "<end-of-input>") . "'\n");
$self->untoken($token) if defined($token);
return ();
}
sub optional_newlines {
my $self = shift @_;
my @tokens;
while (my $token = $self->peek()) {
last unless $token eq "\n";
push(@tokens, $self->next_token());
}
return @tokens;
}
sub parse_group {
my $self = shift @_;
return ($self->parse(qr/^}$/),
$self->expect('}'));
}
sub parse_subshell {
my $self = shift @_;
return ($self->parse(qr/^\)$/),
$self->expect(')'));
}
sub parse_case_pattern {
my $self = shift @_;
my @tokens;
while (defined(my $token = $self->next_token())) {
push(@tokens, $token);
last if $token eq ')';
}
return @tokens;
}
sub parse_case {
my $self = shift @_;
my @tokens;
push(@tokens,
$self->next_token(), # subject
$self->optional_newlines(),
$self->expect('in'),
$self->optional_newlines());
while (1) {
my $token = $self->peek();
last unless defined($token) && $token ne 'esac';
push(@tokens,
$self->parse_case_pattern(),
$self->optional_newlines(),
$self->parse(qr/^(?:;;|esac)$/)); # item body
$token = $self->peek();
last unless defined($token) && $token ne 'esac';
push(@tokens,
$self->expect(';;'),
$self->optional_newlines());
}
push(@tokens, $self->expect('esac'));
return @tokens;
}
sub parse_for {
my $self = shift @_;
my @tokens;
push(@tokens,
$self->next_token(), # variable
$self->optional_newlines());
my $token = $self->peek();
if (defined($token) && $token eq 'in') {
push(@tokens,
$self->expect('in'),
$self->optional_newlines());
}
push(@tokens,
$self->parse(qr/^do$/), # items
$self->expect('do'),
$self->optional_newlines(),
$self->parse_loop_body(),
$self->expect('done'));
return @tokens;
}
sub parse_if {
my $self = shift @_;
my @tokens;
while (1) {
push(@tokens,
$self->parse(qr/^then$/), # if/elif condition
$self->expect('then'),
$self->optional_newlines(),
$self->parse(qr/^(?:elif|else|fi)$/)); # if/elif body
my $token = $self->peek();
last unless defined($token) && $token eq 'elif';
push(@tokens, $self->expect('elif'));
}
my $token = $self->peek();
if (defined($token) && $token eq 'else') {
push(@tokens,
$self->expect('else'),
$self->optional_newlines(),
$self->parse(qr/^fi$/)); # else body
}
push(@tokens, $self->expect('fi'));
return @tokens;
}
sub parse_loop_body {
my $self = shift @_;
return $self->parse(qr/^done$/);
}
sub parse_loop {
my $self = shift @_;
return ($self->parse(qr/^do$/), # condition
$self->expect('do'),
$self->optional_newlines(),
$self->parse_loop_body(),
$self->expect('done'));
}
sub parse_func {
my $self = shift @_;
return ($self->expect('('),
$self->expect(')'),
$self->optional_newlines(),
$self->parse_cmd()); # body
}
sub parse_bash_array_assignment {
my $self = shift @_;
my @tokens = $self->expect('(');
while (defined(my $token = $self->next_token())) {
push(@tokens, $token);
last if $token eq ')';
}
return @tokens;
}
my %compound = (
'{' => \&parse_group,
'(' => \&parse_subshell,
'case' => \&parse_case,
'for' => \&parse_for,
'if' => \&parse_if,
'until' => \&parse_loop,
'while' => \&parse_loop);
sub parse_cmd {
my $self = shift @_;
my $cmd = $self->next_token();
return () unless defined($cmd);
return $cmd if $cmd eq "\n";
my $token;
my @tokens = $cmd;
if ($cmd eq '!') {
push(@tokens, $self->parse_cmd());
return @tokens;
} elsif (my $f = $compound{$cmd}) {
push(@tokens, $self->$f());
} elsif (defined($token = $self->peek()) && $token eq '(') {
if ($cmd !~ /\w=$/) {
push(@tokens, $self->parse_func());
return @tokens;
}
$tokens[-1] .= join(' ', $self->parse_bash_array_assignment());
}
while (defined(my $token = $self->next_token())) {
$self->untoken($token), last if $self->stop_at($token);
push(@tokens, $token);
last if $token =~ /^(?:[;&\n|]|&&|\|\|)$/;
}
push(@tokens, $self->next_token()) if $tokens[-1] ne "\n" && defined($token = $self->peek()) && $token eq "\n";
return @tokens;
}
sub accumulate {
my ($self, $tokens, $cmd) = @_;
push(@$tokens, @$cmd);
}
sub parse {
my ($self, $stop) = @_;
push(@{$self->{stop}}, $stop);
goto DONE if $self->stop_at($self->peek());
my @tokens;
while (my @cmd = $self->parse_cmd()) {
$self->accumulate(\@tokens, \@cmd);
last if $self->stop_at($self->peek());
}
DONE:
pop(@{$self->{stop}});
return @tokens;
}
# TestParser is a subclass of ShellParser which, beyond parsing shell script
# code, is also imbued with semantic knowledge of test construction, and checks
# tests for common problems (such as broken &&-chains) which might hide bugs in
# the tests themselves or in behaviors being exercised by the tests. As such,
# TestParser is only called upon to parse test bodies, not the top-level
# scripts in which the tests are defined.
package TestParser;
use base 'ShellParser';
sub find_non_nl {
my $tokens = shift @_;
my $n = shift @_;
$n = $#$tokens if !defined($n);
$n-- while $n >= 0 && $$tokens[$n] eq "\n";
return $n;
}
sub ends_with {
my ($tokens, $needles) = @_;
my $n = find_non_nl($tokens);
for my $needle (reverse(@$needles)) {
return undef if $n < 0;
$n = find_non_nl($tokens, $n), next if $needle eq "\n";
return undef if $$tokens[$n] !~ $needle;
$n--;
}
return 1;
}
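ends_with() walks the token list backwards, skipping newline tokens, and checks each needle regex right to left. A compact Python sketch of the same idea, simplified so that newlines are always skipped (chainlint only skips them where a `"\n"` needle allows it); the patterns below are illustrative:

```python
import re

def ends_with(tokens, needles):
    """True if `tokens`, ignoring interleaved newline tokens, ends with
    the given needle regexes, matched right to left."""
    n = len(tokens) - 1
    for needle in reversed(needles):
        while n >= 0 and tokens[n] == "\n":
            n -= 1                      # skip newline tokens
        if n < 0 or not re.search(needle, tokens[n]):
            return False
        n -= 1
    return True

# A loop body ending in `|| echo <something>` counts as signalling failure.
print(ends_with(["cmd", "||", "\n", "echo", "fail"],
                [r"^\|\|$", r"^echo$", r"^.+$"]))
```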
sub match_ending {
my ($tokens, $endings) = @_;
for my $needles (@$endings) {
next if @$tokens < scalar(grep {$_ ne "\n"} @$needles);
return 1 if ends_with($tokens, $needles);
}
return undef;
}
sub parse_loop_body {
my $self = shift @_;
my @tokens = $self->SUPER::parse_loop_body(@_);
# did loop signal failure via "|| return" or "|| exit"?
return @tokens if !@tokens || grep(/^(?:return|exit|\$\?)$/, @tokens);
# did loop upstream of a pipe signal failure via "|| echo 'impossible
# text'" as the final command in the loop body?
return @tokens if ends_with(\@tokens, [qr/^\|\|$/, "\n", qr/^echo$/, qr/^.+$/]);
# flag missing "return/exit" handling explicit failure in loop body
my $n = find_non_nl(\@tokens);
splice(@tokens, $n + 1, 0, '?!LOOP?!');
return @tokens;
}
my @safe_endings = (
[qr/^(?:&&|\|\||\||&)$/],
[qr/^(?:exit|return)$/, qr/^(?:\d+|\$\?)$/],
[qr/^(?:exit|return)$/, qr/^(?:\d+|\$\?)$/, qr/^;$/],
[qr/^(?:exit|return|continue)$/],
[qr/^(?:exit|return|continue)$/, qr/^;$/]);
sub accumulate {
my ($self, $tokens, $cmd) = @_;
goto DONE unless @$tokens;
goto DONE if @$cmd == 1 && $$cmd[0] eq "\n";
# did previous command end with "&&", "|", "|| return" or similar?
goto DONE if match_ending($tokens, \@safe_endings);
# if this command handles "$?" specially, then okay for previous
# command to be missing "&&"
for my $token (@$cmd) {
goto DONE if $token =~ /\$\?/;
}
# if this command is "false", "return 1", or "exit 1" (which signal
# failure explicitly), then okay for all preceding commands to be
# missing "&&"
if ($$cmd[0] =~ /^(?:false|return|exit)$/) {
@$tokens = grep(!/^\?!AMP\?!$/, @$tokens);
goto DONE;
}
# flag missing "&&" at end of previous command
my $n = find_non_nl($tokens);
splice(@$tokens, $n + 1, 0, '?!AMP?!') unless $n < 0;
DONE:
$self->SUPER::accumulate($tokens, $cmd);
}
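accumulate() is where broken &&-chains are actually flagged: when the previous command did not end in a "safe" way (`&&`, `|`, `|| return`, and so on), a `?!AMP?!` marker is spliced into the token stream. A toy Python version of that core decision, working on a flat list of command strings with a deliberately abbreviated safe-ending set:

```python
SAFE_ENDINGS = ("&&", "|", "||", "&")

def flag_broken_chains(commands):
    """Insert '?!AMP?!' after every command (except the last) that does
    not end with a connective from SAFE_ENDINGS."""
    out = []
    for i, cmd in enumerate(commands):
        out.append(cmd)
        words = cmd.split()
        last = words[-1] if words else ""
        if i < len(commands) - 1 and last not in SAFE_ENDINGS:
            out.append("?!AMP?!")     # previous command is missing "&&"
    return out

print(flag_broken_chains(["mkdir foo &&", "cd foo", "git init"]))
```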
# ScriptParser is a subclass of ShellParser which identifies individual test
# definitions within test scripts, and passes each test body through TestParser
# to identify possible problems. ShellParser detects test definitions not only
# at the top-level of test scripts but also within compound commands such as
# loops and function definitions.
package ScriptParser;
use base 'ShellParser';
sub new {
my $class = shift @_;
my $self = $class->SUPER::new(@_);
$self->{ntests} = 0;
return $self;
}
# extract the raw content of a token, which may be a single string or a
# composition of multiple strings and non-string character runs; for instance,
# `"test body"` unwraps to `test body`; `word"a b"42'c d'` to `worda b42c d`
sub unwrap {
my $token = @_ ? shift @_ : $_;
# simple case: 'sqstring' or "dqstring"
return $token if $token =~ s/^'([^']*)'$/$1/;
return $token if $token =~ s/^"([^"]*)"$/$1/;
# composite case
my ($s, $q, $escaped);
while (1) {
# slurp up non-special characters
$s .= $1 if $token =~ /\G([^\\'"]*)/gc;
# handle special characters
last unless $token =~ /\G(.)/sgc;
my $c = $1;
$q = undef, next if defined($q) && $c eq $q;
$q = $c, next if !defined($q) && $c =~ /^['"]$/;
if ($c eq '\\') {
last unless $token =~ /\G(.)/sgc;
$c = $1;
$s .= '\\' if $c eq "\n"; # preserve line splice
}
$s .= $c;
}
return $s
}
sub check_test {
my $self = shift @_;
my ($title, $body) = map(unwrap, @_);
$self->{ntests}++;
my $parser = TestParser->new(\$body);
my @tokens = $parser->parse();
return unless $emit_all || grep(/\?![^?]+\?!/, @tokens);
my $c = main::fd_colors(1);
my $checked = join(' ', @tokens);
$checked =~ s/^\n//;
$checked =~ s/^ //mg;
$checked =~ s/ $//mg;
$checked =~ s/(\?![^?]+\?!)/$c->{rev}$c->{red}$1$c->{reset}/mg;
$checked .= "\n" unless $checked =~ /\n$/;
push(@{$self->{output}}, "$c->{blue}# chainlint: $title$c->{reset}\n$checked");
}
sub parse_cmd {
my $self = shift @_;
my @tokens = $self->SUPER::parse_cmd();
return @tokens unless @tokens && $tokens[0] =~ /^test_expect_(?:success|failure)$/;
my $n = $#tokens;
$n-- while $n >= 0 && $tokens[$n] =~ /^(?:[;&\n|]|&&|\|\|)$/;
$self->check_test($tokens[1], $tokens[2]) if $n == 2; # title body
$self->check_test($tokens[2], $tokens[3]) if $n > 2; # prereq title body
return @tokens;
}
# main contains high-level functionality for processing command-line switches,
# feeding input test scripts to ScriptParser, and reporting results.
package main;
my $getnow = sub { return time(); };
my $interval = sub { return time() - shift; };
if (eval {require Time::HiRes; Time::HiRes->import(); 1;}) {
$getnow = sub { return [Time::HiRes::gettimeofday()]; };
$interval = sub { return Time::HiRes::tv_interval(shift); };
}
# Restore TERM if test framework set it to "dumb" so 'tput' will work; do this
# outside of get_colors() since under 'ithreads' all threads use %ENV of main
# thread and ignore %ENV changes in subthreads.
$ENV{TERM} = $ENV{USER_TERM} if $ENV{USER_TERM};
my @NOCOLORS = (bold => '', rev => '', reset => '', blue => '', green => '', red => '');
my %COLORS = ();
sub get_colors {
return \%COLORS if %COLORS;
if (exists($ENV{NO_COLOR}) ||
system("tput sgr0 >/dev/null 2>&1") != 0 ||
system("tput bold >/dev/null 2>&1") != 0 ||
system("tput rev >/dev/null 2>&1") != 0 ||
system("tput setaf 1 >/dev/null 2>&1") != 0) {
%COLORS = @NOCOLORS;
return \%COLORS;
}
%COLORS = (bold => `tput bold`,
rev => `tput rev`,
reset => `tput sgr0`,
blue => `tput setaf 4`,
green => `tput setaf 2`,
red => `tput setaf 1`);
chomp(%COLORS);
return \%COLORS;
}
my %FD_COLORS = ();
sub fd_colors {
my $fd = shift;
return $FD_COLORS{$fd} if exists($FD_COLORS{$fd});
$FD_COLORS{$fd} = -t $fd ? get_colors() : {@NOCOLORS};
return $FD_COLORS{$fd};
}
sub ncores {
# Windows
return $ENV{NUMBER_OF_PROCESSORS} if exists($ENV{NUMBER_OF_PROCESSORS});
# Linux / MSYS2 / Cygwin / WSL
do { local @ARGV='/proc/cpuinfo'; return scalar(grep(/^processor\s*:/, <>)); } if -r '/proc/cpuinfo';
# macOS & BSD
return qx/sysctl -n hw.ncpu/ if $^O =~ /(?:^darwin$|bsd)/;
return 1;
}
sub show_stats {
	my ($start_time, $stats) = @_;
	my $walltime = $interval->($start_time);
	my ($usertime) = times();
	my ($total_workers, $total_scripts, $total_tests, $total_errs) = (0, 0, 0, 0);
	my $c = fd_colors(2);
	print(STDERR $c->{green});
	for (@$stats) {
		my ($worker, $nscripts, $ntests, $nerrs) = @$_;
		print(STDERR "worker $worker: $nscripts scripts, $ntests tests, $nerrs errors\n");
		$total_workers++;
		$total_scripts += $nscripts;
		$total_tests += $ntests;
		$total_errs += $nerrs;
	}
	printf(STDERR "total: %d workers, %d scripts, %d tests, %d errors, %.2fs/%.2fs (wall/user)$c->{reset}\n", $total_workers, $total_scripts, $total_tests, $total_errs, $walltime, $usertime);
}

sub check_script {
	my ($id, $next_script, $emit) = @_;
	my ($nscripts, $ntests, $nerrs) = (0, 0, 0);
	while (my $path = $next_script->()) {
		$nscripts++;
		my $fh;
		unless (open($fh, "<", $path)) {
			$emit->("?!ERR?! $path: $!\n");
			next;
		}
		my $s = do { local $/; <$fh> };
		close($fh);
		my $parser = ScriptParser->new(\$s);
		1 while $parser->parse_cmd();
		if (@{$parser->{output}}) {
			my $c = fd_colors(1);
			my $s = join('', @{$parser->{output}});
			$emit->("$c->{bold}$c->{blue}# chainlint: $path$c->{reset}\n" . $s);
			$nerrs += () = $s =~ /\?![^?]+\?!/g;
		}
		$ntests += $parser->{ntests};
	}
	return [$id, $nscripts, $ntests, $nerrs];
}

sub exit_code {
	my $stats = shift @_;
	for (@$stats) {
		my ($worker, $nscripts, $ntests, $nerrs) = @$_;
		return 1 if $nerrs;
	}
	return 0;
}

Getopt::Long::Configure(qw{bundling});
GetOptions(
	"emit-all!" => \$emit_all,
	"jobs|j=i" => \$jobs,
	"stats|show-stats!" => \$show_stats) or die("option error\n");
$jobs = ncores() if $jobs < 1;

my $start_time = $getnow->();
my @stats;

my @scripts;
push(@scripts, File::Glob::bsd_glob($_)) for (@ARGV);
unless (@scripts) {
	show_stats($start_time, \@stats) if $show_stats;
	exit;
}

unless ($Config{useithreads} && eval {
	require threads; threads->import();
	require Thread::Queue; Thread::Queue->import();
	1;
	}) {
	push(@stats, check_script(1, sub { shift(@scripts); }, sub { print(@_); }));
	show_stats($start_time, \@stats) if $show_stats;
	exit(exit_code(\@stats));
}

my $script_queue = Thread::Queue->new();
my $output_queue = Thread::Queue->new();

sub next_script { return $script_queue->dequeue(); }
sub emit { $output_queue->enqueue(@_); }

sub monitor {
	while (my $s = $output_queue->dequeue()) {
		print($s);
	}
}

my $mon = threads->create({'context' => 'void'}, \&monitor);
threads->create({'context' => 'list'}, \&check_script, $_, \&next_script, \&emit) for 1..$jobs;
$script_queue->enqueue(@scripts);
$script_queue->end();
for (threads->list()) {
	push(@stats, $_->join()) unless $_ == $mon;
}
$output_queue->end();
$mon->join();
show_stats($start_time, \@stats) if $show_stats;
exit(exit_code(\@stats));

View File

@ -1,399 +0,0 @@
#------------------------------------------------------------------------------
# Detect broken &&-chains in tests.
#
# At present, only &&-chains in subshells are examined by this linter;
# top-level &&-chains are instead checked directly by the test framework. Like
# the top-level &&-chain linter, the subshell linter (intentionally) does not
# check &&-chains within {...} blocks.
#
# Checking for &&-chain breakage is done line-by-line by pure textual
# inspection.
#
# Incomplete lines (those ending with "\") are stitched together with following
# lines to simplify processing, particularly of "one-liner" statements.
# Top-level here-docs are swallowed to avoid false positives within the
# here-doc body, although the statement to which the here-doc is attached is
# retained.
#
# Heuristics are used to detect end-of-subshell when the closing ")" is cuddled
# with the final subshell statement on the same line:
#
# (cd foo &&
# bar)
#
# in order to avoid misinterpreting the ")" in constructs such as "x=$(...)"
# and "case $x in *)" as ending the subshell.
#
# Lines missing a final "&&" are flagged with "?!AMP?!", as are lines which
# chain commands with ";" internally rather than "&&". A line may be flagged
# for both violations.
#
# Detection of a missing &&-link in a multi-line subshell is complicated by the
# fact that the last statement before the closing ")" must not end with "&&".
# Since processing is line-by-line, it is not known whether a missing "&&" is
# legitimate or not until the _next_ line is seen. To accommodate this, within
# multi-line subshells, each line is stored in sed's "hold" area until after
# the next line is seen and processed. If the next line is a stand-alone ")",
# then a missing "&&" on the previous line is legitimate; otherwise a missing
# "&&" is a break in the &&-chain.
#
# (
# cd foo &&
# bar
# )
#
# In practical terms, when "bar" is encountered, it is flagged with "?!AMP?!",
# but when the stand-alone ")" line is seen which closes the subshell, the
# "?!AMP?!" violation is removed from the "bar" line (retrieved from the "hold"
# area) since the final statement of a subshell must not end with "&&". The
# final line of a subshell may still break the &&-chain by using ";" internally
# to chain commands together rather than "&&", but an internal "?!AMP?!" is
# never removed from a line even though a line-ending "?!AMP?!" might be.
#
# Care is taken to recognize the last _statement_ of a multi-line subshell, not
# necessarily the last textual _line_ within the subshell, since &&-chaining
# applies to statements, not to lines. Consequently, blank lines, comment
# lines, and here-docs are swallowed (but not the command to which the here-doc
# is attached), leaving the last statement in the "hold" area, not the last
# line, thus simplifying &&-link checking.
#
# The final statement before "done" in for- and while-loops, and before "elif",
# "else", and "fi" in if-then-else likewise must not end with "&&", thus
# receives similar treatment.
#
# Swallowing here-docs with arbitrary tags requires a bit of finesse. When a
# line such as "cat <<EOF" is seen, the here-doc tag is copied to the front of
# the line enclosed in angle brackets as a sentinel, giving "<EOF>cat <<EOF".
# As each subsequent line is read, it is appended to the target line and a
# (whitespace-loose) back-reference match /^<(.*)>\n\1$/ is attempted to see if
# the content inside "<...>" matches the entirety of the newly-read line. For
# instance, if the next line read is "some data", when concatenated with the
# target line, it becomes "<EOF>cat <<EOF\nsome data", and a match is attempted
# to see if "EOF" matches "some data". Since it doesn't, the next line is
# attempted. When a line consisting of only "EOF" (and possible whitespace) is
# encountered, it is appended to the target line giving "<EOF>cat <<EOF\nEOF",
# in which case the "EOF" inside "<...>" does match the text following the
# newline, thus the closing here-doc tag has been found. The closing tag line
# and the "<...>" prefix on the target line are then discarded, leaving just
# the target line "cat <<EOF".
#------------------------------------------------------------------------------
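The failure mode all of these rules guard against is easy to reproduce directly in the shell: when a subshell's commands are not chained with "&&", an early failure is silently swallowed and the subshell's exit status reflects only its last command. A minimal standalone illustration (not part of the linter itself):

```shell
# Broken &&-chain: "false" fails, but nothing propagates the failure,
# so the subshell exits with the status of the final "echo" (0).
(
	false
	echo "still runs"
)
echo "subshell exit: $?"

# Intact &&-chain: the failure short-circuits the subshell, and the
# "||" branch observes the non-zero status.
(
	false &&
	echo "never runs"
) || echo "subshell exit: $?"
```

The first subshell reports exit status 0 despite the embedded failure; the second reports 1, which is exactly the property the test suite relies on.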
# incomplete line -- slurp up next line
:squash
/\\$/ {
N
s/\\\n//
bsquash
}
# here-doc -- swallow it to avoid false hits within its body (but keep the
# command to which it was attached)
/<<-*[ ]*[\\'"]*[A-Za-z0-9_]/ {
/"[^"]*<<[^"]*"/bnotdoc
s/^\(.*<<-*[ ]*\)[\\'"]*\([A-Za-z0-9_][A-Za-z0-9_]*\)['"]*/<\2>\1\2/
:hered
N
/^<\([^>]*\)>.*\n[ ]*\1[ ]*$/!{
s/\n.*$//
bhered
}
s/^<[^>]*>//
s/\n.*$//
}
:notdoc
# one-liner "(...) &&"
/^[ ]*!*[ ]*(..*)[ ]*&&[ ]*$/boneline
# same as above but without trailing "&&"
/^[ ]*!*[ ]*(..*)[ ]*$/boneline
# one-liner "(...) >x" (or "2>x" or "<x" or "|x" or "&")
/^[ ]*!*[ ]*(..*)[ ]*[0-9]*[<>|&]/boneline
# multi-line "(...\n...)"
/^[ ]*(/bsubsh
# innocuous line -- print it and advance to next line
b
# found one-liner "(...)" -- mark suspect if it uses ";" internally rather than
# "&&" (but not ";" in a string)
:oneline
/;/{
/"[^"]*;[^"]*"/!s/;/; ?!AMP?!/
}
b
:subsh
# bare "(" line? -- stash for later printing
/^[ ]*([ ]*$/ {
h
bnextln
}
# "(..." line -- "(" opening subshell cuddled with command; temporarily replace
# "(" with sentinel "^" and process the line as if "(" had been seen solo on
# the preceding line; this temporary replacement prevents several rules from
# accidentally thinking "(" introduces a nested subshell; "^" is changed back
# to "(" at output time
x
s/.*//
x
s/(/^/
bslurp
:nextln
N
s/.*\n//
:slurp
# incomplete line "...\"
/\\$/bicmplte
# multi-line quoted string "...\n..."?
/"/bdqstr
# multi-line quoted string '...\n...'? (but not contraction in string "it's")
/'/{
/"[^'"]*'[^'"]*"/!bsqstr
}
:folded
# here-doc -- swallow it (but not "<<" in a string)
/<<-*[ ]*[\\'"]*[A-Za-z0-9_]/{
/"[^"]*<<[^"]*"/!bheredoc
}
# comment or empty line -- discard it, since the final non-comment, non-empty
# line before a closing ")", "done", "elif", "else", or "fi" must be
# re-visited to drop its "suspect" mark (the final line of those constructs
# legitimately lacks "&&")
/^[ ]*#/bnextln
/^[ ]*$/bnextln
# in-line comment -- strip it (but not "#" in a string, Bash ${#...} array
# length, or Perforce "//depot/path#42" revision in filespec)
/[ ]#/{
/"[^"]*#[^"]*"/!s/[ ]#.*$//
}
# one-liner "case ... esac"
/^[ ^]*case[ ]*..*esac/bchkchn
# multi-line "case ... esac"
/^[ ^]*case[ ]..*[ ]in/bcase
# multi-line "for ... done" or "while ... done"
/^[ ^]*for[ ]..*[ ]in/bcont
/^[ ^]*while[ ]/bcont
/^[ ]*do[ ]/bcont
/^[ ]*do[ ]*$/bcont
/;[ ]*do/bcont
/^[ ]*done[ ]*&&[ ]*$/bdone
/^[ ]*done[ ]*$/bdone
/^[ ]*done[ ]*[<>|]/bdone
/^[ ]*done[ ]*)/bdone
/||[ ]*exit[ ]/bcont
/||[ ]*exit[ ]*$/bcont
# multi-line "if...elsif...else...fi"
/^[ ^]*if[ ]/bcont
/^[ ]*then[ ]/bcont
/^[ ]*then[ ]*$/bcont
/;[ ]*then/bcont
/^[ ]*elif[ ]/belse
/^[ ]*elif[ ]*$/belse
/^[ ]*else[ ]/belse
/^[ ]*else[ ]*$/belse
/^[ ]*fi[ ]*&&[ ]*$/bdone
/^[ ]*fi[ ]*$/bdone
/^[ ]*fi[ ]*[<>|]/bdone
/^[ ]*fi[ ]*)/bdone
# nested one-liner "(...) &&"
/^[ ^]*(.*)[ ]*&&[ ]*$/bchkchn
# nested one-liner "(...)"
/^[ ^]*(.*)[ ]*$/bchkchn
# nested one-liner "(...) >x" (or "2>x" or "<x" or "|x")
/^[ ^]*(.*)[ ]*[0-9]*[<>|]/bchkchn
# nested multi-line "(...\n...)"
/^[ ^]*(/bnest
# multi-line "{...\n...}"
/^[ ^]*{/bblock
# closing ")" on own line -- exit subshell
/^[ ]*)/bclssolo
# "$((...))" -- arithmetic expansion; not closing ")"
/\$(([^)][^)]*))[^)]*$/bchkchn
# "$(...)" -- command substitution; not closing ")"
/\$([^)][^)]*)[^)]*$/bchkchn
# multi-line "$(...\n...)" -- command substitution; treat as nested subshell
/\$([^)]*$/bnest
# "=(...)" -- Bash array assignment; not closing ")"
/=(/bchkchn
# closing "...) &&"
/)[ ]*&&[ ]*$/bclose
# closing "...)"
/)[ ]*$/bclose
# closing "...) >x" (or "2>x" or "<x" or "|x")
/)[ ]*[<>|]/bclose
:chkchn
# mark suspect if line uses ";" internally rather than "&&" (but not ";" in a
# string and not ";;" in one-liner "case...esac")
/;/{
/;;/!{
/"[^"]*;[^"]*"/!s/;/; ?!AMP?!/
}
}
# line ends with pipe "...|" -- valid; not missing "&&"
/|[ ]*$/bcont
# missing end-of-line "&&" -- mark suspect
/&&[ ]*$/!s/$/ ?!AMP?!/
:cont
# retrieve and print previous line
x
s/^\([ ]*\)^/\1(/
s/?!HERE?!/<</g
n
bslurp
# found incomplete line "...\" -- slurp up next line
:icmplte
N
s/\\\n//
bslurp
# check for multi-line double-quoted string "...\n..." -- fold to one line
:dqstr
# remove all quote pairs
s/"\([^"]*\)"/@!\1@!/g
# done if no dangling quote
/"/!bdqdone
# otherwise, slurp next line and try again
N
s/\n//
bdqstr
:dqdone
s/@!/"/g
bfolded
# check for multi-line single-quoted string '...\n...' -- fold to one line
:sqstr
# remove all quote pairs
s/'\([^']*\)'/@!\1@!/g
# done if no dangling quote
/'/!bsqdone
# otherwise, slurp next line and try again
N
s/\n//
bsqstr
:sqdone
s/@!/'/g
bfolded
# found here-doc -- swallow it to avoid false hits within its body (but keep
# the command to which it was attached)
:heredoc
s/^\(.*\)<<\(-*[ ]*\)[\\'"]*\([A-Za-z0-9_][A-Za-z0-9_]*\)['"]*/<\3>\1?!HERE?!\2\3/
:hdocsub
N
/^<\([^>]*\)>.*\n[ ]*\1[ ]*$/!{
s/\n.*$//
bhdocsub
}
s/^<[^>]*>//
s/\n.*$//
bfolded
# found "case ... in" -- pass through untouched
:case
x
s/^\([ ]*\)^/\1(/
s/?!HERE?!/<</g
n
:cascom
/^[ ]*#/{
N
s/.*\n//
bcascom
}
/^[ ]*esac/bslurp
bcase
# found "else" or "elif" -- drop "suspect" from final line before "else" since
# that line legitimately lacks "&&"
:else
x
s/\( ?!AMP?!\)* ?!AMP?!$//
x
bcont
# found "done" closing for-loop or while-loop, or "fi" closing if-then -- drop
# "suspect" from final contained line since that line legitimately lacks "&&"
:done
x
s/\( ?!AMP?!\)* ?!AMP?!$//
x
# is 'done' or 'fi' cuddled with ")" to close subshell?
/done.*)/bclose
/fi.*)/bclose
bchkchn
# found nested multi-line "(...\n...)" -- pass through untouched
:nest
x
:nstslrp
s/^\([ ]*\)^/\1(/
s/?!HERE?!/<</g
n
:nstcom
# comment -- not closing ")" if in comment
/^[ ]*#/{
N
s/.*\n//
bnstcom
}
# closing ")" on own line -- stop nested slurp
/^[ ]*)/bnstcl
# "$((...))" -- arithmetic expansion; not closing ")"
/\$(([^)][^)]*))[^)]*$/bnstcnt
# "$(...)" -- command substitution; not closing ")"
/\$([^)][^)]*)[^)]*$/bnstcnt
# closing "...)" -- stop nested slurp
/)/bnstcl
:nstcnt
x
bnstslrp
:nstcl
# is it "))" which closes nested and parent subshells?
/)[ ]*)/bslurp
bchkchn
# found multi-line "{...\n...}" block -- pass through untouched
:block
x
s/^\([ ]*\)^/\1(/
s/?!HERE?!/<</g
n
:blkcom
/^[ ]*#/{
N
s/.*\n//
bblkcom
}
# closing "}" -- stop block slurp
/}/bchkchn
bblock
# found closing ")" on own line -- drop "suspect" from final line of subshell
# since that line legitimately lacks "&&" and exit subshell loop
:clssolo
x
s/\( ?!AMP?!\)* ?!AMP?!$//
s/^\([ ]*\)^/\1(/
s/?!HERE?!/<</g
p
x
s/^\([ ]*\)^/\1(/
s/?!HERE?!/<</g
b
# found closing "...)" -- exit subshell loop
:close
x
s/^\([ ]*\)^/\1(/
s/?!HERE?!/<</g
p
x
s/^\([ ]*\)^/\1(/
s/?!HERE?!/<</g
b

View File

@ -0,0 +1,18 @@
test_done ( ) {
case "$test_failure" in
0 )
test_at_end_hook_
exit 0 ;;
* )
if test $test_external_has_tap -eq 0
then
say_color error "# failed $test_failure among $msg"
say "1..$test_count"
fi
exit 1 ;;
esac
}

View File

@ -0,0 +1,19 @@
# LINT: blank line before "esac"
test_done () {
case "$test_failure" in
0)
test_at_end_hook_
exit 0 ;;
*)
if test $test_external_has_tap -eq 0
then
say_color error "# failed $test_failure among $msg"
say "1..$test_count"
fi
exit 1 ;;
esac
}

View File

@ -1,7 +1,7 @@
(
foo &&
{
echo a
echo a ?!AMP?!
echo b
} &&
bar &&
@ -9,4 +9,15 @@
echo c
} ?!AMP?!
baz
)
) &&
{
echo a ; ?!AMP?! echo b
} &&
{ echo a ; ?!AMP?! echo b ; } &&
{
echo "${var}9" &&
echo "done"
} &&
finis

View File

@ -11,4 +11,17 @@
echo c
}
baz
)
) &&
# LINT: ";" not allowed in place of "&&"
{
echo a; echo b
} &&
{ echo a; echo b; } &&
# LINT: "}" inside string not mistaken as end of block
{
echo "${var}9" &&
echo "done"
} &&
finis
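The distinction the LINT comment draws between ";" and "&&" can be checked in isolation: ";" separates commands unconditionally, while "&&" stops the chain at the first failure. A standalone sketch:

```shell
# ";" runs the next command regardless of the previous one's failure.
{ false ; echo "after semicolon: runs" ; }
# "&&" short-circuits, so the echo is skipped and the group fails.
{ false && echo "after &&: skipped" ; } || echo "chain stopped with status 1"
```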

View File

@ -0,0 +1,9 @@
JGIT_DAEMON_PID= &&
git init --bare empty.git &&
> empty.git/git-daemon-export-ok &&
mkfifo jgit_daemon_output &&
{
jgit daemon --port="$JGIT_DAEMON_PORT" . > jgit_daemon_output &
JGIT_DAEMON_PID=$!
} &&
test_expect_code 2 git ls-remote --exit-code git://localhost:$JGIT_DAEMON_PORT/empty.git

View File

@ -0,0 +1,10 @@
JGIT_DAEMON_PID= &&
git init --bare empty.git &&
>empty.git/git-daemon-export-ok &&
mkfifo jgit_daemon_output &&
{
# LINT: exit status of "&" is always 0 so &&-chaining immaterial
jgit daemon --port="$JGIT_DAEMON_PORT" . >jgit_daemon_output &
JGIT_DAEMON_PID=$!
} &&
test_expect_code 2 git ls-remote --exit-code git://localhost:$JGIT_DAEMON_PORT/empty.git
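The LINT note rests on a POSIX rule worth seeing on its own: the exit status of "cmd &" is that of the backgrounding itself, which is always 0, so a following "&&" can never observe the command's real outcome; that status only surfaces later via "wait". A standalone demo:

```shell
# Backgrounding always "succeeds" immediately, even for a failing command.
false &
echo "status after '&': $?"
# The command's real exit status is only reported by "wait".
wait $! || echo "status from wait: $?"
```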

View File

@ -0,0 +1,12 @@
git ls-tree --name-only -r refs/notes/many_notes |
while read path
do
test "$path" = "foobar/non-note.txt" && continue
test "$path" = "deadbeef" && continue
test "$path" = "de/adbeef" && continue
if test $(expr length "$path") -ne $hexsz
then
return 1
fi
done

View File

@ -0,0 +1,13 @@
git ls-tree --name-only -r refs/notes/many_notes |
while read path
do
# LINT: broken &&-chain okay if explicit "continue"
test "$path" = "foobar/non-note.txt" && continue
test "$path" = "deadbeef" && continue
test "$path" = "de/adbeef" && continue
if test $(expr length "$path") -ne $hexsz
then
return 1
fi
done

View File

@ -0,0 +1,9 @@
if condition not satisfied
then
echo it did not work...
echo failed!
false
else
echo it went okay ?!AMP?!
congratulate user
fi

View File

@ -0,0 +1,10 @@
# LINT: broken &&-chain okay if explicit "false" signals failure
if condition not satisfied
then
echo it did not work...
echo failed!
false
else
echo it went okay
congratulate user
fi

View File

@ -0,0 +1,19 @@
case "$(git ls-files)" in
one ) echo pass one ;;
* ) echo bad one ; return 1 ;;
esac &&
(
case "$(git ls-files)" in
two ) echo pass two ;;
* ) echo bad two ; exit 1 ;;
esac
) &&
case "$(git ls-files)" in
dir/two"$LF"one ) echo pass both ;;
* ) echo bad ; return 1 ;;
esac &&
for i in 1 2 3 4 ; do
git checkout main -b $i || return $?
test_commit $i $i $i tag$i || return $?
done

View File

@ -0,0 +1,23 @@
case "$(git ls-files)" in
one) echo pass one ;;
# LINT: broken &&-chain okay if explicit "return 1" signals failure
*) echo bad one; return 1 ;;
esac &&
(
case "$(git ls-files)" in
two) echo pass two ;;
# LINT: broken &&-chain okay if explicit "exit 1" signals failure
*) echo bad two; exit 1 ;;
esac
) &&
case "$(git ls-files)" in
dir/two"$LF"one) echo pass both ;;
# LINT: broken &&-chain okay if explicit "return 1" signals failure
*) echo bad; return 1 ;;
esac &&
for i in 1 2 3 4 ; do
# LINT: broken &&-chain okay if explicit "return $?" signals failure
git checkout main -b $i || return $?
test_commit $i $i $i tag$i || return $?
done

View File

@ -0,0 +1,9 @@
OUT=$(( ( large_git ; echo $? 1 >& 3 ) | : ) 3 >& 1) &&
test_match_signal 13 "$OUT" &&
{ test-tool sigchain > actual ; ret=$? ; } &&
{
test_match_signal 15 "$ret" ||
test "$ret" = 3
} &&
test_cmp expect actual

View File

@ -0,0 +1,11 @@
# LINT: broken &&-chain okay if next command handles "$?" explicitly
OUT=$( ((large_git; echo $? 1>&3) | :) 3>&1 ) &&
test_match_signal 13 "$OUT" &&
# LINT: broken &&-chain okay if next command handles "$?" explicitly
{ test-tool sigchain >actual; ret=$?; } &&
{
test_match_signal 15 "$ret" ||
test "$ret" = 3
} &&
test_cmp expect actual
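The `{ cmd; ret=$?; }` idiom referenced by the LINT comments works because a brace group's exit status is that of its last command, here the assignment, which always succeeds; the interesting status is preserved in the variable instead. A standalone sketch:

```shell
# Capture a failing command's status without breaking the &&-chain:
# the group itself exits 0 (the assignment), while $ret keeps the 1.
{ false ; ret=$? ; } &&
echo "chain continued, captured status: $ret"
```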

View File

@ -0,0 +1,9 @@
echo nobody home && {
test the doohicky ?!AMP?!
right now
} &&
GIT_EXTERNAL_DIFF=echo git diff | {
read path oldfile oldhex oldmode newfile newhex newmode &&
test "z$oh" = "z$oldhex"
}

View File

@ -0,0 +1,11 @@
# LINT: start of block chained to preceding command
echo nobody home && {
test the doohicky
right now
} &&
# LINT: preceding command pipes to block on same line
GIT_EXTERNAL_DIFF=echo git diff | {
read path oldfile oldhex oldmode newfile newhex newmode &&
test "z$oh" = "z$oldhex"
}

View File

@ -0,0 +1,10 @@
mkdir sub && (
cd sub &&
foo the bar ?!AMP?!
nuff said
) &&
cut "-d " -f actual | ( read s1 s2 s3 &&
test -f $s1 ?!AMP?!
test $(cat $s2) = tree2path1 &&
test $(cat $s3) = tree3path1 )

View File

@ -0,0 +1,13 @@
# LINT: start of subshell chained to preceding command
mkdir sub && (
cd sub &&
foo the bar
nuff said
) &&
# LINT: preceding command pipes to subshell on same line
cut "-d " -f actual | (read s1 s2 s3 &&
test -f $s1
test $(cat $s2) = tree2path1 &&
# LINT: closing subshell ")" correctly detected on same line as "$(...)"
test $(cat $s3) = tree3path1)

View File

@ -0,0 +1,2 @@
OUT=$(( ( large_git 1 >& 3 ) | : ) 3 >& 1) &&
test_match_signal 13 "$OUT"

View File

@ -0,0 +1,3 @@
# LINT: subshell nested in subshell nested in command substitution
OUT=$( ((large_git 1>&3) | :) 3>&1 ) &&
test_match_signal 13 "$OUT"

View File

@ -4,6 +4,6 @@
:
else
echo >file
fi
fi ?!LOOP?!
done) &&
test ! -f file

View File

@ -0,0 +1,2 @@
run_sub_test_lib_test_err run-inv-range-start "--run invalid range start" --run="a-5" <<-EOF &&
check_sub_test_lib_test_err run-inv-range-start <<-EOF_OUT 3 <<-EOF_ERR

View File

@ -0,0 +1,12 @@
run_sub_test_lib_test_err run-inv-range-start \
"--run invalid range start" \
--run="a-5" <<-\EOF &&
test_expect_success "passing test #1" "true"
test_done
EOF
check_sub_test_lib_test_err run-inv-range-start \
<<-\EOF_OUT 3<<-EOF_ERR
> FATAL: Unexpected exit with code 1
EOF_OUT
> error: --run: invalid non-numeric in range start: ${SQ}a-5${SQ}
EOF_ERR

View File

@ -0,0 +1,3 @@
echo 'fatal: reword option of --fixup is mutually exclusive with' '--patch/--interactive/--all/--include/--only' > expect &&
test_must_fail git commit --fixup=reword:HEAD~ $1 2 > actual &&
test_cmp expect actual

View File

@ -0,0 +1,7 @@
# LINT: line-splice within DQ-string
'"
echo 'fatal: reword option of --fixup is mutually exclusive with'\
'--patch/--interactive/--all/--include/--only' >expect &&
test_must_fail git commit --fixup=reword:HEAD~ $1 2>actual &&
test_cmp expect actual
"'

View File

@ -0,0 +1,11 @@
grep "^ ! [rejected][ ]*$BRANCH -> $BRANCH (non-fast-forward)$" out &&
grep "^\.git$" output.txt &&
(
cd client$version &&
GIT_TEST_PROTOCOL_VERSION=$version git fetch-pack --no-progress .. $(cat ../input)
) > output &&
cut -d ' ' -f 2 < output | sort > actual &&
test_cmp expect actual

View File

@ -0,0 +1,15 @@
# LINT: regex dollar-sign eol anchor in double-quoted string not special
grep "^ ! \[rejected\][ ]*$BRANCH -> $BRANCH (non-fast-forward)$" out &&
# LINT: escaped "$" not mistaken for variable expansion
grep "^\\.git\$" output.txt &&
'"
(
cd client$version &&
# LINT: escaped dollar-sign in double-quoted test body
GIT_TEST_PROTOCOL_VERSION=$version git fetch-pack --no-progress .. \$(cat ../input)
) >output &&
cut -d ' ' -f 2 <output | sort >actual &&
test_cmp expect actual
"'

View File

@ -0,0 +1,3 @@
git ls-tree $tree path > current &&
cat > expected <<EOF &&
test_output

View File

@ -0,0 +1,5 @@
git ls-tree $tree path >current &&
# LINT: empty here-doc
cat >expected <<\EOF &&
EOF
test_output

View File

@ -0,0 +1,4 @@
if ! condition ; then echo nope ; else yep ; fi &&
test_prerequisite !MINGW &&
mail uucp!address &&
echo !whatever!

View File

@ -0,0 +1,8 @@
# LINT: "! word" is two tokens
if ! condition; then echo nope; else yep; fi &&
# LINT: "!word" is single token, not two tokens "!" and "word"
test_prerequisite !MINGW &&
# LINT: "word!word" is single token, not three tokens "word", "!", and "word"
mail uucp!address &&
# LINT: "!word!" is single token, not three tokens "!", "word", and "!"
echo !whatever!

View File

@ -0,0 +1,5 @@
for it
do
path=$(expr "$it" : ( [^:]*) ) &&
git update-index --add "$path" || exit
done

View File

@ -0,0 +1,6 @@
# LINT: for-loop lacking optional "in [word...]" before "do"
for it
do
path=$(expr "$it" : '\([^:]*\)') &&
git update-index --add "$path" || exit
done

View File

@ -2,10 +2,10 @@
for i in a b c
do
echo $i ?!AMP?!
cat <<-EOF
cat <<-EOF ?!LOOP?!
done ?!AMP?!
for i in a b c; do
echo $i &&
cat $i
cat $i ?!LOOP?!
done
)

View File

@ -0,0 +1,11 @@
sha1_file ( ) {
echo "$*" | sed "s#..#.git/objects/&/#"
} &&
remove_object ( ) {
file=$(sha1_file "$*") &&
test -e "$file" ?!AMP?!
rm -f "$file"
} ?!AMP?!
sha1_file arg && remove_object arg

13
t/chainlint/function.test Normal file
View File

@ -0,0 +1,13 @@
# LINT: "()" in function definition not mistaken for subshell
sha1_file() {
echo "$*" | sed "s#..#.git/objects/&/#"
} &&
# LINT: broken &&-chain in function and after function
remove_object() {
file=$(sha1_file "$*") &&
test -e "$file"
rm -f "$file"
}
sha1_file arg && remove_object arg

View File

@ -0,0 +1,5 @@
cat > expect <<-EOF &&
cat > expect <<-EOF ?!AMP?!
cleanup

View File

@ -0,0 +1,13 @@
# LINT: whitespace between operator "<<-" and tag legal
cat >expect <<- EOF &&
header: 43475048 1 $(test_oid oid_version) $NUM_CHUNKS 0
num_commits: $1
chunks: oid_fanout oid_lookup commit_metadata generation_data bloom_indexes bloom_data
EOF
# LINT: not an indented here-doc; just a plain here-doc with tag named "-EOF"
cat >expect << -EOF
this is not indented
-EOF
cleanup

View File

@ -1,4 +1,5 @@
(
cat <<-TXT && echo "multi-line string" ?!AMP?!
cat <<-TXT && echo "multi-line
string" ?!AMP?!
bap
)

View File

@ -0,0 +1,7 @@
if bob &&
marcia ||
kevin
then
echo "nomads" ?!AMP?!
echo "for sure"
fi

View File

@ -0,0 +1,8 @@
# LINT: "if" condition split across multiple lines at "&&" or "||"
if bob &&
marcia ||
kevin
then
echo "nomads"
echo "for sure"
fi

View File

@ -3,7 +3,7 @@
do
if false
then
echo "err" ?!AMP?!
echo "err"
exit 1
fi ?!AMP?!
foo

View File

@ -3,7 +3,7 @@
do
if false
then
# LINT: missing "&&" on "echo"
# LINT: missing "&&" on "echo" okay since "exit 1" signals error explicitly
echo "err"
exit 1
# LINT: missing "&&" on "fi"

View File

@ -0,0 +1,15 @@
git init r1 &&
for n in 1 2 3 4 5
do
echo "This is file: $n" > r1/file.$n &&
git -C r1 add file.$n &&
git -C r1 commit -m "$n" || return 1
done &&
git init r2 &&
for n in 1000 10000
do
printf "%"$n"s" X > r2/large.$n &&
git -C r2 add large.$n &&
git -C r2 commit -m "$n" ?!LOOP?!
done

View File

@ -0,0 +1,17 @@
git init r1 &&
# LINT: loop handles failure explicitly with "|| return 1"
for n in 1 2 3 4 5
do
echo "This is file: $n" > r1/file.$n &&
git -C r1 add file.$n &&
git -C r1 commit -m "$n" || return 1
done &&
git init r2 &&
# LINT: loop fails to handle failure explicitly with "|| return 1"
for n in 1000 10000
do
printf "%"$n"s" X > r2/large.$n &&
git -C r2 add large.$n &&
git -C r2 commit -m "$n"
done

View File

@ -0,0 +1,18 @@
( while test $i -le $blobcount
do
printf "Generating blob $i/$blobcount\r" >& 2 &&
printf "blob\nmark :$i\ndata $blobsize\n" &&
printf "%-${blobsize}s" $i &&
echo "M 100644 :$i $i" >> commit &&
i=$(($i+1)) ||
echo $? > exit-status
done &&
echo "commit refs/heads/main" &&
echo "author A U Thor <author@email.com> 123456789 +0000" &&
echo "committer C O Mitter <committer@email.com> 123456789 +0000" &&
echo "data 5" &&
echo ">2gb" &&
cat commit ) |
git fast-import --big-file-threshold=2 &&
test ! -f exit-status

View File

@ -0,0 +1,19 @@
# LINT: "$?" handled explicitly within loop body
(while test $i -le $blobcount
do
printf "Generating blob $i/$blobcount\r" >&2 &&
printf "blob\nmark :$i\ndata $blobsize\n" &&
#test-tool genrandom $i $blobsize &&
printf "%-${blobsize}s" $i &&
echo "M 100644 :$i $i" >> commit &&
i=$(($i+1)) ||
echo $? > exit-status
done &&
echo "commit refs/heads/main" &&
echo "author A U Thor <author@email.com> 123456789 +0000" &&
echo "committer C O Mitter <committer@email.com> 123456789 +0000" &&
echo "data 5" &&
echo ">2gb" &&
cat commit) |
git fast-import --big-file-threshold=2 &&
test ! -f exit-status

View File

@ -4,7 +4,7 @@
while true
do
echo "pop" ?!AMP?!
echo "glup"
echo "glup" ?!LOOP?!
done ?!AMP?!
foo
fi ?!AMP?!

View File

@ -0,0 +1,10 @@
(
git rev-list --objects --no-object-names base..loose |
while read oid
do
path="$objdir/$(test_oid_to_path "$oid")" &&
printf "%s %d\n" "$oid" "$(test-tool chmtime --get "$path")" ||
echo "object list generation failed for $oid"
done |
sort -k1
) >expect &&

View File

@ -0,0 +1,11 @@
(
git rev-list --objects --no-object-names base..loose |
while read oid
do
# LINT: "|| echo" signals failure in loop upstream of a pipe
path="$objdir/$(test_oid_to_path "$oid")" &&
printf "%s %d\n" "$oid" "$(test-tool chmtime --get "$path")" ||
echo "object list generation failed for $oid"
done |
sort -k1
) >expect &&

View File

@ -1,9 +1,14 @@
(
x="line 1 line 2 line 3" &&
y="line 1 line2" ?!AMP?!
x="line 1
line 2
line 3" &&
y="line 1
line2" ?!AMP?!
foobar
) &&
(
echo "xyz" "abc def ghi" &&
echo "xyz" "abc
def
ghi" &&
barfoo
)

View File

@ -0,0 +1,31 @@
for i in 0 1 2 3 4 5 6 7 8 9 ;
do
for j in 0 1 2 3 4 5 6 7 8 9 ;
do
echo "$i$j" > "path$i$j" ?!LOOP?!
done ?!LOOP?!
done &&
for i in 0 1 2 3 4 5 6 7 8 9 ;
do
for j in 0 1 2 3 4 5 6 7 8 9 ;
do
echo "$i$j" > "path$i$j" || return 1
done
done &&
for i in 0 1 2 3 4 5 6 7 8 9 ;
do
for j in 0 1 2 3 4 5 6 7 8 9 ;
do
echo "$i$j" > "path$i$j" ?!LOOP?!
done || return 1
done &&
for i in 0 1 2 3 4 5 6 7 8 9 ;
do
for j in 0 1 2 3 4 5 6 7 8 9 ;
do
echo "$i$j" > "path$i$j" || return 1
done || return 1
done

View File

@ -0,0 +1,35 @@
# LINT: neither loop handles failure explicitly with "|| return 1"
for i in 0 1 2 3 4 5 6 7 8 9;
do
for j in 0 1 2 3 4 5 6 7 8 9;
do
echo "$i$j" >"path$i$j"
done
done &&
# LINT: inner loop handles failure explicitly with "|| return 1"
for i in 0 1 2 3 4 5 6 7 8 9;
do
for j in 0 1 2 3 4 5 6 7 8 9;
do
echo "$i$j" >"path$i$j" || return 1
done
done &&
# LINT: outer loop handles failure explicitly with "|| return 1"
for i in 0 1 2 3 4 5 6 7 8 9;
do
for j in 0 1 2 3 4 5 6 7 8 9;
do
echo "$i$j" >"path$i$j"
done || return 1
done &&
# LINT: inner & outer loops handles failure explicitly with "|| return 1"
for i in 0 1 2 3 4 5 6 7 8 9;
do
for j in 0 1 2 3 4 5 6 7 8 9;
do
echo "$i$j" >"path$i$j" || return 1
done || return 1
done
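Why "|| return 1" matters here: a loop's exit status is only that of the last command it executed, so a failure in an earlier iteration vanishes unless it is propagated explicitly. A standalone sketch (the function names are illustrative, not from the test suite):

```shell
# Both loops hit a failing command at i=2, but only the second
# propagates the failure out of the loop.
swallow_failure() {
	for i in 1 2 3
	do
		test "$i" -ne 2 &&
		echo "ok $i"
	done
}
propagate_failure() {
	for i in 1 2 3
	do
		test "$i" -ne 2 &&
		echo "ok $i" || return 1
	done
}
swallow_failure ; echo "swallow exit: $?"
propagate_failure || echo "propagate exit: $?"
```

The first loop finishes with status 0 because its last iteration succeeded; the second stops at i=2 and reports status 1.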

View File

@ -6,7 +6,7 @@
) >file &&
cd foo &&
(
echo a
echo a ?!AMP?!
echo b
) >file
)

View File

@ -0,0 +1,9 @@
git init dir-rename-and-content &&
(
cd dir-rename-and-content &&
test_write_lines 1 2 3 4 5 >foo &&
mkdir olddir &&
for i in a b c; do echo $i >olddir/$i; ?!LOOP?! done ?!AMP?!
git add foo olddir &&
git commit -m "original" &&
)

View File

@ -0,0 +1,10 @@
git init dir-rename-and-content &&
(
cd dir-rename-and-content &&
test_write_lines 1 2 3 4 5 >foo &&
mkdir olddir &&
# LINT: one-liner for-loop missing "|| exit"; also broken &&-chain
for i in a b c; do echo $i >olddir/$i; done
git add foo olddir &&
git commit -m "original" &&
)

View File

@ -0,0 +1,5 @@
while test $i -lt $((num - 5))
do
git notes add -m "notes for commit$i" HEAD~$i || return 1
i=$((i + 1))
done

View File

@ -0,0 +1,6 @@
while test $i -lt $((num - 5))
do
# LINT: "|| return {n}" valid loop escape outside subshell; no "&&" needed
git notes add -m "notes for commit$i" HEAD~$i || return 1
i=$((i + 1))
done

View File

@ -15,5 +15,5 @@
) &&
(cd foo &&
for i in a b c; do
echo;
echo; ?!LOOP?!
done)

View File

@ -0,0 +1,4 @@
perl -e '
defined($_ = -s $_) or die for @ARGV;
exit 1 if $ARGV[0] <= $ARGV[1];
' test-2-$packname_2.pack test-3-$packname_3.pack

View File

@ -0,0 +1,5 @@
# LINT: SQ-string Perl code fragment within SQ-string
perl -e '\''
defined($_ = -s $_) or die for @ARGV;
exit 1 if $ARGV[0] <= $ARGV[1];
'\'' test-2-$packname_2.pack test-3-$packname_3.pack

View File

@ -1,10 +1,17 @@
(
chks="sub1sub2sub3sub4" &&
chks="sub1
sub2
sub3
sub4" &&
chks_sub=$(cat <<TXT | sed "s,^,sub dir/,"
) &&
chkms="main-sub1main-sub2main-sub3main-sub4" &&
chkms="main-sub1
main-sub2
main-sub3
main-sub4" &&
chkms_sub=$(cat <<TXT | sed "s,^,sub dir/,"
) &&
subfiles=$(git ls-files) &&
check_equal "$subfiles" "$chkms$chks"
check_equal "$subfiles" "$chkms
$chks"
)

View File

@ -0,0 +1,27 @@
git config filter.rot13.smudge ./rot13.sh &&
git config filter.rot13.clean ./rot13.sh &&
{
echo "*.t filter=rot13" ?!AMP?!
echo "*.i ident"
} > .gitattributes &&
{
echo a b c d e f g h i j k l m ?!AMP?!
echo n o p q r s t u v w x y z ?!AMP?!
echo '$Id$'
} > test &&
cat test > test.t &&
cat test > test.o &&
cat test > test.i &&
git add test test.t test.i &&
rm -f test test.t test.i &&
git checkout -- test test.t test.i &&
echo "content-test2" > test2.o &&
echo "content-test3 - filename with special characters" > "test3 'sq',$x=.o" ?!AMP?!
downstream_url_for_sed=$(
printf "%sn" "$downstream_url" |
sed -e 's/\/\\/g' -e 's/[[/.*^$]/\&/g'
)

View File

@ -0,0 +1,32 @@
# LINT: single token; composite of multiple strings
git config filter.rot13.smudge ./rot13.sh &&
git config filter.rot13.clean ./rot13.sh &&
{
echo "*.t filter=rot13"
echo "*.i ident"
} >.gitattributes &&
{
echo a b c d e f g h i j k l m
echo n o p q r s t u v w x y z
# LINT: exit/enter string context and escaped-quote outside of string
echo '\''$Id$'\''
} >test &&
cat test >test.t &&
cat test >test.o &&
cat test >test.i &&
git add test test.t test.i &&
rm -f test test.t test.i &&
git checkout -- test test.t test.i &&
echo "content-test2" >test2.o &&
# LINT: exit/enter string context and escaped-quote outside of string
echo "content-test3 - filename with special characters" >"test3 '\''sq'\'',\$x=.o"
# LINT: single token; composite of multiple strings
downstream_url_for_sed=$(
printf "%s\n" "$downstream_url" |
# LINT: exit/enter string context; "&" inside string not command terminator
sed -e '\''s/\\/\\\\/g'\'' -e '\''s/[[/.*^$]/\\&/g'\''
)

View File

@ -2,10 +2,10 @@
while true
do
echo foo ?!AMP?!
cat <<-EOF
cat <<-EOF ?!LOOP?!
done ?!AMP?!
while true; do
echo foo &&
cat bar
cat bar ?!LOOP?!
done
)

View File

@ -95,6 +95,10 @@ You can set the following variables (also in your config.mak):
Git (e.g., performance of index-pack as the number of threads
changes). These can be enabled with GIT_PERF_EXTRA.
GIT_PERF_USE_SCALAR
Boolean indicating whether to register test repo(s) with Scalar
before executing tests.
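A rough standalone approximation of how the harness consumes such a boolean knob (git's real helper is `test_bool_env`, which validates values more strictly; `bool_env` below is a simplified stand-in):

```shell
# Simplified stand-in for git's test_bool_env: look up the named
# variable, fall back to a default, and interpret boolean spellings.
bool_env() {
	eval "val=\${$1:-$2}"
	case "$val" in
	true|yes|on|1) return 0 ;;
	*) return 1 ;;
	esac
}

GIT_PERF_USE_SCALAR=true
if bool_env GIT_PERF_USE_SCALAR false
then
	echo "would run: scalar register"
fi
```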
You can also pass the options taken by ordinary git tests; the most
useful one is:

39
t/perf/p9210-scalar.sh Executable file
View File

@ -0,0 +1,39 @@
#!/bin/sh
test_description='test scalar performance'
. ./perf-lib.sh
test_perf_large_repo "$TRASH_DIRECTORY/to-clone"
test_expect_success 'enable server-side partial clone' '
	git -C to-clone config uploadpack.allowFilter true &&
	git -C to-clone config uploadpack.allowAnySHA1InWant true
'

test_perf 'scalar clone' '
	rm -rf scalar-clone &&
	scalar clone "file://$(pwd)/to-clone" scalar-clone
'

test_perf 'git clone' '
	rm -rf git-clone &&
	git clone "file://$(pwd)/to-clone" git-clone
'

test_compare_perf () {
	command=$1
	shift
	args=$*
	test_perf "$command $args (scalar)" "
		$command -C scalar-clone/src $args
	"
	test_perf "$command $args (non-scalar)" "
		$command -C git-clone $args
	"
}
test_compare_perf git status
test_compare_perf test_commit --append --no-tag A
test_done

View File

@ -49,6 +49,9 @@ export TEST_DIRECTORY TRASH_DIRECTORY GIT_BUILD_DIR GIT_TEST_CMP
MODERN_GIT=$GIT_BUILD_DIR/bin-wrappers/git
export MODERN_GIT
MODERN_SCALAR=$GIT_BUILD_DIR/bin-wrappers/scalar
export MODERN_SCALAR
perf_results_dir=$TEST_RESULTS_DIR
test -n "$GIT_PERF_SUBSECTION" && perf_results_dir="$perf_results_dir/$GIT_PERF_SUBSECTION"
mkdir -p "$perf_results_dir"
@ -120,6 +123,10 @@ test_perf_create_repo_from () {
# status" due to a locked index. Since we have
# a copy it's fine to remove the lock.
rm .git/index.lock
fi &&
if test_bool_env GIT_PERF_USE_SCALAR false
then
"$MODERN_SCALAR" register
fi
) || error "failed to copy repository '$source' to '$repo'"
}
@ -130,7 +137,11 @@ test_perf_fresh_repo () {
"$MODERN_GIT" init -q "$repo" &&
(
cd "$repo" &&
test_perf_do_repo_symlink_config_
test_perf_do_repo_symlink_config_ &&
if test_bool_env GIT_PERF_USE_SCALAR false
then
"$MODERN_SCALAR" register
fi
)
}

View File

@ -171,6 +171,9 @@ run_subsection () {
get_var_from_env_or_config "GIT_PERF_MAKE_COMMAND" "perf" "makeCommand"
get_var_from_env_or_config "GIT_PERF_MAKE_OPTS" "perf" "makeOpts"
get_var_from_env_or_config "GIT_PERF_USE_SCALAR" "perf" "useScalar" "--bool"
export GIT_PERF_USE_SCALAR
get_var_from_env_or_config "GIT_PERF_REPO_NAME" "perf" "repoName"
export GIT_PERF_REPO_NAME

View File

@ -387,9 +387,7 @@ test_expect_success 'setup main' '
test_tick
'
# Disable extra chain-linting for the next set of tests. There are many
# auto-generated ones that are not worth checking over and over.
GIT_TEST_CHAIN_LINT_HARDER_DEFAULT=0
warn_LF_CRLF="LF will be replaced by CRLF"
warn_CRLF_LF="CRLF will be replaced by LF"
@ -606,9 +604,6 @@ do
checkout_files "" "$id" "crlf" true "" CRLF CRLF CRLF CRLF_mix_CR CRLF_nul
done
# The rest of the tests are unique; do the usual linting.
unset GIT_TEST_CHAIN_LINT_HARDER_DEFAULT
# Should be the last test case: remove some files from the worktree
test_expect_success 'ls-files --eol -d -z' '
rm crlf_false_attr__CRLF.txt crlf_false_attr__CRLF_mix_LF.txt crlf_false_attr__LF.txt .gitattributes &&

Some files were not shown because too many files have changed in this diff.