git-commit-vandalism/builtin-rm.c

272 lines · 7.0 KiB · C

Add builtin "git rm" command  (2006-05-20 01:19:34 +02:00)

This changes semantics very subtly, because it adds a new atomicity
guarantee.

In particular, if you "git rm" several files, it will now do all or
nothing. The old shell-script really looped over the removed files one
by one, and would basically randomly fail in the middle if "-f" was
used and one of the files didn't exist in the working directory.

This C builtin one will not re-write the index after each remove, but
instead remove all files at once. However, that means that if "-f" is
used (to also force removal of the file from the working directory),
and some files have already been removed from the workspace, it won't
stop in the middle in some half-way state like the old one did.

So what happens is that if the _first_ file fails to be removed with
"-f", we abort the whole "git rm". But once we've started removing, we
don't leave anything half done. If some of the other files don't exist,
we'll just ignore errors of removal from the working tree. This is only
an issue with "-f", of course.

I think the new behaviour is strictly an improvement, but perhaps more
importantly, it is _different_. As a special case, the semantics are
identical for the single-file case (which is the only one our
test-suite seems to test).

The other question is what to do with leading directories. The old
"git rm" script didn't do anything, which is somewhat inconsistent.
This one will actually clean up directories that have become empty as a
result of removing the last file, but maybe we want to have a flag to
decide the behaviour?

Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>

git-rm: update to saner semantics  (2006-12-25 12:11:00 +01:00)

This updates the "git rm" command with saner semantics suggested on the
list earlier with:

    Message-ID: <Pine.LNX.4.64.0612020919400.3476@woody.osdl.org>
    Message-ID: <Pine.LNX.4.64.0612040737120.3476@woody.osdl.org>

The command still validates that the given paths all talk about
sensible paths to avoid mistakes (e.g. "git rm fiel" when file "fiel"
does not exist would error out -- the user meant to remove "file"), and
it has further safety checks described next.

The biggest difference is that the paths are removed from both the
index and the working tree (if you have an exotic need to remove paths
only from the index, you can use the --cached option).

The command refuses to remove if the copy on the working tree does not
match the index, or if the index and the HEAD do not match. You can
defeat this check with the -f option.

This safety check has two exceptions: if the working tree file does not
exist to begin with, that technically does not match the index but it
is allowed. This is to allow this CVS-style command sequence:

    rm <path> && git rm <path>

Also, if the index is unmerged at the <path>, you can use "git rm
<path>" to declare that the result of the merge loses that path, and
the above safety check does not trigger; requiring the file to match
the index in this case forces the user to do "git update-index file &&
git rm file", which is just crazy.

To recursively remove all contents from a directory, you need to pass
the -r option, not just the directory name as the <path>.

Signed-off-by: Junio C Hamano <junkio@cox.net>
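
The atomicity described above is essentially a collect-then-commit
pattern: every removal is staged in memory first, and the single
irreversible step happens only after the whole batch has been accepted.
A minimal standalone sketch of that shape, not taken from git itself
(stage_removal() and commit_removals() are invented names used purely
for illustration):

#include <stdio.h>

#define MAX_PATHS 16

static const char *staged[MAX_PATHS];
static int nstaged;

/* Stage a path; refuse (and change nothing) if the batch cannot grow. */
static int stage_removal(const char *path)
{
        if (nstaged >= MAX_PATHS)
                return -1;
        staged[nstaged++] = path;
        return 0;
}

/* The single commit point for the whole batch. */
static void commit_removals(void)
{
        int i;
        for (i = 0; i < nstaged; i++)
                printf("rm '%s'\n", staged[i]);
}

int main(void)
{
        static const char *paths[] = { "a.c", "b.c", "c.c" };
        int i;

        for (i = 0; i < 3; i++)
                if (stage_removal(paths[i]) < 0)
                        return 1;       /* abort: nothing committed yet */
        commit_removals();              /* all or nothing */
        return 0;
}

The builtin below follows the same order: it validates everything and
updates only the in-core index before anything irreversible happens.
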
/*
 * "git rm" builtin command
 *
 * Copyright (C) Linus Torvalds 2006
 */
#include "cache.h"
#include "builtin.h"
#include "dir.h"
#include "cache-tree.h"
#include "tree-walk.h"
#include "parse-options.h"

static const char * const builtin_rm_usage[] = {
        "git rm [options] [--] <file>...",
        NULL
};
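
/*
 * Paths collected for removal: a simple growable array whose entries
 * point at names owned by the index.  Everything is gathered here
 * first so the index can later be updated in one all-or-nothing step.
 */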
static struct {
        int nr, alloc;
        const char **name;
} list;

static void add_list(const char *name)
{
        if (list.nr >= list.alloc) {
                list.alloc = alloc_nr(list.alloc);
                list.name = xrealloc(list.name, list.alloc * sizeof(const char *));
        }
        list.name[list.nr++] = name;
}
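
/*
 * Check every collected path and report (via error()) the ones whose
 * removal would lose information: a working tree file that differs
 * from the index, and/or index content that differs from HEAD.  With
 * index_only (--cached), only the case where both differ is an error,
 * except for intent-to-add entries.  "head" is the SHA-1 of the HEAD
 * commit (the null SHA-1 before the first commit).  Returns non-zero
 * if any path was flagged.
 */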
static int check_local_mod(unsigned char *head, int index_only)
{
        /*
         * Items in list are already sorted in the cache order,
         * so we could do this a lot more efficiently by using
         * tree_desc based traversal if we wanted to, but I am
         * lazy, and who cares if removal of files is a tad
         * slower than the theoretical maximum speed?
         */
        int i, no_head;
        int errs = 0;

        no_head = is_null_sha1(head);
        for (i = 0; i < list.nr; i++) {
                struct stat st;
                int pos;
                struct cache_entry *ce;
                const char *name = list.name[i];
                unsigned char sha1[20];
                unsigned mode;
                int local_changes = 0;
                int staged_changes = 0;

                pos = cache_name_pos(name, strlen(name));
                if (pos < 0)
                        continue; /* removing unmerged entry */
                ce = active_cache[pos];

                if (lstat(ce->name, &st) < 0) {
                        if (errno != ENOENT)
                                warning("'%s': %s", ce->name, strerror(errno));
                        /* It already vanished from the working tree */
                        continue;
                }
                else if (S_ISDIR(st.st_mode)) {
                        /* if a file was removed and it is now a
                         * directory, that is the same as ENOENT as
                         * far as git is concerned; we do not track
                         * directories.
                         */
                        continue;
                }

                /*
                 * "rm" of a path that has changes needs to be treated
                 * carefully not to allow losing local changes
                 * accidentally. A local change could be (1) the file in
                 * the work tree is different from the index; and/or (2)
                 * the user staged content that is different from
                 * the current commit in the index.
                 *
                 * In such a case, you would need to --force the
                 * removal. However, "rm --cached" (remove only from
                 * the index) is safe if the index matches the file in
                 * the work tree or the HEAD commit, as it means that
                 * the content being removed is available elsewhere.
                 */

                /*
                 * Is the index different from the file in the work tree?
                 */
                if (ce_match_stat(ce, &st, 0))
                        local_changes = 1;

                /*
                 * Is the index different from the HEAD commit? By
                 * definition, before the very initial commit,
                 * anything staged in the index is treated the same
                 * way as changed from the HEAD.
                 */
                if (no_head
                     || get_tree_entry(head, name, sha1, &mode)
                     || ce->ce_mode != create_ce_mode(mode)
                     || hashcmp(ce->sha1, sha1))
                        staged_changes = 1;

                /*
                 * If the index does not match the file in the work
                 * tree and if it does not match the HEAD commit
                 * either, (1) "git rm" without --cached definitely
                 * will lose information; (2) "git rm --cached" will
                 * lose information unless it is about removing an
                 * "intent to add" entry.
                 */
                if (local_changes && staged_changes) {
                        if (!index_only || !(ce->ce_flags & CE_INTENT_TO_ADD))
                                errs = error("'%s' has staged content different "
                                        "from both the file and the HEAD\n"
                                        "(use -f to force removal)", name);
                }
                else if (!index_only) {
                        if (staged_changes)
                                errs = error("'%s' has changes staged in the index\n"
                                        "(use --cached to keep the file, "
                                        "or -f to force removal)", name);
                        if (local_changes)
                                errs = error("'%s' has local modifications\n"
                                        "(use --cached to keep the file, "
                                        "or -f to force removal)", name);
                }
        }
        return errs;
}
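
/*
 * Lock on the index file, taken in cmd_rm(); the updated index is
 * written under this lock, so a failure leaves the original intact.
 */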
static struct lock_file lock_file;

static int show_only = 0, force = 0, index_only = 0, recursive = 0, quiet = 0;
static int ignore_unmatch = 0;

static struct option builtin_rm_options[] = {
        OPT__DRY_RUN(&show_only),
        OPT__QUIET(&quiet),
        OPT_BOOLEAN( 0 , "cached", &index_only, "only remove from the index"),
        OPT_BOOLEAN('f', "force", &force, "override the up-to-date check"),
        OPT_BOOLEAN('r', NULL, &recursive, "allow recursive removal"),
        OPT_BOOLEAN( 0 , "ignore-unmatch", &ignore_unmatch,
                "exit with a zero status even if nothing matched"),
        OPT_END(),
};
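
/*
 * Overall flow: collect the index entries matching the pathspecs,
 * refuse removals that would lose data unless forced, drop the entries
 * from the in-core index, and only then touch the working tree (unless
 * --cached) and write the index back in one go.
 */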
int cmd_rm(int argc, const char **argv, const char *prefix)
{
        int i, newfd;
        const char **pathspec;
        char *seen;

        git_config(git_default_config, NULL);

        argc = parse_options(argc, argv, prefix, builtin_rm_options,
                             builtin_rm_usage, 0);
        if (!argc)
                usage_with_options(builtin_rm_usage, builtin_rm_options);

        if (!index_only)
                setup_work_tree();

        newfd = hold_locked_index(&lock_file, 1);

        if (read_cache() < 0)
                die("index file corrupt");
        refresh_cache(REFRESH_QUIET);

        pathspec = get_pathspec(prefix, argv);
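        /*
         * seen[i] records how pathspec[i] matched (not at all, exactly,
         * or only recursively); it drives the unmatched-pathspec and
         * missing -r diagnostics below.
         */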
        seen = NULL;
        for (i = 0; pathspec[i] ; i++)
                /* nothing */;
        seen = xcalloc(i, 1);

        for (i = 0; i < active_nr; i++) {
                struct cache_entry *ce = active_cache[i];
                if (!match_pathspec(pathspec, ce->name, ce_namelen(ce), 0, seen))
                        continue;
                add_list(ce->name);
        }
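
        /*
         * Diagnose pathspec problems before touching anything: a
         * pathspec that matched nothing is fatal unless
         * --ignore-unmatch was given, and one that matched only
         * recursively requires -r.
         */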
        if (pathspec) {
                const char *match;
                int seen_any = 0;
                for (i = 0; (match = pathspec[i]) != NULL ; i++) {
                        if (!seen[i]) {
                                if (!ignore_unmatch) {
                                        die("pathspec '%s' did not match any files",
                                            match);
                                }
                        }
                        else {
                                seen_any = 1;
                        }
                        if (!recursive && seen[i] == MATCHED_RECURSIVELY)
                                die("not removing '%s' recursively without -r",
                                    *match ? match : ".");
                }

                if (!seen_any)
                        exit(0);
        }

        /*
         * If not forced, the file, the index and the HEAD (if it exists)
         * must match; but the file can already have been removed, since
         * this sequence is a natural "novice" way:
         *
         *         rm F; git rm F
         *
         * Further, if the HEAD commit exists, "diff-index --cached" must
         * report no changes unless forced.
         */
        if (!force) {
                unsigned char sha1[20];
                if (get_sha1("HEAD", sha1))
                        hashclr(sha1);
                if (check_local_mod(sha1, index_only))
                        exit(1);
        }

        /*
         * First remove the names from the index: we won't commit
         * the index unless all of them succeed.
         */
        for (i = 0; i < list.nr; i++) {
                const char *path = list.name[i];
                if (!quiet)
                        printf("rm '%s'\n", path);

                if (remove_file_from_cache(path))
                        die("git rm: unable to remove %s", path);
        }

        if (show_only)
                return 0;

        /*
         * Then, unless we used "--cached", remove the filenames from
         * the workspace. If we fail to remove the first one, we
* abort the "git rm" (but once we've successfully removed
* any file at all, we'll go ahead and commit to it all:
* by then we've already committed ourselves and can't fail
* in the middle)
*/
if (!index_only) {
int removed = 0;
for (i = 0; i < list.nr; i++) {
const char *path = list.name[i];
if (!remove_path(path)) {
removed = 1;
continue;
}
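			/*
			 * remove_path() reported a failure.  That is fatal
			 * only while nothing has been removed yet; once the
			 * first path is gone we are committed and ignore
			 * later working-tree errors.
			 */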
if (!removed)
die("git rm: %s: %s", path, strerror(errno));
}
}
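	/*
	 * The index is rewritten in one step: the new contents go to the
	 * lock file first and are only then committed into place, so
	 * either every requested removal lands in the index or none does.
	 */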
if (active_cache_changed) {
if (write_cache(newfd, active_cache, active_nr) ||
commit_locked_index(&lock_file))
die("Unable to write new index file");
}
return 0;
}
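/*
 * Illustrative use (hypothetical file names, not part of the original
 * source):
 *
 *   $ git rm -f a.c b.c      # drop both paths from index and worktree
 *   $ rm c.c && git rm c.c   # an already-deleted worktree copy is fine
 *
 * With --cached only the index entries are removed and the working-tree
 * files are left untouched.
 */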