git-commit-vandalism/fsck.c

#include "cache.h"
#include "object.h"
#include "blob.h"
#include "tree.h"
#include "tree-walk.h"
#include "commit.h"
#include "tag.h"
#include "fsck.h"
#include "refs.h"
#include "utf8.h"
#include "sha1-array.h"
#include "decorate.h"
#include "oidset.h"
#include "packfile.h"
#include "submodule-config.h"
#include "config.h"
static struct oidset gitmodules_found = OIDSET_INIT;
static struct oidset gitmodules_done = OIDSET_INIT;
#define FSCK_FATAL -1
#define FSCK_INFO -2
#define FOREACH_MSG_ID(FUNC) \
/* fatal errors */ \
FUNC(NUL_IN_HEADER, FATAL) \
FUNC(UNTERMINATED_HEADER, FATAL) \
/* errors */ \
FUNC(BAD_DATE, ERROR) \
FUNC(BAD_DATE_OVERFLOW, ERROR) \
FUNC(BAD_EMAIL, ERROR) \
FUNC(BAD_NAME, ERROR) \
FUNC(BAD_OBJECT_SHA1, ERROR) \
FUNC(BAD_PARENT_SHA1, ERROR) \
FUNC(BAD_TAG_OBJECT, ERROR) \
FUNC(BAD_TIMEZONE, ERROR) \
FUNC(BAD_TREE, ERROR) \
FUNC(BAD_TREE_SHA1, ERROR) \
FUNC(BAD_TYPE, ERROR) \
FUNC(DUPLICATE_ENTRIES, ERROR) \
FUNC(MISSING_AUTHOR, ERROR) \
FUNC(MISSING_COMMITTER, ERROR) \
FUNC(MISSING_EMAIL, ERROR) \
FUNC(MISSING_GRAFT, ERROR) \
FUNC(MISSING_NAME_BEFORE_EMAIL, ERROR) \
FUNC(MISSING_OBJECT, ERROR) \
FUNC(MISSING_PARENT, ERROR) \
FUNC(MISSING_SPACE_BEFORE_DATE, ERROR) \
FUNC(MISSING_SPACE_BEFORE_EMAIL, ERROR) \
FUNC(MISSING_TAG, ERROR) \
FUNC(MISSING_TAG_ENTRY, ERROR) \
FUNC(MISSING_TAG_OBJECT, ERROR) \
FUNC(MISSING_TREE, ERROR) \
FUNC(MISSING_TREE_OBJECT, ERROR) \
FUNC(MISSING_TYPE, ERROR) \
FUNC(MISSING_TYPE_ENTRY, ERROR) \
FUNC(MULTIPLE_AUTHORS, ERROR) \
FUNC(TAG_OBJECT_NOT_TAG, ERROR) \
FUNC(TREE_NOT_SORTED, ERROR) \
FUNC(UNKNOWN_TYPE, ERROR) \
FUNC(ZERO_PADDED_DATE, ERROR) \
FUNC(GITMODULES_MISSING, ERROR) \
FUNC(GITMODULES_BLOB, ERROR) \
FUNC(GITMODULES_PARSE, ERROR) \
FUNC(GITMODULES_NAME, ERROR) \
FUNC(GITMODULES_SYMLINK, ERROR) \
/* warnings */ \
FUNC(BAD_FILEMODE, WARN) \
FUNC(EMPTY_NAME, WARN) \
FUNC(FULL_PATHNAME, WARN) \
FUNC(HAS_DOT, WARN) \
FUNC(HAS_DOTDOT, WARN) \
FUNC(HAS_DOTGIT, WARN) \
FUNC(NULL_SHA1, WARN) \
FUNC(ZERO_PADDED_FILEMODE, WARN) \
FUNC(NUL_IN_COMMIT, WARN) \
/* infos (reported as warnings, but ignored by default) */ \
FUNC(BAD_TAG_NAME, INFO) \
FUNC(MISSING_TAGGER_ENTRY, INFO)
#define MSG_ID(id, msg_type) FSCK_MSG_##id,
enum fsck_msg_id {
FOREACH_MSG_ID(MSG_ID)
FSCK_MSG_MAX
};
#undef MSG_ID
#define STR(x) #x
#define MSG_ID(id, msg_type) { STR(id), NULL, FSCK_##msg_type },
static struct {
const char *id_string;
const char *downcased;
int msg_type;
} msg_id_info[FSCK_MSG_MAX + 1] = {
FOREACH_MSG_ID(MSG_ID)
{ NULL, NULL, -1 }
};
#undef MSG_ID
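/*
 * Map a user-supplied message id back to its enum value.  Ids are
 * matched against the lowercased name with underscores removed, so
 * e.g. "nulinheader" selects FSCK_MSG_NUL_IN_HEADER.
 */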
static int parse_msg_id(const char *text)
{
int i;
if (!msg_id_info[0].downcased) {
/* convert id_string to lower case, without underscores. */
for (i = 0; i < FSCK_MSG_MAX; i++) {
const char *p = msg_id_info[i].id_string;
int len = strlen(p);
char *q = xmalloc(len);
msg_id_info[i].downcased = q;
while (*p)
if (*p == '_')
p++;
else
*(q)++ = tolower(*(p)++);
*q = '\0';
}
}
for (i = 0; i < FSCK_MSG_MAX; i++)
if (!strcmp(text, msg_id_info[i].downcased))
return i;
return -1;
}
static int fsck_msg_type(enum fsck_msg_id msg_id,
struct fsck_options *options)
{
int msg_type;
assert(msg_id >= 0 && msg_id < FSCK_MSG_MAX);
if (options->msg_type)
msg_type = options->msg_type[msg_id];
else {
msg_type = msg_id_info[msg_id].msg_type;
if (options->strict && msg_type == FSCK_WARN)
msg_type = FSCK_ERROR;
}
return msg_type;
}
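/*
 * Read newline-terminated hex object names from <path> into a shared
 * oid_array that report() consults to suppress complaints about
 * listed objects.  The array is only marked sorted if the input was
 * already in sorted order.
 */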
static void init_skiplist(struct fsck_options *options, const char *path)
{
static struct oid_array skiplist = OID_ARRAY_INIT;
int sorted, fd;
char buffer[GIT_MAX_HEXSZ + 1];
struct object_id oid;
if (options->skiplist)
sorted = options->skiplist->sorted;
else {
sorted = 1;
options->skiplist = &skiplist;
}
fd = open(path, O_RDONLY);
if (fd < 0)
die("Could not open skip list: %s", path);
for (;;) {
const char *p;
int result = read_in_full(fd, buffer, sizeof(buffer));
if (result < 0)
die_errno("Could not read '%s'", path);
if (!result)
break;
if (parse_oid_hex(buffer, &oid, &p) || *p != '\n')
die("Invalid SHA-1: %s", buffer);
oid_array_append(&skiplist, &oid);
if (sorted && skiplist.nr > 1 &&
oidcmp(&skiplist.oid[skiplist.nr - 2],
&oid) > 0)
sorted = 0;
}
close(fd);
if (sorted)
skiplist.sorted = 1;
}
static int parse_msg_type(const char *str)
{
if (!strcmp(str, "error"))
return FSCK_ERROR;
else if (!strcmp(str, "warn"))
return FSCK_WARN;
else if (!strcmp(str, "ignore"))
return FSCK_IGNORE;
else
die("Unknown fsck message type: '%s'", str);
}
int is_valid_msg_type(const char *msg_id, const char *msg_type)
{
if (parse_msg_id(msg_id) < 0)
return 0;
parse_msg_type(msg_type);
return 1;
}
void fsck_set_msg_type(struct fsck_options *options,
const char *msg_id, const char *msg_type)
{
int id = parse_msg_id(msg_id), type;
if (id < 0)
die("Unhandled message id: %s", msg_id);
type = parse_msg_type(msg_type);
if (type != FSCK_ERROR && msg_id_info[id].msg_type == FSCK_FATAL)
die("Cannot demote %s to %s", msg_id, msg_type);
if (!options->msg_type) {
int i;
int *msg_type;
ALLOC_ARRAY(msg_type, FSCK_MSG_MAX);
for (i = 0; i < FSCK_MSG_MAX; i++)
msg_type[i] = fsck_msg_type(i, options);
options->msg_type = msg_type;
}
options->msg_type[id] = type;
}
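/*
 * Parse a space-, comma- or '|'-separated list of settings, e.g.
 *
 *   missingemail=ignore badtagname=error skiplist=<path>
 *
 * Each <id>=<type> (or <id>:<type>) pair is applied with
 * fsck_set_msg_type(); "skiplist" takes a path instead and is loaded
 * with init_skiplist().
 */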
void fsck_set_msg_types(struct fsck_options *options, const char *values)
{
char *buf = xstrdup(values), *to_free = buf;
int done = 0;
while (!done) {
int len = strcspn(buf, " ,|"), equal;
done = !buf[len];
if (!len) {
buf++;
continue;
}
buf[len] = '\0';
for (equal = 0;
equal < len && buf[equal] != '=' && buf[equal] != ':';
equal++)
buf[equal] = tolower(buf[equal]);
buf[equal] = '\0';
if (!strcmp(buf, "skiplist")) {
if (equal == len)
die("skiplist requires a path");
init_skiplist(options, buf + equal + 1);
buf += len + 1;
continue;
}
if (equal == len)
die("Missing '=': '%s'", buf);
fsck_set_msg_type(options, buf, buf + equal + 1);
buf += len + 1;
}
free(to_free);
}
static void append_msg_id(struct strbuf *sb, const char *msg_id)
{
for (;;) {
char c = *(msg_id)++;
if (!c)
break;
if (c != '_')
strbuf_addch(sb, tolower(c));
else {
assert(*msg_id);
strbuf_addch(sb, *(msg_id)++);
}
}
strbuf_addstr(sb, ": ");
}
__attribute__((format (printf, 4, 5)))
static int report(struct fsck_options *options, struct object *object,
enum fsck_msg_id id, const char *fmt, ...)
{
va_list ap;
struct strbuf sb = STRBUF_INIT;
int msg_type = fsck_msg_type(id, options), result;
if (msg_type == FSCK_IGNORE)
return 0;
if (options->skiplist && object &&
oid_array_lookup(options->skiplist, &object->oid) >= 0)
return 0;
if (msg_type == FSCK_FATAL)
msg_type = FSCK_ERROR;
else if (msg_type == FSCK_INFO)
msg_type = FSCK_WARN;
append_msg_id(&sb, msg_id_info[id].id_string);
va_start(ap, fmt);
strbuf_vaddf(&sb, fmt, ap);
result = options->error_func(options, object, msg_type, sb.buf);
strbuf_release(&sb);
va_end(ap);
return result;
}
static char *get_object_name(struct fsck_options *options, struct object *obj)
{
if (!options->object_names)
return NULL;
return lookup_decoration(options->object_names, obj);
}
static void put_object_name(struct fsck_options *options, struct object *obj,
const char *fmt, ...)
{
va_list ap;
struct strbuf buf = STRBUF_INIT;
char *existing;
if (!options->object_names)
return;
existing = lookup_decoration(options->object_names, obj);
if (existing)
return;
va_start(ap, fmt);
strbuf_vaddf(&buf, fmt, ap);
add_decoration(options->object_names, obj, strbuf_detach(&buf, NULL));
va_end(ap);
}
static const char *describe_object(struct fsck_options *o, struct object *obj)
{
static struct strbuf buf = STRBUF_INIT;
char *name;
strbuf_reset(&buf);
strbuf_addstr(&buf, oid_to_hex(&obj->oid));
if (o->object_names && (name = lookup_decoration(o->object_names, obj)))
strbuf_addf(&buf, " (%s)", name);
return buf.buf;
}
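/*
 * Walk a tree's entries, recording human-readable names for the
 * referenced objects when object names are being tracked (subtrees
 * get "<name><path>/"), and handing each entry to the caller's walk
 * callback.  Gitlink entries are skipped.
 */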
static int fsck_walk_tree(struct tree *tree, void *data, struct fsck_options *options)
{
struct tree_desc desc;
struct name_entry entry;
int res = 0;
const char *name;
if (parse_tree(tree))
return -1;
name = get_object_name(options, &tree->object);
if (init_tree_desc_gently(&desc, tree->buffer, tree->size))
return -1;
while (tree_entry_gently(&desc, &entry)) {
struct object *obj;
int result;
if (S_ISGITLINK(entry.mode))
continue;
if (S_ISDIR(entry.mode)) {
obj = (struct object *)lookup_tree(entry.oid);
if (name && obj)
put_object_name(options, obj, "%s%s/", name,
entry.path);
result = options->walk(obj, OBJ_TREE, data, options);
}
else if (S_ISREG(entry.mode) || S_ISLNK(entry.mode)) {
obj = (struct object *)lookup_blob(entry.oid);
if (name && obj)
put_object_name(options, obj, "%s%s", name,
entry.path);
result = options->walk(obj, OBJ_BLOB, data, options);
}
else {
result = error("in tree %s: entry %s has bad mode %.6o",
describe_object(options, &tree->object), entry.path, entry.mode);
}
if (result < 0)
return result;
if (!res)
res = result;
}
return res;
}
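/*
 * Walk a commit's tree and parents.  Parent names are derived from
 * the commit's own name: the first parent gets "<name>^" (or, if the
 * name already ends in ^ or ~<n>, the generation is extended to
 * ~<n+1>); additional parents get "<name>^2", "<name>^3", and so on.
 */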
static int fsck_walk_commit(struct commit *commit, void *data, struct fsck_options *options)
{
int counter = 0, generation = 0, name_prefix_len = 0;
struct commit_list *parents;
int res;
int result;
const char *name;
if (parse_commit(commit))
return -1;
name = get_object_name(options, &commit->object);
if (name)
put_object_name(options, &get_commit_tree(commit)->object,
"%s:", name);
result = options->walk((struct object *)get_commit_tree(commit),
OBJ_TREE, data, options);
if (result < 0)
return result;
res = result;
parents = commit->parents;
if (name && parents) {
int len = strlen(name), power;
if (len && name[len - 1] == '^') {
generation = 1;
name_prefix_len = len - 1;
}
else { /* parse ~<generation> suffix */
for (generation = 0, power = 1;
len && isdigit(name[len - 1]);
power *= 10)
generation += power * (name[--len] - '0');
if (power > 1 && len && name[len - 1] == '~')
name_prefix_len = len - 1;
}
}
while (parents) {
if (name) {
struct object *obj = &parents->item->object;
if (++counter > 1)
put_object_name(options, obj, "%s^%d",
name, counter);
else if (generation > 0)
put_object_name(options, obj, "%.*s~%d",
name_prefix_len, name, generation + 1);
else
put_object_name(options, obj, "%s^", name);
}
result = options->walk((struct object *)parents->item, OBJ_COMMIT, data, options);
if (result < 0)
return result;
if (!res)
res = result;
parents = parents->next;
}
return res;
}
static int fsck_walk_tag(struct tag *tag, void *data, struct fsck_options *options)
{
char *name = get_object_name(options, &tag->object);
if (parse_tag(tag))
return -1;
if (name)
put_object_name(options, tag->tagged, "%s", name);
return options->walk(tag->tagged, OBJ_ANY, data, options);
}
int fsck_walk(struct object *obj, void *data, struct fsck_options *options)
{
if (!obj)
return -1;
if (obj->type == OBJ_NONE)
parse_object(&obj->oid);
switch (obj->type) {
case OBJ_BLOB:
return 0;
case OBJ_TREE:
return fsck_walk_tree((struct tree *)obj, data, options);
case OBJ_COMMIT:
return fsck_walk_commit((struct commit *)obj, data, options);
case OBJ_TAG:
return fsck_walk_tag((struct tag *)obj, data, options);
default:
error("Unknown object type for %s", describe_object(options, obj));
return -1;
}
}
/*
* The entries in a tree are ordered in the _path_ order,
* which means that a directory entry is ordered by adding
* a slash to the end of it.
*
* So a directory called "a" is ordered _after_ a file
* called "a.c", because "a/" sorts after "a.c".
*/
#define TREE_UNORDERED (-1)
#define TREE_HAS_DUPS (-2)
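/*
 * For example, a file "a.c" followed by a directory "a" is properly
 * ordered ("a.c" sorts before "a/"), while two entries with identical
 * names (such as a blob and a tree both called "foo") are reported as
 * duplicates.
 */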
static int verify_ordered(unsigned mode1, const char *name1, unsigned mode2, const char *name2)
{
int len1 = strlen(name1);
int len2 = strlen(name2);
int len = len1 < len2 ? len1 : len2;
unsigned char c1, c2;
int cmp;
cmp = memcmp(name1, name2, len);
if (cmp < 0)
return 0;
if (cmp > 0)
return TREE_UNORDERED;
/*
* Ok, the first <len> characters are the same.
* Now we need to order the next one, but turn
* a '\0' into a '/' for a directory entry.
*/
c1 = name1[len];
c2 = name2[len];
if (!c1 && !c2)
/*
* git-write-tree used to write out a nonsense tree that has
* entries with the same name, one blob and one tree. Make
* sure we do not have duplicate entries.
*/
return TREE_HAS_DUPS;
if (!c1 && S_ISDIR(mode1))
c1 = '/';
if (!c2 && S_ISDIR(mode2))
c2 = '/';
return c1 < c2 ? 0 : TREE_UNORDERED;
}
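/*
 * Scan all entries of a tree in one pass, collecting a flag per
 * problem class, then emit at most one report per class for the
 * whole tree.
 */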
static int fsck_tree(struct tree *item, struct fsck_options *options)
{
int retval = 0;
int has_null_sha1 = 0;
int has_full_path = 0;
int has_empty_name = 0;
int has_dot = 0;
int has_dotdot = 0;
int has_dotgit = 0;
int has_zero_pad = 0;
int has_bad_modes = 0;
int has_dup_entries = 0;
int not_properly_sorted = 0;
struct tree_desc desc;
unsigned o_mode;
const char *o_name;
if (init_tree_desc_gently(&desc, item->buffer, item->size)) {
retval += report(options, &item->object, FSCK_MSG_BAD_TREE, "cannot be parsed as a tree");
return retval;
}
o_mode = 0;
o_name = NULL;
while (desc.size) {
unsigned mode;
const char *name;
const struct object_id *oid;
oid = tree_entry_extract(&desc, &name, &mode);
has_null_sha1 |= is_null_oid(oid);
has_full_path |= !!strchr(name, '/');
has_empty_name |= !*name;
has_dot |= !strcmp(name, ".");
has_dotdot |= !strcmp(name, "..");
has_dotgit |= is_hfs_dotgit(name) || is_ntfs_dotgit(name);
has_zero_pad |= *(char *)desc.buffer == '0';
if (is_hfs_dotgitmodules(name) || is_ntfs_dotgitmodules(name)) {
if (!S_ISLNK(mode))
oidset_insert(&gitmodules_found, oid);
else
retval += report(options, &item->object,
FSCK_MSG_GITMODULES_SYMLINK,
".gitmodules is a symbolic link");
}
if (update_tree_entry_gently(&desc)) {
retval += report(options, &item->object, FSCK_MSG_BAD_TREE, "cannot be parsed as a tree");
break;
}
switch (mode) {
/*
* Standard modes..
*/
case S_IFREG | 0755:
case S_IFREG | 0644:
case S_IFLNK:
case S_IFDIR:
case S_IFGITLINK:
break;
/*
* This is nonstandard, but we had a few of these
* early on when we honored the full set of mode
* bits..
*/
case S_IFREG | 0664:
if (!options->strict)
break;
/* fallthrough */
default:
has_bad_modes = 1;
}
if (o_name) {
switch (verify_ordered(o_mode, o_name, mode, name)) {
case TREE_UNORDERED:
not_properly_sorted = 1;
break;
case TREE_HAS_DUPS:
has_dup_entries = 1;
break;
default:
break;
}
}
o_mode = mode;
o_name = name;
}
if (has_null_sha1)
retval += report(options, &item->object, FSCK_MSG_NULL_SHA1, "contains entries pointing to null sha1");
if (has_full_path)
retval += report(options, &item->object, FSCK_MSG_FULL_PATHNAME, "contains full pathnames");
if (has_empty_name)
retval += report(options, &item->object, FSCK_MSG_EMPTY_NAME, "contains empty pathname");
if (has_dot)
retval += report(options, &item->object, FSCK_MSG_HAS_DOT, "contains '.'");
if (has_dotdot)
retval += report(options, &item->object, FSCK_MSG_HAS_DOTDOT, "contains '..'");
if (has_dotgit)
retval += report(options, &item->object, FSCK_MSG_HAS_DOTGIT, "contains '.git'");
if (has_zero_pad)
retval += report(options, &item->object, FSCK_MSG_ZERO_PADDED_FILEMODE, "contains zero-padded file modes");
if (has_bad_modes)
retval += report(options, &item->object, FSCK_MSG_BAD_FILEMODE, "contains bad file modes");
if (has_dup_entries)
retval += report(options, &item->object, FSCK_MSG_DUPLICATE_ENTRIES, "contains duplicate file entries");
if (not_properly_sorted)
retval += report(options, &item->object, FSCK_MSG_TREE_NOT_SORTED, "not properly sorted");
return retval;
}
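/*
 * Check that the header part of a commit or tag is well-formed: no
 * NUL bytes, and either a blank line separating header from body or,
 * for bodyless objects, a final header line terminated by LF.
 */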
static int verify_headers(const void *data, unsigned long size,
struct object *obj, struct fsck_options *options)
{
const char *buffer = (const char *)data;
unsigned long i;
for (i = 0; i < size; i++) {
switch (buffer[i]) {
case '\0':
return report(options, obj,
FSCK_MSG_NUL_IN_HEADER,
"unterminated header: NUL at offset %ld", i);
case '\n':
if (i + 1 < size && buffer[i + 1] == '\n')
return 0;
}
}
/*
* We did not find double-LF that separates the header
* and the body. Not having a body is not a crime but
* we do want to see the terminating LF for the last header
* line.
*/
if (size && buffer[size - 1] == '\n')
return 0;
return report(options, obj,
FSCK_MSG_UNTERMINATED_HEADER, "unterminated header");
}
static int fsck_ident(const char **ident, struct object *obj, struct fsck_options *options)
{
const char *p = *ident;
char *end;
*ident = strchrnul(*ident, '\n');
if (**ident == '\n')
(*ident)++;
if (*p == '<')
return report(options, obj, FSCK_MSG_MISSING_NAME_BEFORE_EMAIL, "invalid author/committer line - missing space before email");
p += strcspn(p, "<>\n");
if (*p == '>')
return report(options, obj, FSCK_MSG_BAD_NAME, "invalid author/committer line - bad name");
if (*p != '<')
return report(options, obj, FSCK_MSG_MISSING_EMAIL, "invalid author/committer line - missing email");
if (p[-1] != ' ')
return report(options, obj, FSCK_MSG_MISSING_SPACE_BEFORE_EMAIL, "invalid author/committer line - missing space before email");
p++;
p += strcspn(p, "<>\n");
if (*p != '>')
return report(options, obj, FSCK_MSG_BAD_EMAIL, "invalid author/committer line - bad email");
p++;
if (*p != ' ')
return report(options, obj, FSCK_MSG_MISSING_SPACE_BEFORE_DATE, "invalid author/committer line - missing space before date");
p++;
if (*p == '0' && p[1] != ' ')
return report(options, obj, FSCK_MSG_ZERO_PADDED_DATE, "invalid author/committer line - zero-padded date");
if (date_overflows(parse_timestamp(p, &end, 10)))
return report(options, obj, FSCK_MSG_BAD_DATE_OVERFLOW, "invalid author/committer line - date causes integer overflow");
if ((end == p || *end != ' '))
return report(options, obj, FSCK_MSG_BAD_DATE, "invalid author/committer line - bad date");
p = end + 1;
if ((*p != '+' && *p != '-') ||
!isdigit(p[1]) ||
!isdigit(p[2]) ||
!isdigit(p[3]) ||
!isdigit(p[4]) ||
(p[5] != '\n'))
return report(options, obj, FSCK_MSG_BAD_TIMEZONE, "invalid author/committer line - bad time zone");
p += 6;
return 0;
}
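/*
 * Validate the commit header: a "tree" line, any number of "parent"
 * lines, exactly one "author" and one "committer" line (each checked
 * with fsck_ident()), cross-checked against the parents and tree the
 * parser recorded (allowing for grafts and shallow commits).
 */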
static int fsck_commit_buffer(struct commit *commit, const char *buffer,
unsigned long size, struct fsck_options *options)
{
struct object_id tree_oid, oid;
struct commit_graft *graft;
unsigned parent_count, parent_line_count = 0, author_count;
int err;
const char *buffer_begin = buffer;
const char *p;
if (verify_headers(buffer, size, &commit->object, options))
return -1;
if (!skip_prefix(buffer, "tree ", &buffer))
return report(options, &commit->object, FSCK_MSG_MISSING_TREE, "invalid format - expected 'tree' line");
if (parse_oid_hex(buffer, &tree_oid, &p) || *p != '\n') {
err = report(options, &commit->object, FSCK_MSG_BAD_TREE_SHA1, "invalid 'tree' line format - bad sha1");
if (err)
return err;
}
buffer = p + 1;
while (skip_prefix(buffer, "parent ", &buffer)) {
if (parse_oid_hex(buffer, &oid, &p) || *p != '\n') {
err = report(options, &commit->object, FSCK_MSG_BAD_PARENT_SHA1, "invalid 'parent' line format - bad sha1");
if (err)
return err;
}
buffer = p + 1;
parent_line_count++;
}
graft = lookup_commit_graft(&commit->object.oid);
parent_count = commit_list_count(commit->parents);
if (graft) {
if (graft->nr_parent == -1 && !parent_count)
; /* shallow commit */
else if (graft->nr_parent != parent_count) {
err = report(options, &commit->object, FSCK_MSG_MISSING_GRAFT, "graft objects missing");
if (err)
return err;
}
} else {
if (parent_count != parent_line_count) {
err = report(options, &commit->object, FSCK_MSG_MISSING_PARENT, "parent objects missing");
if (err)
return err;
}
}
author_count = 0;
while (skip_prefix(buffer, "author ", &buffer)) {
author_count++;
err = fsck_ident(&buffer, &commit->object, options);
if (err)
return err;
}
if (author_count < 1)
err = report(options, &commit->object, FSCK_MSG_MISSING_AUTHOR, "invalid format - expected 'author' line");
else if (author_count > 1)
err = report(options, &commit->object, FSCK_MSG_MULTIPLE_AUTHORS, "invalid format - multiple 'author' lines");
if (err)
return err;
if (!skip_prefix(buffer, "committer ", &buffer))
return report(options, &commit->object, FSCK_MSG_MISSING_COMMITTER, "invalid format - expected 'committer' line");
err = fsck_ident(&buffer, &commit->object, options);
if (err)
return err;
if (!get_commit_tree(commit)) {
err = report(options, &commit->object, FSCK_MSG_BAD_TREE, "could not load commit's tree %s", oid_to_hex(&tree_oid));
if (err)
return err;
}
if (memchr(buffer_begin, '\0', size)) {
err = report(options, &commit->object, FSCK_MSG_NUL_IN_COMMIT,
"NUL byte in the commit object body");
if (err)
return err;
}
return 0;
}
static int fsck_commit(struct commit *commit, const char *data,
unsigned long size, struct fsck_options *options)
{
const char *buffer = data ? data : get_commit_buffer(commit, &size);
int ret = fsck_commit_buffer(commit, buffer, size, options);
if (!data)
unuse_commit_buffer(commit, buffer);
return ret;
}
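/*
 * Validate the tag header: "object", "type" and "tag" lines in that
 * order, then a "tagger" line.  A missing tagger is only reported as
 * info, since very old tags lack it.
 */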
static int fsck_tag_buffer(struct tag *tag, const char *data,
unsigned long size, struct fsck_options *options)
{
struct object_id oid;
int ret = 0;
const char *buffer;
char *to_free = NULL, *eol;
struct strbuf sb = STRBUF_INIT;
const char *p;
if (data)
buffer = data;
else {
enum object_type type;
buffer = to_free =
read_object_file(&tag->object.oid, &type, &size);
if (!buffer)
return report(options, &tag->object,
FSCK_MSG_MISSING_TAG_OBJECT,
"cannot read tag object");
if (type != OBJ_TAG) {
ret = report(options, &tag->object,
FSCK_MSG_TAG_OBJECT_NOT_TAG,
"expected tag got %s",
type_name(type));
goto done;
}
}
ret = verify_headers(buffer, size, &tag->object, options);
if (ret)
goto done;
if (!skip_prefix(buffer, "object ", &buffer)) {
ret = report(options, &tag->object, FSCK_MSG_MISSING_OBJECT, "invalid format - expected 'object' line");
goto done;
}
if (parse_oid_hex(buffer, &oid, &p) || *p != '\n') {
ret = report(options, &tag->object, FSCK_MSG_BAD_OBJECT_SHA1, "invalid 'object' line format - bad sha1");
if (ret)
goto done;
}
buffer = p + 1;
if (!skip_prefix(buffer, "type ", &buffer)) {
ret = report(options, &tag->object, FSCK_MSG_MISSING_TYPE_ENTRY, "invalid format - expected 'type' line");
goto done;
}
eol = strchr(buffer, '\n');
if (!eol) {
ret = report(options, &tag->object, FSCK_MSG_MISSING_TYPE, "invalid format - unexpected end after 'type' line");
goto done;
}
if (type_from_string_gently(buffer, eol - buffer, 1) < 0)
ret = report(options, &tag->object, FSCK_MSG_BAD_TYPE, "invalid 'type' value");
if (ret)
goto done;
buffer = eol + 1;
if (!skip_prefix(buffer, "tag ", &buffer)) {
ret = report(options, &tag->object, FSCK_MSG_MISSING_TAG_ENTRY, "invalid format - expected 'tag' line");
goto done;
}
eol = strchr(buffer, '\n');
if (!eol) {
ret = report(options, &tag->object, FSCK_MSG_MISSING_TAG, "invalid format - unexpected end after 'type' line");
goto done;
}
strbuf_addf(&sb, "refs/tags/%.*s", (int)(eol - buffer), buffer);
if (check_refname_format(sb.buf, 0)) {
ret = report(options, &tag->object, FSCK_MSG_BAD_TAG_NAME,
"invalid 'tag' name: %.*s",
(int)(eol - buffer), buffer);
if (ret)
goto done;
}
buffer = eol + 1;
if (!skip_prefix(buffer, "tagger ", &buffer)) {
/* early tags do not contain 'tagger' lines; warn only */
ret = report(options, &tag->object, FSCK_MSG_MISSING_TAGGER_ENTRY, "invalid format - expected 'tagger' line");
if (ret)
goto done;
}
else
ret = fsck_ident(&buffer, &tag->object, options);
done:
strbuf_release(&sb);
free(to_free);
return ret;
}
static int fsck_tag(struct tag *tag, const char *data,
unsigned long size, struct fsck_options *options)
{
struct object *tagged = tag->tagged;
if (!tagged)
return report(options, &tag->object, FSCK_MSG_BAD_TAG_OBJECT, "could not load tagged object");
return fsck_tag_buffer(tag, data, size, options);
}
struct fsck_gitmodules_data {
struct object *obj;
struct fsck_options *options;
int ret;
};
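/*
 * Config callback for .gitmodules content: reject any submodule
 * section whose name fails check_submodule_name().
 */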
static int fsck_gitmodules_fn(const char *var, const char *value, void *vdata)
{
struct fsck_gitmodules_data *data = vdata;
const char *subsection, *key;
int subsection_len;
char *name;
if (parse_config_key(var, "submodule", &subsection, &subsection_len, &key) < 0 ||
!subsection)
return 0;
name = xmemdupz(subsection, subsection_len);
if (check_submodule_name(name) < 0)
data->ret |= report(data->options, data->obj,
FSCK_MSG_GITMODULES_NAME,
"disallowed submodule name: %s",
name);
free(name);
return 0;
}
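/*
 * Only blobs that some tree referenced under a ".gitmodules" name are
 * parsed here; everything else returns early.  Parsed blobs are
 * recorded in gitmodules_done so fsck_finish() does not redo them.
 */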
static int fsck_blob(struct blob *blob, const char *buf,
unsigned long size, struct fsck_options *options)
{
struct fsck_gitmodules_data data;
if (!oidset_contains(&gitmodules_found, &blob->object.oid))
return 0;
oidset_insert(&gitmodules_done, &blob->object.oid);
if (!buf) {
/*
* A missing buffer here is a sign that the caller found the
* blob too gigantic to load into memory. Let's just consider
* that an error.
*/
return report(options, &blob->object,
FSCK_MSG_GITMODULES_PARSE,
".gitmodules too large to parse");
}
data.obj = &blob->object;
data.options = options;
data.ret = 0;
if (git_config_from_mem(fsck_gitmodules_fn, CONFIG_ORIGIN_BLOB,
".gitmodules", buf, size, &data))
data.ret |= report(options, &blob->object,
FSCK_MSG_GITMODULES_PARSE,
"could not parse gitmodules blob");
return data.ret;
}
int fsck_object(struct object *obj, void *data, unsigned long size,
struct fsck_options *options)
{
if (!obj)
return report(options, obj, FSCK_MSG_BAD_OBJECT_SHA1, "no valid object to fsck");
if (obj->type == OBJ_BLOB)
return fsck_blob((struct blob *)obj, data, size, options);
if (obj->type == OBJ_TREE)
return fsck_tree((struct tree *) obj, options);
if (obj->type == OBJ_COMMIT)
return fsck_commit((struct commit *) obj, (const char *) data,
size, options);
if (obj->type == OBJ_TAG)
return fsck_tag((struct tag *) obj, (const char *) data,
size, options);
return report(options, obj, FSCK_MSG_UNKNOWN_TYPE, "unknown type '%d' (internal fsck error)",
obj->type);
}
int fsck_error_function(struct fsck_options *o,
struct object *obj, int msg_type, const char *message)
{
if (msg_type == FSCK_WARN) {
warning("object %s: %s", describe_object(o, obj), message);
return 0;
}
error("object %s: %s", describe_object(o, obj), message);
return 1;
}
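/*
 * Second pass over the .gitmodules blobs queued by fsck_tree() but
 * never fed to fsck_blob(), e.g. blobs that were not part of the set
 * of objects being checked.  Callers run this after all fsck_object()
 * calls; roughly (an illustrative sketch, with FSCK_OPTIONS_DEFAULT
 * coming from fsck.h):
 *
 *   struct fsck_options options = FSCK_OPTIONS_DEFAULT;
 *   ... for each object: fsck_object(obj, contents, size, &options);
 *   fsck_finish(&options);
 */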
int fsck_finish(struct fsck_options *options)
{
int ret = 0;
struct oidset_iter iter;
const struct object_id *oid;
oidset_iter_init(&gitmodules_found, &iter);
while ((oid = oidset_iter_next(&iter))) {
struct blob *blob;
enum object_type type;
unsigned long size;
char *buf;
if (oidset_contains(&gitmodules_done, oid))
continue;
blob = lookup_blob(oid);
if (!blob) {
struct object *obj = lookup_unknown_object(oid->hash);
ret |= report(options, obj,
				      FSCK_MSG_GITMODULES_BLOB,
				      "non-blob found at .gitmodules");
			continue;
		}

		buf = read_object_file(oid, &type, &size);
		if (!buf) {
			/* Missing promisor objects are expected in a partial clone. */
			if (is_promisor_object(&blob->object.oid))
				continue;
			ret |= report(options, &blob->object,
				      FSCK_MSG_GITMODULES_MISSING,
				      "unable to read .gitmodules blob");
			continue;
		}

		if (type == OBJ_BLOB)
			ret |= fsck_blob(blob, buf, size, options);
		else
			ret |= report(options, &blob->object,
				      FSCK_MSG_GITMODULES_BLOB,
				      "non-blob found at .gitmodules");
		free(buf);
	}

	oidset_clear(&gitmodules_found);
	oidset_clear(&gitmodules_done);

	return ret;
}
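
/*
 * Usage sketch (illustrative only, not part of upstream Git): callers such
 * as builtin/fsck.c first run every object through fsck_object(), which is
 * what populates gitmodules_found, and only then call fsck_finish() for the
 * second pass over the queued .gitmodules blobs. A hypothetical driver:
 *
 *	struct fsck_options opts = FSCK_OPTIONS_DEFAULT;
 *	int err = 0;
 *
 *	for each object obj with contents (buf, size):	// pseudocode loop
 *		err |= fsck_object(obj, buf, size, &opts);
 *	err |= fsck_finish(&opts);	// revisits queued .gitmodules blobs
 *
 * The loop body above is pseudocode; the trailing fsck_finish() call is the
 * only part this file requires of its callers.
 */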