pack-objects: fix pack generation when using pack_size_limit

Current handling of pack_size_limit is quite suboptimal.  Let's consider
a list of objects to pack which contains alternately big and small
objects (which pretty much matches reality when big blobs are interleaved
with tree objects).  Currently, the code simply closes the pack and opens
a new one when the next object in line would break the size limit.

The current code may degenerate into:

  - small tree object => store into pack #1
  - big blob object busting the pack size limit => store into pack #2
  - small blob but pack #2 is over the limit already => pack #3
  - big blob busting the size limit => pack #4
  - small tree but pack #4 is over the limit => pack #5
  - big blob => pack #6
  - small tree => pack #7
  - ... and so on.

The reality is that the content of packs 1, 3, 5 and 7 could well be
stored more efficiently (and delta compressed) together in pack #1 if
the big blobs were not forcing an immediate transition to a new pack.
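
To make the degeneration concrete, here is a small standalone
simulation of the old behaviour (not the actual pack-objects code; the
object sizes and the limit are made up for illustration).  The inner
loop gives up on the current pack at the first object that would bust
the limit, and the next pack resumes from that very object, so every
big blob forces a pack transition:

  #include <stdio.h>

  int main(void)
  {
      const unsigned long pack_size_limit = 100;
      const unsigned long sizes[] = { 10, 150, 10, 150, 10, 150, 10 };
      const int nr_objects = sizeof(sizes) / sizeof(sizes[0]);
      int i = 0, pack = 0;

      while (i < nr_objects) {
          unsigned long offset = 0;

          pack++;
          for (; i < nr_objects; i++) {
              /* the first object of a pack is always written */
              if (offset && offset + sizes[i] > pack_size_limit)
                  break;    /* old code: give up on this pack */
              offset += sizes[i];
              printf("object %d (%lu bytes) -> pack #%d\n",
                     i, sizes[i], pack);
          }
      }
      printf("%d packs\n", pack);    /* prints 7: one pack per object */
      return 0;
  }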

Incidentally, this can be fixed pretty easily by simply skipping over
those objects that are too big to fit in the current pack while walking
the whole list of unwritten objects, and then considering that list
from the beginning again once a new pack is opened.  This creates far
fewer smallish pack files and helps make test cases for the test suite
more predictable.
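
Here is the same simulation adjusted to the fixed strategy (again not
the actual code; the written[] flag array merely stands in for the
e->idx.offset bookkeeping done by write_one()).  Oversized objects are
skipped instead of flushing the pack, and each new pack rescans the
still-unwritten objects from the beginning:

  #include <stdio.h>

  int main(void)
  {
      const unsigned long pack_size_limit = 100;
      const unsigned long sizes[] = { 10, 150, 10, 150, 10, 150, 10 };
      const int nr_objects = sizeof(sizes) / sizeof(sizes[0]);
      int written[sizeof(sizes) / sizeof(sizes[0])] = { 0 };
      int nr_remaining = nr_objects, pack = 0;

      while (nr_remaining) {
          unsigned long offset = 0;
          int i, nr_written = 0;

          pack++;
          for (i = 0; i < nr_objects; i++) {
              if (written[i])
                  continue;    /* already stored in an earlier pack */
              /* skip objects busting the limit, except the first one */
              if (nr_written && offset + sizes[i] > pack_size_limit)
                  continue;
              offset += sizes[i];
              written[i] = 1;
              nr_written++;
              printf("object %d (%lu bytes) -> pack #%d\n",
                     i, sizes[i], pack);
          }
          nr_remaining -= nr_written;
      }
      printf("%d packs\n", pack);    /* prints 4: small objects together */
      return 0;
  }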

This change made one of the internal sanity checks useless, so it is
removed as well.  That check was already rather redundant anyway.

Signed-off-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Nicolas Pitre 2010-02-03 22:48:27 -05:00 committed by Junio C Hamano
parent 2fca19fbb5
commit a2430dde8c

@@ -246,7 +246,7 @@ static unsigned long write_object(struct sha1file *f,
 
 	type = entry->type;
 
-	/* write limit if limited packsize and not first object */
+	/* apply size limit if limited packsize and not first object */
 	if (!pack_size_limit || !nr_written)
 		limit = 0;
 	else if (pack_size_limit <= write_offset)
@@ -443,11 +443,15 @@ static int write_one(struct sha1file *f,
 
 	/* offset is non zero if object is written already. */
 	if (e->idx.offset || e->preferred_base)
-		return 1;
+		return -1;
 
-	/* if we are deltified, write out base object first. */
-	if (e->delta && !write_one(f, e->delta, offset))
-		return 0;
+	/*
+	 * If we are deltified, attempt to write out base object first.
+	 * If that fails due to the pack size limit then the current
+	 * object might still possibly fit undeltified within that limit.
+	 */
+	if (e->delta)
+		write_one(f, e->delta, offset);
 
 	e->idx.offset = *offset;
 	size = write_object(f, e, *offset);
@@ -501,11 +505,9 @@ static void write_pack_file(void)
 		sha1write(f, &hdr, sizeof(hdr));
 		offset = sizeof(hdr);
 		nr_written = 0;
-		for (; i < nr_objects; i++) {
-			if (!write_one(f, objects + i, &offset))
-				break;
-			display_progress(progress_state, written);
-		}
+		for (i = 0; i < nr_objects; i++)
+			if (write_one(f, objects + i, &offset) == 1)
+				display_progress(progress_state, written);
 
 		/*
 		 * Did we write the wrong # entries in the header?
@@ -580,26 +582,13 @@ static void write_pack_file(void)
 			written_list[j]->offset = (off_t)-1;
 		}
 		nr_remaining -= nr_written;
-	} while (nr_remaining && i < nr_objects);
+	} while (nr_remaining);
 
 	free(written_list);
 	stop_progress(&progress_state);
 	if (written != nr_result)
 		die("wrote %"PRIu32" objects while expecting %"PRIu32,
 			written, nr_result);
-	/*
-	 * We have scanned through [0 ... i). Since we have written
-	 * the correct number of objects, the remaining [i ... nr_objects)
-	 * items must be either already written (due to out-of-order delta base)
-	 * or a preferred base. Count those which are neither and complain if any.
-	 */
-	for (j = 0; i < nr_objects; i++) {
-		struct object_entry *e = objects + i;
-		j += !e->idx.offset && !e->preferred_base;
-	}
-	if (j)
-		die("wrote %"PRIu32" objects as expected but %"PRIu32
-		    " unwritten", written, j);
 }
 
 static int locate_object_entry_hash(const unsigned char *sha1)