/*
 * git gc builtin command
 *
 * Cleanup unreachable files and optimize the repository.
 *
 * Copyright (c) 2007 James Bowes
 *
 * Based on git-gc.sh, which is
 *
 * Copyright (c) 2006 Shawn O. Pearce
 */

#include "builtin.h"
#include "tempfile.h"
#include "lockfile.h"
#include "parse-options.h"
#include "run-command.h"
#include "sigchain.h"
#include "argv-array.h"
#include "commit.h"

#define FAILED_RUN "failed to run %s"

static const char * const builtin_gc_usage[] = {
	N_("git gc [<options>]"),
	NULL
};
static int pack_refs = 1;
static int prune_reflogs = 1;
/*
 * Aggressive repacks keep window=250, but the depth default was
 * lowered from 250 to 50: chains deeper than 50 shrink the pack only
 * marginally while slowing every operation that must walk them.
 */
static int aggressive_depth = 50;
static int aggressive_window = 250;
static int gc_auto_threshold = 6700;
static int gc_auto_pack_limit = 50;
static int detach_auto = 1;
static unsigned long gc_log_expire_time;
/*
 * A gc.log left behind by a failing "gc --auto" blocks future auto
 * gcs only until it is older than gc.logExpiry (one day by default).
 */
static const char *gc_log_expire = "1.day.ago";
static const char *prune_expire = "2.weeks.ago";
static const char *prune_worktrees_expire = "3.months.ago";

static struct argv_array pack_refs_cmd = ARGV_ARRAY_INIT;
static struct argv_array reflog = ARGV_ARRAY_INIT;
static struct argv_array repack = ARGV_ARRAY_INIT;
static struct argv_array prune = ARGV_ARRAY_INIT;
static struct argv_array prune_worktrees = ARGV_ARRAY_INIT;
static struct argv_array rerere = ARGV_ARRAY_INIT;

static struct tempfile pidfile;
static struct lock_file log_lock;
static struct string_list pack_garbage = STRING_LIST_INIT_DUP;

static void clean_pack_garbage(void)
{
	int i;
	for (i = 0; i < pack_garbage.nr; i++)
		unlink_or_warn(pack_garbage.items[i].string);
	string_list_clear(&pack_garbage, 0);
}

static void report_pack_garbage(unsigned seen_bits, const char *path)
{
	if (seen_bits == PACKDIR_FILE_IDX)
		string_list_append(&pack_garbage, path);
}

static void git_config_date_string(const char *key, const char **output)
{
	if (git_config_get_string_const(key, output))
		return;
	if (strcmp(*output, "now")) {
		unsigned long now = approxidate("now");
		if (approxidate(*output) >= now)
			git_die_config(key, _("Invalid %s: '%s'"), key, *output);
	}
}
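The rule enforced here is that a configured expiry must be "now" or a moment in the past; anything that parses to the present or later dies. A minimal standalone sketch of the same check, with `parsed` standing in for the result of git's approxidate() and a helper name that is illustrative, not git's:

```c
#include <string.h>
#include <time.h>

/*
 * Sketch of the rule in git_config_date_string(): a configured
 * expiry must be "now" or lie strictly in the past.  "parsed"
 * stands in for approxidate()'s result; the name is illustrative.
 */
static int expiry_date_is_valid(const char *value, time_t parsed)
{
	if (!strcmp(value, "now"))
		return 1;		/* "now" is exempt from the check */
	return parsed < time(NULL);	/* a date >= now would be rejected */
}
```

Note that the strcmp guard mirrors the original: without it, "now" itself would compare `>=` now and be rejected.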

static void process_log_file(void)
{
	struct stat st;

	if (fstat(get_lock_file_fd(&log_lock), &st)) {
		/*
		 * Perhaps there was an i/o error or another
		 * unlikely situation. Try to make a note of
		 * this in gc.log along with any existing
		 * messages.
		 */
		int saved_errno = errno;
		fprintf(stderr, _("Failed to fstat %s: %s"),
			get_tempfile_path(&log_lock.tempfile),
			strerror(saved_errno));
		fflush(stderr);
		commit_lock_file(&log_lock);
		errno = saved_errno;
	} else if (st.st_size) {
		/* There was some error recorded in the lock file */
		commit_lock_file(&log_lock);
	} else {
		/* No error, clean up any old gc.log */
		unlink(git_path("gc.log"));
		rollback_lock_file(&log_lock);
	}
}

static void process_log_file_at_exit(void)
{
	fflush(stderr);
	process_log_file();
}

static void process_log_file_on_signal(int signo)
{
	process_log_file();
	sigchain_pop(signo);
	raise(signo);
}

static void gc_config(void)
{
	const char *value;

	if (!git_config_get_value("gc.packrefs", &value)) {
		if (value && !strcmp(value, "notbare"))
			pack_refs = -1;
		else
			pack_refs = git_config_bool("gc.packrefs", value);
	}

	git_config_get_int("gc.aggressivewindow", &aggressive_window);
	git_config_get_int("gc.aggressivedepth", &aggressive_depth);
	git_config_get_int("gc.auto", &gc_auto_threshold);
	git_config_get_int("gc.autopacklimit", &gc_auto_pack_limit);
	git_config_get_bool("gc.autodetach", &detach_auto);
	git_config_date_string("gc.pruneexpire", &prune_expire);
	git_config_date_string("gc.worktreepruneexpire", &prune_worktrees_expire);
	git_config_date_string("gc.logexpiry", &gc_log_expire);

	git_config(git_default_config, NULL);
}

static int too_many_loose_objects(void)
{
	/*
	 * Quickly check if a "gc" is needed, by estimating how
	 * many loose objects there are. Because SHA-1 is evenly
	 * distributed, we can check only one and get a reasonable
	 * estimate.
	 */
	char path[PATH_MAX];
	const char *objdir = get_object_directory();
	DIR *dir;
	struct dirent *ent;
	int auto_threshold;
	int num_loose = 0;
	int needed = 0;

	if (gc_auto_threshold <= 0)
		return 0;

	if (sizeof(path) <= snprintf(path, sizeof(path), "%s/17", objdir)) {
		warning(_("insanely long object directory %.*s"), 50, objdir);
		return 0;
	}
	dir = opendir(path);
	if (!dir)
		return 0;

	auto_threshold = (gc_auto_threshold + 255) / 256;
	while ((ent = readdir(dir)) != NULL) {
		if (strspn(ent->d_name, "0123456789abcdef") != 38 ||
		    ent->d_name[38] != '\0')
			continue;
		if (++num_loose > auto_threshold) {
			needed = 1;
			break;
		}
	}
	closedir(dir);
	return needed;
}
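The estimate above works because loose objects fan out over 256 subdirectories (objects/00 through objects/ff), so sampling a single directory (objects/17) is enough: the whole-repository threshold is divided by 256 and rounded up. A sketch of just that arithmetic (the helper name is illustrative, not git's):

```c
/*
 * Per-fanout-directory threshold: the repository-wide gc.auto value
 * scaled down by the 256 fanout directories, rounded up.
 */
static int loose_fanout_threshold(int threshold)
{
	return (threshold + 255) / 256;
}
```

With the default gc.auto of 6700 this yields 27, so encountering a 28th loose object in objects/17 triggers an auto gc.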

static int too_many_packs(void)
{
	struct packed_git *p;
	int cnt;

	if (gc_auto_pack_limit <= 0)
		return 0;

	prepare_packed_git();
	for (cnt = 0, p = packed_git; p; p = p->next) {
		if (!p->pack_local)
			continue;
		if (p->pack_keep)
			continue;
		/*
		 * Perhaps check the size of the pack and count only
		 * very small ones here?
		 */
		cnt++;
	}
	return gc_auto_pack_limit < cnt;
}

static void add_repack_all_option(void)
{
	if (prune_expire && !strcmp(prune_expire, "now"))
		argv_array_push(&repack, "-a");
	else {
		argv_array_push(&repack, "-A");
		if (prune_expire)
			argv_array_pushf(&repack, "--unpack-unreachable=%s", prune_expire);
	}
}

static void add_repack_incremental_option(void)
{
	argv_array_push(&repack, "--no-write-bitmap-index");
}

static int need_to_gc(void)
{
	/*
	 * Setting gc.auto to 0 or negative can disable the
	 * automatic gc.
	 */
	if (gc_auto_threshold <= 0)
		return 0;

	/*
	 * If there are too many loose objects, but not too many
	 * packs, we run "repack -d -l". If there are too many packs,
	 * we run "repack -A -d -l". Otherwise we tell the caller
	 * there is no need.
	 */
	if (too_many_packs())
		add_repack_all_option();
	else if (too_many_loose_objects())
		add_repack_incremental_option();
	else
		return 0;

	if (run_hook_le(NULL, "pre-auto-gc", NULL))
		return 0;
	return 1;
}
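The selection made here can be restated as a pure function: a full "-A" repack when packs have piled up, an incremental repack (without bitmap writing) when only loose objects are over the limit, and nothing otherwise. The enum and function below are illustrative only; git itself pushes the chosen flags onto the "repack" argv instead:

```c
/* Illustrative restatement of the decision in need_to_gc(). */
enum repack_mode { REPACK_NONE, REPACK_ALL, REPACK_INCREMENTAL };

static enum repack_mode choose_repack_mode(int threshold,
					   int many_packs, int many_loose)
{
	if (threshold <= 0)
		return REPACK_NONE;	   /* gc.auto <= 0 disables auto gc */
	if (many_packs)
		return REPACK_ALL;	   /* consolidate with "repack -A" */
	if (many_loose)
		return REPACK_INCREMENTAL; /* plus --no-write-bitmap-index */
	return REPACK_NONE;
}
```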

/* return NULL on success, else hostname running the gc */
static const char *lock_repo_for_gc(int force, pid_t* ret_pid)
{
	static struct lock_file lock;
	char my_host[128];
	struct strbuf sb = STRBUF_INIT;
	struct stat st;
	uintmax_t pid;
	FILE *fp;
	int fd;
	char *pidfile_path;

	if (is_tempfile_active(&pidfile))
		/* already locked */
		return NULL;

	if (gethostname(my_host, sizeof(my_host)))
		xsnprintf(my_host, sizeof(my_host), "unknown");

	pidfile_path = git_pathdup("gc.pid");
	fd = hold_lock_file_for_update(&lock, pidfile_path,
				       LOCK_DIE_ON_ERROR);
	if (!force) {
		static char locking_host[128];
		int should_exit;
		fp = fopen(pidfile_path, "r");
		memset(locking_host, 0, sizeof(locking_host));
		should_exit =
			fp != NULL &&
			!fstat(fileno(fp), &st) &&
			/*
			 * 12 hour limit is very generous as gc should
			 * never take that long. On the other hand we
			 * don't really need a strict limit here,
			 * running gc --auto one day late is not a big
			 * problem. --force can be used in manual gc
			 * after the user verifies that no gc is
			 * running.
			 */
			time(NULL) - st.st_mtime <= 12 * 3600 &&
			fscanf(fp, "%"SCNuMAX" %127c", &pid, locking_host) == 2 &&
			/* be gentle to concurrent "gc" on remote hosts */
			(strcmp(locking_host, my_host) || !kill(pid, 0) || errno == EPERM);
		if (fp != NULL)
			fclose(fp);
		if (should_exit) {
			if (fd >= 0)
				rollback_lock_file(&lock);
			*ret_pid = pid;
			free(pidfile_path);
			return locking_host;
		}
	}

	strbuf_addf(&sb, "%"PRIuMAX" %s",
		    (uintmax_t) getpid(), my_host);
	write_in_full(fd, sb.buf, sb.len);
	strbuf_release(&sb);
	commit_lock_file(&lock);
	register_tempfile(&pidfile, pidfile_path);
	free(pidfile_path);
	return NULL;
}
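The gc.pid payload is a single line, "&lt;pid&gt; &lt;hostname&gt;", written with `%"PRIuMAX"` and read back with fscanf's `%127c` into a zeroed buffer, since the `%c` conversion does not append a terminating NUL. A standalone sketch of that round trip (the helper names are illustrative, not git's):

```c
#include <inttypes.h>
#include <stdio.h>
#include <string.h>

/* Format a gc.pid payload: "<pid> <hostname>". */
static int format_pid_line(char *buf, size_t len,
			   uintmax_t pid, const char *host)
{
	return snprintf(buf, len, "%"PRIuMAX" %s", pid, host);
}

/*
 * Parse it back the way lock_repo_for_gc() does: the host buffer is
 * zeroed first because "%127c" does not NUL-terminate what it reads.
 */
static int parse_pid_line(const char *line, uintmax_t *pid,
			  char host[128])
{
	memset(host, 0, 128);
	return sscanf(line, "%"SCNuMAX" %127c", pid, host);
}
```

As in the original, a hostname shorter than 127 characters still counts as a successful `%127c` assignment, so the `== 2` check holds for typical inputs.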

/*
 * Report the error recorded in gc.log by a previous daemonized auto
 * gc, unless the file is older than gc.logExpiry (in which case it is
 * ignored so auto gc can be retried).  Returns nonzero if auto gc
 * should not proceed.
 */
static int report_last_gc_error(void)
{
	struct strbuf sb = STRBUF_INIT;
	int ret = 0;
	struct stat st;
	char *gc_log_path = git_pathdup("gc.log");

	if (stat(gc_log_path, &st)) {
		if (errno == ENOENT)
			goto done;

		ret = error_errno(_("Can't stat %s"), gc_log_path);
		goto done;
	}

	if (st.st_mtime < gc_log_expire_time)
		goto done;

	ret = strbuf_read_file(&sb, gc_log_path, 0);
	if (ret > 0)
		ret = error(_("The last gc run reported the following. "
			       "Please correct the root cause\n"
			       "and remove %s.\n"
			       "Automatic cleanup will not be performed "
			       "until the file is removed.\n\n"
			       "%s"),
			    gc_log_path, sb.buf);
	strbuf_release(&sb);
done:
	free(gc_log_path);
	return ret;
}

/*
 * May run twice (once before daemonizing and once after taking the
 * gc lock); clearing pack_refs and prune_reflogs makes the second
 * call a no-op.
 */
static int gc_before_repack(void)
{
	if (pack_refs && run_command_v_opt(pack_refs_cmd.argv, RUN_GIT_CMD))
		return error(FAILED_RUN, pack_refs_cmd.argv[0]);

	if (prune_reflogs && run_command_v_opt(reflog.argv, RUN_GIT_CMD))
		return error(FAILED_RUN, reflog.argv[0]);

	pack_refs = 0;
	prune_reflogs = 0;
	return 0;
}

int cmd_gc(int argc, const char **argv, const char *prefix)
{
	int aggressive = 0;
	int auto_gc = 0;
	int quiet = 0;
	int force = 0;
	const char *name;
	pid_t pid;
	int daemonized = 0;

	struct option builtin_gc_options[] = {
		OPT__QUIET(&quiet, N_("suppress progress reporting")),
		{ OPTION_STRING, 0, "prune", &prune_expire, N_("date"),
			N_("prune unreferenced objects"),
			PARSE_OPT_OPTARG, NULL, (intptr_t)prune_expire },
		OPT_BOOL(0, "aggressive", &aggressive, N_("be more thorough (increased runtime)")),
		OPT_BOOL(0, "auto", &auto_gc, N_("enable auto-gc mode")),
		OPT_BOOL(0, "force", &force,
			 N_("force running gc even if there may be another gc running")),
		OPT_END()
	};

	if (argc == 2 && !strcmp(argv[1], "-h"))
		usage_with_options(builtin_gc_usage, builtin_gc_options);

	argv_array_pushl(&pack_refs_cmd, "pack-refs", "--all", "--prune", NULL);
	argv_array_pushl(&reflog, "reflog", "expire", "--all", NULL);
	argv_array_pushl(&repack, "repack", "-d", "-l", NULL);
	argv_array_pushl(&prune, "prune", "--expire", NULL);
	argv_array_pushl(&prune_worktrees, "worktree", "prune", "--expire", NULL);
	argv_array_pushl(&rerere, "rerere", "gc", NULL);

	/* default expiry time, overwritten in gc_config */
	gc_config();
	if (parse_expiry_date(gc_log_expire, &gc_log_expire_time))
		die(_("Failed to parse gc.logexpiry value %s"), gc_log_expire);

	if (pack_refs < 0)
		pack_refs = !is_bare_repository();

	argc = parse_options(argc, argv, prefix, builtin_gc_options,
			     builtin_gc_usage, 0);
	if (argc > 0)
		usage_with_options(builtin_gc_usage, builtin_gc_options);

	if (aggressive) {
		argv_array_push(&repack, "-f");
		if (aggressive_depth > 0)
			argv_array_pushf(&repack, "--depth=%d", aggressive_depth);
		if (aggressive_window > 0)
			argv_array_pushf(&repack, "--window=%d", aggressive_window);
	}
	if (quiet)
		argv_array_push(&repack, "-q");

	if (auto_gc) {
		/*
		 * Auto-gc should be as unintrusive as possible.
		 */
		if (!need_to_gc())
			return 0;
		if (!quiet) {
			if (detach_auto)
				fprintf(stderr, _("Auto packing the repository in background for optimum performance.\n"));
			else
				fprintf(stderr, _("Auto packing the repository for optimum performance.\n"));
			fprintf(stderr, _("See \"git help gc\" for manual housekeeping.\n"));
		}
		if (detach_auto) {
			if (report_last_gc_error())
				return -1;

			if (gc_before_repack())
				return -1;
			/*
			 * failure to daemonize is ok, we'll continue
			 * in foreground
			 */
			daemonized = !daemonize();
		}
	} else
		add_repack_all_option();

	name = lock_repo_for_gc(force, &pid);
	if (name) {
		if (auto_gc)
			return 0; /* be quiet on --auto */
		die(_("gc is already running on machine '%s' pid %"PRIuMAX" (use --force if not)"),
		    name, (uintmax_t)pid);
	}

	if (daemonized) {
		hold_lock_file_for_update(&log_lock,
					  git_path("gc.log"),
					  LOCK_DIE_ON_ERROR);
		dup2(get_lock_file_fd(&log_lock), 2);
		sigchain_push_common(process_log_file_on_signal);
		atexit(process_log_file_at_exit);
	}

	if (gc_before_repack())
		return -1;

	if (!repository_format_precious_objects) {
		if (run_command_v_opt(repack.argv, RUN_GIT_CMD))
			return error(FAILED_RUN, repack.argv[0]);

		if (prune_expire) {
			argv_array_push(&prune, prune_expire);
			if (quiet)
				argv_array_push(&prune, "--no-progress");
			if (run_command_v_opt(prune.argv, RUN_GIT_CMD))
				return error(FAILED_RUN, prune.argv[0]);
		}
	}

	if (prune_worktrees_expire) {
		argv_array_push(&prune_worktrees, prune_worktrees_expire);
		if (run_command_v_opt(prune_worktrees.argv, RUN_GIT_CMD))
			return error(FAILED_RUN, prune_worktrees.argv[0]);
	}

	if (run_command_v_opt(rerere.argv, RUN_GIT_CMD))
		return error(FAILED_RUN, rerere.argv[0]);

	report_garbage = report_pack_garbage;
	reprepare_packed_git();
	if (pack_garbage.nr > 0)
		clean_pack_garbage();

	if (auto_gc && too_many_loose_objects())
		warning(_("There are too many unreachable loose objects; "
			"run 'git prune' to remove them."));

	if (!daemonized)
		unlink(git_path("gc.log"));

	return 0;
}