#!/usr/bin/env perl
# Copyright (C) 2006, Eric Wong <normalperson@yhbt.net>
# License: GPL v2 or later
use warnings;
use strict;
use vars qw/	$AUTHOR $VERSION
		$SVN_URL $SVN_INFO $SVN_WC $SVN_UUID
		$GIT_SVN_INDEX $GIT_SVN
		$GIT_DIR $GIT_SVN_DIR $REVDB/;
$AUTHOR = 'Eric Wong <normalperson@yhbt.net>';
$VERSION = '@@GIT_VERSION@@';

use Cwd qw/abs_path/;
$GIT_DIR = abs_path($ENV{GIT_DIR} || '.git');
$ENV{GIT_DIR} = $GIT_DIR;

my $LC_ALL = $ENV{LC_ALL};
my $TZ = $ENV{TZ};
# make sure the svn binary gives consistent output between locales and TZs:
$ENV{TZ} = 'UTC';
$ENV{LC_ALL} = 'C';
$| = 1; # unbuffer STDOUT

# properties that we do not log:
my %SKIP = ( 'svn:wc:ra_dav:version-url' => 1,
		'svn:special' => 1,
		'svn:executable' => 1,
		'svn:entry:committed-rev' => 1,
		'svn:entry:last-author' => 1,
		'svn:entry:uuid' => 1,
		'svn:entry:committed-date' => 1,
		);

sub fatal (@) { print STDERR @_; exit 1 }
require SVN::Core; # use()-ing this causes segfaults for me... *shrug*
require SVN::Ra;
require SVN::Delta;
if ($SVN::Core::VERSION lt '1.1.0') {
	fatal "Need SVN::Core 1.1.0 or better (got $SVN::Core::VERSION)\n";
}
push @SVN::Git::Editor::ISA, 'SVN::Delta::Editor';
push @SVN::Git::Fetcher::ISA, 'SVN::Delta::Editor';
*SVN::Git::Fetcher::process_rm = *process_rm;
use Carp qw/croak/;
use IO::File qw//;
use File::Basename qw/dirname basename/;
use File::Path qw/mkpath/;
use Getopt::Long qw/:config gnu_getopt no_ignore_case auto_abbrev pass_through/;
use POSIX qw/strftime/;
use IPC::Open3;
use Memoize;
use Git qw/command command_oneline command_noisy
	   command_output_pipe command_input_pipe command_close_pipe/;
memoize('revisions_eq');
memoize('cmt_metadata');
memoize('get_commit_time');
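# the helpers memoized above are defined elsewhere in this script; caching
# their return values avoids repeating the same metadata lookups over and
# over during long fetch/commit runs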

my ($SVN);

my $_optimize_commits = 1 unless $ENV{GIT_SVN_NO_OPTIMIZE_COMMITS};
my $sha1 = qr/[a-f\d]{40}/;
my $sha1_short = qr/[a-f\d]{4,40}/;
my $_esc_color = qr/(?:\033\[(?:(?:\d+;)*\d*)?m)*/;
my ($_revision,$_stdin,$_no_ignore_ext,$_no_stop_copy,$_help,$_rmdir,$_edit,
	$_find_copies_harder, $_l, $_cp_similarity, $_cp_remote,
	$_repack, $_repack_nr, $_repack_flags, $_q,
	$_message, $_file, $_follow_parent, $_no_metadata,
	$_template, $_shared, $_no_default_regex, $_no_graft_copy,
	$_limit, $_verbose, $_incremental, $_oneline, $_l_fmt, $_show_commit,
	$_version, $_upgrade, $_authors, $_branch_all_refs, @_opt_m,
	$_merge, $_strategy, $_dry_run, $_ignore_nodate, $_non_recursive,
	$_username, $_config_dir, $_no_auth_cache, $_xfer_delta,
	$_pager, $_color);
my (@_branch_from, %tree_map, %users, %rusers, %equiv);
my ($_svn_can_do_switch);
my @repo_path_split_cache;
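
# Getopt::Long specs shared by the fetch-style commands; each entry maps an
# option name (plus aliases/type) to the variable that receives its value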
my %fc_opts = ( 'no-ignore-externals' => \$_no_ignore_ext,
		'branch|b=s' => \@_branch_from,
		'follow-parent|follow' => \$_follow_parent,
		'branch-all-refs|B' => \$_branch_all_refs,
		'authors-file|A=s' => \$_authors,
		'repack:i' => \$_repack,
		'no-metadata' => \$_no_metadata,
		'quiet|q' => \$_q,
		'username=s' => \$_username,
		'config-dir=s' => \$_config_dir,
		'no-auth-cache' => \$_no_auth_cache,
		'ignore-nodate' => \$_ignore_nodate,
		'repack-flags|repack-args|repack-opts=s' => \$_repack_flags);

my ($_trunk, $_tags, $_branches);
my %multi_opts = ( 'trunk|T=s' => \$_trunk,
		'tags|t=s' => \$_tags,
		'branches|b=s' => \$_branches );
my %init_opts = ( 'template=s' => \$_template, 'shared' => \$_shared );
my %cmt_opts = ( 'edit|e' => \$_edit,
		'rmdir' => \$_rmdir,
		'find-copies-harder' => \$_find_copies_harder,
		'l=i' => \$_l,
		'copy-similarity|C=i'=> \$_cp_similarity
);
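
# command dispatch table: name => [ \&handler, help text, option specs ];
# usage() prints the help text and option specs, and the dispatcher below
# invokes the handler with the remaining @ARGV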
my %cmd = (
	fetch => [ \&fetch, "Download new revisions from SVN",
			{ 'revision|r=s' => \$_revision, %fc_opts } ],
	init => [ \&init, "Initialize a repo for tracking" .
			  " (requires URL argument)",
			\%init_opts ],
	dcommit => [ \&dcommit, 'Commit several diffs to merge with upstream',
			{ 'merge|m|M' => \$_merge,
			  'strategy|s=s' => \$_strategy,
			  'dry-run|n' => \$_dry_run,
			%cmt_opts, %fc_opts } ],
	'set-tree' => [ \&commit, "Set an SVN repository to a git tree-ish",
			{ 'stdin|' => \$_stdin, %cmt_opts, %fc_opts, } ],
	'show-ignore' => [ \&show_ignore, "Show svn:ignore listings",
			{ 'revision|r=i' => \$_revision } ],
	rebuild => [ \&rebuild, "Rebuild git-svn metadata (after git clone)",
			{ 'no-ignore-externals' => \$_no_ignore_ext,
			  'copy-remote|remote=s' => \$_cp_remote,
			  'upgrade' => \$_upgrade } ],
	'graft-branches' => [ \&graft_branches,
			'Detect merges/branches from already imported history',
			{ 'merge-rx|m' => \@_opt_m,
			  'branch|b=s' => \@_branch_from,
			  'branch-all-refs|B' => \$_branch_all_refs,
			  'no-default-regex' => \$_no_default_regex,
			  'no-graft-copy' => \$_no_graft_copy } ],
	'multi-init' => [ \&multi_init,
			'Initialize multiple trees (like git-svnimport)',
			{ %multi_opts, %init_opts,
			  'revision|r=i' => \$_revision,
			  'username=s' => \$_username,
			  'config-dir=s' => \$_config_dir,
			  'no-auth-cache' => \$_no_auth_cache,
			} ],
	'multi-fetch' => [ \&multi_fetch,
			'Fetch multiple trees (like git-svnimport)',
			\%fc_opts ],
	'log' => [ \&show_log, 'Show commit logs',
			{ 'limit=i' => \$_limit,
			  'revision|r=s' => \$_revision,
			  'verbose|v' => \$_verbose,
			  'incremental' => \$_incremental,
			  'oneline' => \$_oneline,
			  'show-commit' => \$_show_commit,
			  'non-recursive' => \$_non_recursive,
			  'authors-file|A=s' => \$_authors,
			  'color' => \$_color,
			  'pager=s' => \$_pager,
			} ],
	'commit-diff' => [ \&commit_diff, 'Commit a diff between two trees',
			{ 'message|m=s' => \$_message,
			  'file|F=s' => \$_file,
			  'revision|r=s' => \$_revision,
			%cmt_opts } ],
);
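
# find the first recognized command word in @ARGV and splice it out;
# everything else is left for option parsing and the command handler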
my $cmd;
for (my $i = 0; $i < @ARGV; $i++) {
	if (defined $cmd{$ARGV[$i]}) {
		$cmd = $ARGV[$i];
		splice @ARGV, $i, 1;
		last;
	}
};

my %opts = %{$cmd{$cmd}->[2]} if (defined $cmd);

read_repo_config(\%opts);
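# parse the global options; 'pass_through' in the Getopt::Long config above
# leaves anything unrecognized in @ARGV for the command handler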
my $rv = GetOptions(%opts, 'help|H|h' => \$_help,
			   'version|V' => \$_version,
			   'id|i=s' => \$GIT_SVN);
exit 1 if (!$rv && $cmd ne 'log');

set_default_vals();
usage(0) if $_help;
version() if $_version;
usage(1) unless defined $cmd;
init_vars();
load_authors() if $_authors;
load_all_refs() if $_branch_all_refs;
migration_check() unless $cmd =~ /^(?:init|rebuild|multi-init|commit-diff)$/;
$cmd{$cmd}->[0]->(@ARGV);
exit 0;

####################### primary functions ######################
sub usage {
	my $exit = shift || 0;
	my $fd = $exit ? \*STDERR : \*STDOUT;
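	# the <<"" here-docs below are terminated by the first empty line
	# that follows them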
	print $fd <<"";
git-svn - bidirectional operations between a single Subversion tree and git
Usage: $0 <command> [options] [arguments]\n

	print $fd "Available commands:\n" unless $cmd;

	foreach (sort keys %cmd) {
		next if $cmd && $cmd ne $_;
		print $fd ' ',pack('A17',$_),$cmd{$_}->[1],"\n";
		foreach (keys %{$cmd{$_}->[2]}) {
			# prints out arguments as they should be passed:
			my $x = s#[:=]s$## ? '<arg>' : s#[:=]i$## ? '<num>' : '';
			print $fd ' ' x 21, join(', ', map { length $_ > 1 ?
							"--$_" : "-$_" }
						split /\|/,$_)," $x\n";
		}
	}
	print $fd <<"";
\nGIT_SVN_ID may be set in the environment or via the --id/-i switch to an
arbitrary identifier if you're tracking multiple SVN branches/repositories in
one git repository and want to keep them separate.  See git-svn(1) for more
information.

	exit $exit;
}

sub version {
	print "git-svn version $VERSION\n";
	exit 0;
}

sub rebuild {
	if (!verify_ref("refs/remotes/$GIT_SVN^0")) {
		copy_remote_ref();
	}
	$SVN_URL = shift or undef;
	my $newest_rev = 0;
	if ($_upgrade) {
		command_noisy('update-ref', "refs/remotes/$GIT_SVN",
			      "$GIT_SVN-HEAD");
	} else {
		check_upgrade_needed();
	}
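
	# walk every commit already imported under refs/remotes/$GIT_SVN and
	# rebuild the revision database from their git-svn-id: trailers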
	my ($rev_list, $ctx) = command_output_pipe("rev-list",
						   "refs/remotes/$GIT_SVN");
	my $latest;
	while (<$rev_list>) {
		chomp;
		my $c = $_;
		croak "Non-SHA1: $c\n" unless $c =~ /^$sha1$/o;
		my @commit = grep(/^git-svn-id: /,
				  command(qw/cat-file commit/, $c));
		next if (!@commit); # skip merges
		my ($url, $rev, $uuid) = extract_metadata($commit[$#commit]);
		if (!defined $rev || !$uuid) {
			croak "Unable to extract revision or UUID from ",
			      "$c, $commit[$#commit]\n";
		}

		# if we merged or otherwise started elsewhere, this is
		# how we break out of it
		next if (defined $SVN_UUID && ($uuid ne $SVN_UUID));
		next if (defined $SVN_URL && defined $url && ($url ne $SVN_URL));

		unless (defined $latest) {
			if (!$SVN_URL && !$url) {
				croak "SVN repository location required: $url\n";
			}
			$SVN_URL ||= $url;
			$SVN_UUID ||= $uuid;
			setup_git_svn();
			$latest = $rev;
		}
		revdb_set($REVDB, $rev, $c);
		print "r$rev = $c\n";
		$newest_rev = $rev if ($rev > $newest_rev);
	}
	command_close_pipe($rev_list, $ctx);
}

sub init {
	my $url = shift or die "SVN repository location required " .
			       "as a command-line argument\n";
	$url =~ s!/+$!!; # strip trailing slash

	if (my $repo_path = shift) {
		unless (-d $repo_path) {
			mkpath([$repo_path]);
		}
		$GIT_DIR = $ENV{GIT_DIR} = $repo_path . "/.git";
		init_vars();
	}

	$SVN_URL = $url;
	unless (-d $GIT_DIR) {
		my @init_db = ('init-db');
		push @init_db, "--template=$_template" if defined $_template;
		push @init_db, "--shared" if defined $_shared;
		command_noisy(@init_db);
	}
	setup_git_svn();
}

sub fetch {
	check_upgrade_needed();
	$SVN_URL ||= file_to_s("$GIT_SVN_DIR/info/url");
	my $ret = fetch_lib(@_);
	if ($ret->{commit} && !verify_ref('refs/heads/master^0')) {
		command_noisy(qw(update-ref refs/heads/master),$ret->{commit});
	}
	return $ret;
}

sub fetch_lib {
	my (@parents) = @_;
	$SVN_URL ||= file_to_s("$GIT_SVN_DIR/info/url");
	$SVN ||= libsvn_connect($SVN_URL);
	my ($last_rev, $last_commit) = svn_grab_base_rev();
	my ($base, $head) = libsvn_parse_revision($last_rev);
	if ($base > $head) {
		return { revision => $last_rev, commit => $last_commit }
	}
	my $index = set_index($GIT_SVN_INDEX);

	# limit ourselves and also fork() since get_log won't release memory
	# after processing a revision and SVN stuff seems to leak
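	# (each forked child handles at most $inc revisions, then exits)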
	my $inc = 1000;
	my ($min, $max) = ($base, $head < $base+$inc ? $head : $base+$inc);
	read_uuid();
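	# make sure $GIT_SVN_INDEX still matches the tree of the last
	# imported commit before building new commits on top of it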
	if (defined $last_commit) {
		unless (-e $GIT_SVN_INDEX) {
			command_noisy('read-tree', $last_commit);
		}
		my $x = command_oneline('write-tree');
		my ($y) = (command(qw/cat-file commit/, $last_commit)
							=~ /^tree ($sha1)/m);
		if ($y ne $x) {
			unlink $GIT_SVN_INDEX or croak $!;
			command_noisy('read-tree', $last_commit);
		}
		$x = command_oneline('write-tree');
		if ($y ne $x) {
			print STDERR "trees ($last_commit) $y != $x\n",
				     "Something is seriously wrong...\n";
		}
	}
	while (1) {
		# fork, because using SVN::Pool with get_log() still doesn't
		# seem to help enough to keep memory usage down.
		defined(my $pid = fork) or croak $!;
		if (!$pid) {
			$SVN::Error::handler = \&libsvn_skip_unknown_revs;

			# Yes I'm perfectly aware that the fourth argument
			# below is the limit revisions number.  Unfortunately
			# performance sucks with it enabled, so it's much
			# faster to fetch revision ranges instead of relying
			# on the limiter.
			libsvn_get_log(libsvn_dup_ra($SVN), [''],
					$min, $max, 0, 1, 1,
				sub {
					my $log_msg;
					if ($last_commit) {
						$log_msg = libsvn_fetch(
							$last_commit, @_);
						$last_commit = git_commit(
							$log_msg,
							$last_commit,
							@parents);
					} else {
						$log_msg = libsvn_new_tree(@_);
						$last_commit = git_commit(
							$log_msg, @parents);
					}
				});
			exit 0;
		}
		waitpid $pid, 0;
		croak $? if $?;
		($last_rev, $last_commit) = svn_grab_base_rev();
		last if ($max >= $head);
		$min = $max + 1;
		$max += $inc;
		$max = $head if ($max > $head);
		$SVN = libsvn_connect($SVN_URL);
	}
	restore_index($index);
	return { revision => $last_rev, commit => $last_commit };
}
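
# handler for the 'set-tree' command (see the %cmd table above)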
sub commit {
	my (@commits) = @_;
	check_upgrade_needed();
	if ($_stdin || !@commits) {
		print "Reading from stdin...\n";
		@commits = ();
		while (<STDIN>) {
			if (/\b($sha1_short)\b/o) {
				unshift @commits, $1;
			}
		}
	}
	my @revs;
	foreach my $c (@commits) {
		my @tmp = command('rev-parse',$c);
		if (scalar @tmp == 1) {
			push @revs, $tmp[0];
		} elsif (scalar @tmp > 1) {
			push @revs, reverse(command('rev-list',@tmp));
		} else {
			die "Failed to rev-parse $c\n";
		}
	}
	commit_lib(@revs);
	print "Done committing ",scalar @revs," revisions to SVN\n";
}

sub commit_lib {
	my (@revs) = @_;
	my ($r_last, $cmt_last) = svn_grab_base_rev();
	defined $r_last or die "Must have an existing revision to commit\n";
	my $fetched = fetch();
	if ($r_last != $fetched->{revision}) {
		print STDERR "There are new revisions that were fetched ",
			     "and need to be merged (or acknowledged) ",
			     "before committing.\n",
			     "last rev: $r_last\n",
			     " current: $fetched->{revision}\n";
		exit 1;
	}
	read_uuid();
	my @lock = $SVN::Core::VERSION ge '1.2.0' ? (undef, 0) : ();
	my $commit_msg = "$GIT_SVN_DIR/.svn-commit.tmp.$$";

	my $repo;
	set_svn_commit_env();
	foreach my $c (@revs) {
		my $log_msg = get_commit_message($c, $commit_msg);

		# fork for each commit because there's a memory leak I
		# can't track down... (it's probably in the SVN code)
		defined(my $pid = open my $fh, '-|') or croak $!;
		if (!$pid) {
			my $ed = SVN::Git::Editor->new(
					{	r => $r_last,
						ra => libsvn_dup_ra($SVN),
						c => $c,
						svn_path => $SVN->{svn_path},
|
git-svn: add support for Perl SVN::* libraries
This means we no longer have to deal with having bloated SVN
working copies around and we get a nice performance increase as
well because we don't have to exec the SVN binary and start a
new server connection each time.
Of course we have to manually manage memory with SVN::Pool
whenever we can, and hack around cases where SVN just eats
memory despite pools (I blame Perl, too). I would like to
keep memory usage as stable as possible during long fetch/commit
processes since I still use computers with only 256-512M RAM.
commit should always be faster with the SVN library code. The
SVN::Delta interface is leaky (or I'm not using it with pools
correctly), so I'm forking on every commit, but that doesn't
seem to hurt performance too much (at least on normal Unix/Linux
systems where fork() is pretty cheap).
fetch should be faster in most common cases, but probably not all.
fetches will be faster where client/server delta generation is
the bottleneck and not bandwidth. Of course, full-files are
generated server-side via deltas, too. Full files are always
transferred when they're updated, just like git-svnimport and
unlike command-line svn. I'm also hacking around memory leaks
(see comments) here by using some more forks.
I've tested fetch with http://, https://, file://, and svn://
repositories, so we should be reasonably covered in terms of
error handling for fetching.
Of course, we'll keep plain command-line svn compatibility as a
fallback for people running SVN 1.1 (I'm looking into library
support for 1.1.x SVN, too). If you want to force command-line
SVN usage, set GIT_SVN_NO_LIB=1 in your environment.
We also require two simultaneous connections (just like
git-svnimport), but this shouldn't be a problem for most
servers.
Less important commands:
show-ignore is slower because it requires repository
access, but -r/--revision <num> can be specified.
graft-branches may use more memory, but it's a
short-term process and is funky-filename-safe.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
2006-06-13 00:23:48 +02:00
|
|
|
},
|
|
|
|
$SVN->get_commit_editor(
|
|
|
|
$log_msg->{msg},
|
|
|
|
sub {
|
|
|
|
libsvn_commit_cb(
|
|
|
|
@_, $c,
|
|
|
|
$log_msg->{msg},
|
|
|
|
$r_last,
|
|
|
|
$cmt_last)
|
|
|
|
},
|
|
|
|
@lock)
|
|
|
|
);
|
2006-06-13 13:02:23 +02:00
|
|
|
my $mods = libsvn_checkout_tree($cmt_last, $c, $ed);
|
git-svn: add support for Perl SVN::* libraries
This means we no longer have to deal with having bloated SVN
working copies around and we get a nice performance increase as
well because we don't have to exec the SVN binary and start a
new server connection each time.
Of course we have to manually manage memory with SVN::Pool
whenever we can, and hack around cases where SVN just eats
memory despite pools (I blame Perl, too). I would like to
keep memory usage as stable as possible during long fetch/commit
processes since I still use computers with only 256-512M RAM.
commit should always be faster with the SVN library code. The
SVN::Delta interface is leaky (or I'm not using it with pools
correctly), so I'm forking on every commit, but that doesn't
seem to hurt performance too much (at least on normal Unix/Linux
systems where fork() is pretty cheap).
fetch should be faster in most common cases, but probably not all.
fetches will be faster where client/server delta generation is
the bottleneck and not bandwidth. Of course, full-files are
generated server-side via deltas, too. Full files are always
transferred when they're updated, just like git-svnimport and
unlike command-line svn. I'm also hacking around memory leaks
(see comments) here by using some more forks.
I've tested fetch with http://, https://, file://, and svn://
repositories, so we should be reasonably covered in terms of
error handling for fetching.
Of course, we'll keep plain command-line svn compatibility as a
fallback for people running SVN 1.1 (I'm looking into library
support for 1.1.x SVN, too). If you want to force command-line
SVN usage, set GIT_SVN_NO_LIB=1 in your environment.
We also require two simultaneous connections (just like
git-svnimport), but this shouldn't be a problem for most
servers.
Less important commands:
show-ignore is slower because it requires repository
access, but -r/--revision <num> can be specified.
graft-branches may use more memory, but it's a
short-term process and is funky-filename-safe.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
2006-06-13 00:23:48 +02:00
|
|
|
if (@$mods == 0) {
|
|
|
|
print "No changes\nr$r_last = $cmt_last\n";
|
|
|
|
$ed->abort_edit;
|
|
|
|
} else {
|
|
|
|
$ed->close_edit;
|
|
|
|
}
|
|
|
|
exit 0;
|
|
|
|
}
|
|
|
|
my ($r_new, $cmt_new, $no);
|
|
|
|
while (<$fh>) {
|
|
|
|
print $_;
|
|
|
|
chomp;
|
|
|
|
if (/^r(\d+) = ($sha1)$/o) {
|
|
|
|
($r_new, $cmt_new) = ($1, $2);
|
|
|
|
} elsif ($_ eq 'No changes') {
|
|
|
|
$no = 1;
|
|
|
|
}
|
|
|
|
}
|
2006-11-25 07:38:18 +01:00
|
|
|
close $fh or exit 1;
|
git-svn: add support for Perl SVN::* libraries
This means we no longer have to deal with having bloated SVN
working copies around and we get a nice performance increase as
well because we don't have to exec the SVN binary and start a
new server connection each time.
Of course we have to manually manage memory with SVN::Pool
whenever we can, and hack around cases where SVN just eats
memory despite pools (I blame Perl, too). I would like to
keep memory usage as stable as possible during long fetch/commit
processes since I still use computers with only 256-512M RAM.
commit should always be faster with the SVN library code. The
SVN::Delta interface is leaky (or I'm not using it with pools
correctly), so I'm forking on every commit, but that doesn't
seem to hurt performance too much (at least on normal Unix/Linux
systems where fork() is pretty cheap).
fetch should be faster in most common cases, but probably not all.
fetches will be faster where client/server delta generation is
the bottleneck and not bandwidth. Of course, full-files are
generated server-side via deltas, too. Full files are always
transferred when they're updated, just like git-svnimport and
unlike command-line svn. I'm also hacking around memory leaks
(see comments) here by using some more forks.
I've tested fetch with http://, https://, file://, and svn://
repositories, so we should be reasonably covered in terms of
error handling for fetching.
Of course, we'll keep plain command-line svn compatibility as a
fallback for people running SVN 1.1 (I'm looking into library
support for 1.1.x SVN, too). If you want to force command-line
SVN usage, set GIT_SVN_NO_LIB=1 in your environment.
We also require two simultaneous connections (just like
git-svnimport), but this shouldn't be a problem for most
servers.
Less important commands:
show-ignore is slower because it requires repository
access, but -r/--revision <num> can be specified.
graft-branches may use more memory, but it's a
short-term process and is funky-filename-safe.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
2006-06-13 00:23:48 +02:00
|
|
|
if (! defined $r_new && ! defined $cmt_new) {
|
|
|
|
unless ($no) {
|
|
|
|
die "Failed to parse revision information\n";
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
($r_last, $cmt_last) = ($r_new, $cmt_new);
|
|
|
|
}
|
|
|
|
}
|
2006-06-22 10:22:46 +02:00
|
|
|
$ENV{LC_ALL} = 'C';
|
git-svn: add support for Perl SVN::* libraries
This means we no longer have to deal with having bloated SVN
working copies around and we get a nice performance increase as
well because we don't have to exec the SVN binary and start a
new server connection each time.
Of course we have to manually manage memory with SVN::Pool
whenever we can, and hack around cases where SVN just eats
memory despite pools (I blame Perl, too). I would like to
keep memory usage as stable as possible during long fetch/commit
processes since I still use computers with only 256-512M RAM.
commit should always be faster with the SVN library code. The
SVN::Delta interface is leaky (or I'm not using it with pools
correctly), so I'm forking on every commit, but that doesn't
seem to hurt performance too much (at least on normal Unix/Linux
systems where fork() is pretty cheap).
fetch should be faster in most common cases, but probably not all.
fetches will be faster where client/server delta generation is
the bottleneck and not bandwidth. Of course, full-files are
generated server-side via deltas, too. Full files are always
transferred when they're updated, just like git-svnimport and
unlike command-line svn. I'm also hacking around memory leaks
(see comments) here by using some more forks.
I've tested fetch with http://, https://, file://, and svn://
repositories, so we should be reasonably covered in terms of
error handling for fetching.
Of course, we'll keep plain command-line svn compatibility as a
fallback for people running SVN 1.1 (I'm looking into library
support for 1.1.x SVN, too). If you want to force command-line
SVN usage, set GIT_SVN_NO_LIB=1 in your environment.
We also require two simultaneous connections (just like
git-svnimport), but this shouldn't be a problem for most
servers.
Less important commands:
show-ignore is slower because it requires repository
access, but -r/--revision <num> can be specified.
graft-branches may use more memory, but it's a
short-term process and is funky-filename-safe.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
2006-06-13 00:23:48 +02:00
|
|
|
unlink $commit_msg;
|
|
|
|
}
|
2006-02-26 11:22:27 +01:00
|
|
|
|
2006-08-26 09:01:23 +02:00
|
|
|
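
# Commit each local commit not yet in SVN (oldest first) with commit_diff()
# against the last known SVN revision, then fetch and rebase/reset the
# current head onto the updated remote ref.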
sub dcommit {
	my $head = shift || 'HEAD';
	my $gs = "refs/remotes/$GIT_SVN";
	my @refs = command(qw/rev-list --no-merges/, "$gs..$head");
	my $last_rev;
	foreach my $d (reverse @refs) {
		if (!verify_ref("$d~1")) {
			die "Commit $d\n",
			    "has no parent commit, and therefore ",
			    "nothing to diff against.\n",
			    "You should be working from a repository ",
			    "originally created by git-svn\n";
		}
		unless (defined $last_rev) {
			(undef, $last_rev, undef) = cmt_metadata("$d~1");
			unless (defined $last_rev) {
				die "Unable to extract revision information ",
				    "from commit $d~1\n";
			}
		}
		if ($_dry_run) {
			print "diff-tree $d~1 $d\n";
		} else {
			if (my $r = commit_diff("$d~1", $d, undef, $last_rev)) {
				$last_rev = $r;
			} # else: no changes, same $last_rev
		}
	}
	return if $_dry_run;
	fetch();
	my @diff = command('diff-tree', $head, $gs, '--');
	my @finish;
	if (@diff) {
		@finish = qw/rebase/;
		push @finish, qw/--merge/ if $_merge;
		push @finish, "--strategy=$_strategy" if $_strategy;
		print STDERR "W: $head and $gs differ, using @finish:\n", @diff;
	} else {
		print "No changes between current $head and $gs\n",
		      "Resetting to the latest $gs\n";
		@finish = qw/reset --mixed/;
	}
	command_noisy(@finish, $gs);
}
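
# Print the tracked repository's svn:ignore values in .gitignore format
# (requires repository access; honors -r/--revision).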
sub show_ignore {
	$SVN_URL ||= file_to_s("$GIT_SVN_DIR/info/url");
	my $repo;
	$SVN ||= libsvn_connect($SVN_URL);
	my $r = defined $_revision ? $_revision : $SVN->get_latest_revnum;
	libsvn_traverse_ignore(\*STDOUT, $SVN->{svn_path}, $r);
}
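
# Populate $GIT_DIR/info/grafts with merge parents that SVN history does not
# record: from commit-message patterns (@re), from file/directory copies,
# and from identical-tree matches (graft_tree_joins).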
sub graft_branches {
	my $gr_file = "$GIT_DIR/info/grafts";
	my ($grafts, $comments) = read_grafts($gr_file);
	my $gr_sha1;

	if (%$grafts) {
		# temporarily disable our grafts file to make this idempotent
		chomp($gr_sha1 = command(qw/hash-object -w/,$gr_file));
		rename $gr_file, "$gr_file~$gr_sha1" or croak $!;
	}

	my $l_map = read_url_paths();
	my @re = map { qr/$_/is } @_opt_m if @_opt_m;
	unless ($_no_default_regex) {
		push @re, (qr/\b(?:merge|merging|merged)\s+with\s+([\w\.\-]+)/i,
			qr/\b(?:merge|merging|merged)\s+([\w\.\-]+)/i,
			qr/\b(?:from|of)\s+([\w\.\-]+)/i );
	}
	foreach my $u (keys %$l_map) {
		if (@re) {
			foreach my $p (keys %{$l_map->{$u}}) {
				graft_merge_msg($grafts,$l_map,$u,$p,@re);
			}
		}
		unless ($_no_graft_copy) {
			graft_file_copy_lib($grafts,$l_map,$u);
		}
	}
	graft_tree_joins($grafts);

	write_grafts($grafts, $comments, $gr_file);
	unlink "$gr_file~$gr_sha1" if $gr_sha1;
}
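
# Set up a trunk/branches/tags layout in one go: init an id for $_trunk,
# then let complete_url_ls_init() create one id per branch and per tag.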
sub multi_init {
	my $url = shift;
	$_trunk ||= 'trunk';
	$_trunk =~ s#/+$##;
	$url =~ s#/+$## if $url;
	if ($_trunk !~ m#^[a-z\+]+://#) {
		$_trunk = '/' . $_trunk if ($_trunk !~ m#^/#);
		unless ($url) {
			print STDERR "E: '$_trunk' is not a complete URL ",
				"and a separate URL is not specified\n";
			exit 1;
		}
		$_trunk = $url . $_trunk;
	}
	my $ch_id;
	if ($GIT_SVN eq 'git-svn') {
		$ch_id = 1;
		$GIT_SVN = $ENV{GIT_SVN_ID} = 'trunk';
	}
	init_vars();
	unless (-d $GIT_SVN_DIR) {
		print "GIT_SVN_ID set to 'trunk' for $_trunk\n" if $ch_id;
		init($_trunk);
		command_noisy('repo-config', 'svn.trunk', $_trunk);
	}
	complete_url_ls_init($url, $_branches, '--branches/-b', '');
	complete_url_ls_init($url, $_tags, '--tags/-t', 'tags/');
}
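
# Fetch every id created by multi_init (trunk, then each branch and tag).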
sub multi_fetch {
	# try to do trunk first, since branches/tags
	# may be descended from it.
	if (-e "$GIT_DIR/svn/trunk/info/url") {
		fetch_child_id('trunk', @_);
	}
	rec_fetch('', "$GIT_DIR/svn", @_);
}
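
# Front-end for 'git-svn log': run git-log on the remote ref, parse the
# --pretty=raw output back into per-revision records via the git-svn-id
# metadata, and hand them to process_commit() for svn-log-style display.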
sub show_log {
	my (@args) = @_;
	my ($r_min, $r_max);
	my $r_last = -1; # prevent dupes
	rload_authors() if $_authors;
	if (defined $TZ) {
		$ENV{TZ} = $TZ;
	} else {
		delete $ENV{TZ};
	}
	if (defined $_revision) {
		if ($_revision =~ /^(\d+):(\d+)$/) {
			($r_min, $r_max) = ($1, $2);
		} elsif ($_revision =~ /^\d+$/) {
			$r_min = $r_max = $_revision;
		} else {
			print STDERR "-r$_revision is not supported, use ",
				"standard \'git log\' arguments instead\n";
			exit 1;
		}
	}

	config_pager();
	@args = (git_svn_log_cmd($r_min, $r_max), @args);
	my $log = command_output_pipe(@args);
	run_pager();
	my (@k, $c, $d);

	while (<$log>) {
		if (/^${_esc_color}commit ($sha1_short)/o) {
			my $cmt = $1;
			if ($c && cmt_showable($c) && $c->{r} != $r_last) {
				$r_last = $c->{r};
				process_commit($c, $r_min, $r_max, \@k) or
								goto out;
			}
			$d = undef;
			$c = { c => $cmt };
		} elsif (/^${_esc_color}author (.+) (\d+) ([\-\+]?\d+)$/) {
			get_author_info($c, $1, $2, $3);
		} elsif (/^${_esc_color}(?:tree|parent|committer) /) {
			# ignore
		} elsif (/^${_esc_color}:\d{6} \d{6} $sha1_short/o) {
			push @{$c->{raw}}, $_;
		} elsif (/^${_esc_color}[ACRMDT]\t/) {
			# we could add $SVN->{svn_path} here, but that requires
			# remote access at the moment (repo_path_split)...
			s#^(${_esc_color})([ACRMDT])\t#$1 $2 #;
			push @{$c->{changed}}, $_;
		} elsif (/^${_esc_color}diff /) {
			$d = 1;
			push @{$c->{diff}}, $_;
		} elsif ($d) {
			push @{$c->{diff}}, $_;
		} elsif (/^${_esc_color}    (git-svn-id:.+)$/) {
			($c->{url}, $c->{r}, undef) = extract_metadata($1);
		} elsif (s/^${_esc_color}    //) {
			push @{$c->{l}}, $_;
		}
	}
	if ($c && defined $c->{r} && $c->{r} != $r_last) {
		$r_last = $c->{r};
		process_commit($c, $r_min, $r_max, \@k);
	}
	if (@k) {
		my $swap = $r_max;
		$r_max = $r_min;
		$r_min = $swap;
		process_commit($_, $r_min, $r_max) foreach reverse @k;
	}
out:
	eval { command_close_pipe($log) };
	print '-' x72,"\n" unless $_incremental || $_oneline;
}

sub commit_diff_usage {
	print STDERR "Usage: $0 commit-diff <tree-ish> <tree-ish> [<URL>]\n";
	exit 1
}
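
# Commit the difference between two tree-ishes ($ta -> $tb) straight to the
# SVN repository through SVN::Git::Editor and return the new SVN revision
# number; dcommit uses this for each outstanding commit.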
sub commit_diff {
	my $ta = shift or commit_diff_usage();
	my $tb = shift or commit_diff_usage();
	if (!eval { $SVN_URL = shift || file_to_s("$GIT_SVN_DIR/info/url") }) {
		print STDERR "Needed URL or usable git-svn id command-line\n";
		commit_diff_usage();
	}
	my $r = shift;
	unless (defined $r) {
		if (defined $_revision) {
			$r = $_revision
		} else {
			die "-r|--revision is a required argument\n";
		}
	}
	if (defined $_message && defined $_file) {
		print STDERR "Both --message/-m and --file/-F specified ",
			"for the commit message.\n",
			"I have no idea what you mean\n";
		exit 1;
	}
	if (defined $_file) {
		$_message = file_to_s($_file);
	} else {
		$_message ||= get_commit_message($tb,
					"$GIT_DIR/.svn-commit.tmp.$$")->{msg};
	}
	$SVN ||= libsvn_connect($SVN_URL);
	if ($r eq 'HEAD') {
		$r = $SVN->get_latest_revnum;
	} elsif ($r !~ /^\d+$/) {
		die "revision argument: $r not understood by git-svn\n";
	}
	my @lock = $SVN::Core::VERSION ge '1.2.0' ? (undef, 0) : ();
	my $rev_committed;
	my $ed = SVN::Git::Editor->new({ r => $r,
					 ra => libsvn_dup_ra($SVN),
					 c => $tb,
					 svn_path => $SVN->{svn_path}
					},
				$SVN->get_commit_editor($_message,
					sub {
						$rev_committed = $_[0];
						print "Committed $_[0]\n";
					}, @lock)
				);
	eval {
		my $mods = libsvn_checkout_tree($ta, $tb, $ed);
		if (@$mods == 0) {
			print "No changes\n$ta == $tb\n";
			$ed->abort_edit;
		} else {
			$ed->close_edit;
		}
	};
	fatal "$@\n" if $@;
	$_message = $_file = undef;
	return $rev_committed;
}

########################### utility functions #########################
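
# A commit can be shown once its SVN revision is known; if the log message
# was elided ("...\n"), re-read the full message with cat-file to recover
# the git-svn-id metadata.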
sub cmt_showable {
	my ($c) = @_;
	return 1 if defined $c->{r};
	if ($c->{l} && $c->{l}->[-1] eq "...\n" &&
			$c->{a_raw} =~ /\@([a-f\d\-]+)>$/) {
		my @msg = command(qw/cat-file commit/, $c->{c});
		shift @msg while ($msg[0] ne "\n");
		shift @msg;
		@{$c->{l}} = grep !/^git-svn-id: /, @msg;

		(undef, $c->{r}, undef) = extract_metadata(
				(grep(/^git-svn-id: /, @msg))[-1]);
	}
	return defined $c->{r};
}
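
# Decide whether git-log should be given --color, honoring the $_color
# option and the color.diff/diff.color and color.pager/pager.color settings.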
sub log_use_color {
	return 1 if $_color;
	my ($dc, $dcvar);
	$dcvar = 'color.diff';
	$dc = `git-repo-config --get $dcvar`;
	if ($dc eq '') {
		# nothing at all; fallback to "diff.color"
		$dcvar = 'diff.color';
		$dc = `git-repo-config --get $dcvar`;
	}
	chomp($dc);
	if ($dc eq 'auto') {
		my $pc;
		$pc = `git-repo-config --get color.pager`;
		if ($pc eq '') {
			# does not have it -- fallback to pager.color
			$pc = `git-repo-config --bool --get pager.color`;
		} else {
			$pc = `git-repo-config --bool --get color.pager`;
			if ($?) {
				$pc = 'false';
			}
		}
		chomp($pc);
		if (-t *STDOUT || (defined $_pager && $pc eq 'true')) {
			return ($ENV{TERM} && $ENV{TERM} ne 'dumb');
		}
		return 0;
	}
	return 0 if $dc eq 'never';
	return 1 if $dc eq 'always';
	chomp($dc = `git-repo-config --bool --get $dcvar`);
	return ($dc eq 'true');
}
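
# Build the git-log command line used by show_log, translating SVN revision
# bounds into commit ranges through the rev_db ($REVDB).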
sub git_svn_log_cmd {
	my ($r_min, $r_max) = @_;
	my @cmd = (qw/log --abbrev-commit --pretty=raw
			--default/, "refs/remotes/$GIT_SVN");
	push @cmd, '-r' unless $_non_recursive;
	push @cmd, qw/--raw --name-status/ if $_verbose;
	push @cmd, '--color' if log_use_color();
	return @cmd unless defined $r_max;
	if ($r_max == $r_min) {
		push @cmd, '--max-count=1';
		if (my $c = revdb_get($REVDB, $r_max)) {
			push @cmd, $c;
		}
	} else {
		my ($c_min, $c_max);
		$c_max = revdb_get($REVDB, $r_max);
		$c_min = revdb_get($REVDB, $r_min);
		if (defined $c_min && defined $c_max) {
			if ($r_max > $r_min) {
				push @cmd, "$c_min..$c_max";
			} else {
				push @cmd, "$c_max..$c_min";
			}
		} elsif ($r_max > $r_min) {
			push @cmd, $c_max;
		} else {
			push @cmd, $c_min;
		}
	}
	return @cmd;
}
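
# Fetch a single id in a child process ($GIT_SVN and its per-id globals are
# reset there), echoing its output and checking whether a repack is due as
# new revisions are imported.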
sub fetch_child_id {
	my $id = shift;
	print "Fetching $id\n";
	my $ref = "$GIT_DIR/refs/remotes/$id";
	defined(my $pid = open my $fh, '-|') or croak $!;
	if (!$pid) {
		$_repack = undef;
		$GIT_SVN = $ENV{GIT_SVN_ID} = $id;
		init_vars();
		fetch(@_);
		exit 0;
	}
	while (<$fh>) {
		print $_;
		check_repack() if (/^r\d+ = $sha1/);
	}
	close $fh or croak $?;
}
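
# Recursively fetch every id under $GIT_DIR/svn (any directory carrying an
# info/url file), skipping 'trunk' which multi_fetch handles first.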
sub rec_fetch {
	my ($pfx, $p, @args) = @_;
	my @dir;
	foreach (sort <$p/*>) {
		if (-r "$_/info/url") {
			$pfx .= '/' if $pfx && $pfx !~ m!/$!;
			my $id = $pfx . basename $_;
			next if $id eq 'trunk';
			fetch_child_id($id, @args);
		} elsif (-d $_) {
			push @dir, $_;
		}
	}
	foreach (@dir) {
		my $x = $_;
		$x =~ s!^\Q$GIT_DIR\E/svn/!!;
		rec_fetch($x, $_);
	}
}
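
# For --branches/-b and --tags/-t: list the given URL with the SVN library
# and init a new id for each subdirectory found, then record the setting
# with repo-config.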
sub complete_url_ls_init {
	my ($url, $var, $switch, $pfx) = @_;
	unless ($var) {
		print STDERR "W: $switch not specified\n";
		return;
	}
	$var =~ s#/+$##;
	if ($var !~ m#^[a-z\+]+://#) {
		$var = '/' . $var if ($var !~ m#^/#);
		unless ($url) {
			print STDERR "E: '$var' is not a complete URL ",
				"and a separate URL is not specified\n";
			exit 1;
		}
		$var = $url . $var;
	}
	my @ls = libsvn_ls_fullurl($var);
	my $old = $GIT_SVN;
	defined(my $pid = fork) or croak $!;
	if (!$pid) {
		foreach my $u (map { "$var/$_" } (grep m!/$!, @ls)) {
			$u =~ s#/+$##;
			if ($u !~ m!\Q$var\E/(.+)$!) {
				print STDERR "W: Unrecognized URL: $u\n";
				die "This should never happen\n";
			}
			# don't try to init already existing refs
			my $id = $pfx.$1;
			$GIT_SVN = $ENV{GIT_SVN_ID} = $id;
			init_vars();
			unless (-d $GIT_SVN_DIR) {
				print "init $u => $id\n";
				init($u);
			}
		}
		exit 0;
	}
	waitpid $pid, 0;
	croak $? if $?;
	my ($n) = ($switch =~ /^--(\w+)/);
	command_noisy('repo-config', "svn.$n", $var);
}
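
# Return the longest leading path shared by every path in the listref
# (used to anchor the SVN log walk in graft_file_copy_lib).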
sub common_prefix {
	my $paths = shift;
	my %common;
	foreach (@$paths) {
		my @tmp = split m#/#, $_;
		my $p = '';
		while (my $x = shift @tmp) {
			$p .= "/$x";
			$common{$p} ||= 0;
			$common{$p}++;
		}
	}
	foreach (sort {length $b <=> length $a} keys %common) {
		if ($common{$_} == @$paths) {
			return $_;
		}
	}
	return '';
}

# grafts set here are 'stronger' in that they're based on actual tree
# matches, and won't be deleted from merge-base checking in write_grafts()
sub graft_tree_joins {
	my $grafts = shift;
	map_tree_joins() if (@_branch_from && !%tree_map);
	return unless %tree_map;

	git_svn_each(sub {
		my $i = shift;
		my @args = (qw/rev-list --pretty=raw/, "refs/remotes/$i");
		my ($fh, $ctx) = command_output_pipe(@args);
		while (<$fh>) {
			next unless /^commit ($sha1)$/o;
			my $c = $1;
			my ($t) = (<$fh> =~ /^tree ($sha1)$/o);
			next unless $tree_map{$t};

			my $l;
			do {
				$l = readline $fh;
			} until ($l =~ /^committer (?:.+) (\d+) ([\-\+]?\d+)$/);

			my ($s, $tz) = ($1, $2);
			if ($tz =~ s/^\+//) {
				$s += tz_to_s_offset($tz);
			} elsif ($tz =~ s/^\-//) {
				$s -= tz_to_s_offset($tz);
			}

			my ($url_a, $r_a, $uuid_a) = cmt_metadata($c);

			foreach my $p (@{$tree_map{$t}}) {
				next if $p eq $c;
				my $mb = eval { command('merge-base', $c, $p) };
				next unless ($@ || $?);
				if (defined $r_a) {
					# see if SVN says it's a relative
					my ($url_b, $r_b, $uuid_b) =
							cmt_metadata($p);
					next if (defined $url_b &&
							defined $url_a &&
							($url_a eq $url_b) &&
							($uuid_a eq $uuid_b));
					if ($uuid_a eq $uuid_b) {
						if ($r_b < $r_a) {
							$grafts->{$c}->{$p} = 2;
							next;
						} elsif ($r_b > $r_a) {
							$grafts->{$p}->{$c} = 2;
							next;
						}
					}
				}
				my $ct = get_commit_time($p);
				if ($ct < $s) {
					$grafts->{$c}->{$p} = 2;
				} elsif ($ct > $s) {
					$grafts->{$p}->{$c} = 2;
				}
				# what should we do when $ct == $s ?
			}
		}
		command_close_pipe($fh, $ctx);
	});
}
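
# Walk the SVN log for this URL in 1000-revision windows (a fresh SVN::Pool
# per window keeps memory bounded) and turn file/directory copy records
# into graft parents via libsvn_graft_file_copies().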
sub graft_file_copy_lib {
|
|
|
|
my ($grafts, $l_map, $u) = @_;
|
|
|
|
my $tree_paths = $l_map->{$u};
|
|
|
|
my $pfx = common_prefix([keys %$tree_paths]);
|
|
|
|
my ($repo, $path) = repo_path_split($u.$pfx);
|
2006-11-25 07:38:17 +01:00
|
|
|
$SVN = libsvn_connect($repo);
|
git-svn: add support for Perl SVN::* libraries
This means we no longer have to deal with having bloated SVN
working copies around and we get a nice performance increase as
well because we don't have to exec the SVN binary and start a
new server connection each time.
Of course we have to manually manage memory with SVN::Pool
whenever we can, and hack around cases where SVN just eats
memory despite pools (I blame Perl, too). I would like to
keep memory usage as stable as possible during long fetch/commit
processes since I still use computers with only 256-512M RAM.
commit should always be faster with the SVN library code. The
SVN::Delta interface is leaky (or I'm not using it with pools
correctly), so I'm forking on every commit, but that doesn't
seem to hurt performance too much (at least on normal Unix/Linux
systems where fork() is pretty cheap).
fetch should be faster in most common cases, but probably not all.
fetches will be faster where client/server delta generation is
the bottleneck and not bandwidth. Of course, full-files are
generated server-side via deltas, too. Full files are always
transferred when they're updated, just like git-svnimport and
unlike command-line svn. I'm also hacking around memory leaks
(see comments) here by using some more forks.
I've tested fetch with http://, https://, file://, and svn://
repositories, so we should be reasonably covered in terms of
error handling for fetching.
Of course, we'll keep plain command-line svn compatibility as a
fallback for people running SVN 1.1 (I'm looking into library
support for 1.1.x SVN, too). If you want to force command-line
SVN usage, set GIT_SVN_NO_LIB=1 in your environment.
We also require two simultaneous connections (just like
git-svnimport), but this shouldn't be a problem for most
servers.
Less important commands:
show-ignore is slower because it requires repository
access, but -r/--revision <num> can be specified.
graft-branches may use more memory, but it's a
short-term process and is funky-filename-safe.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
2006-06-13 00:23:48 +02:00
|
|
|
|
|
|
|
	my ($base, $head) = libsvn_parse_revision();
	my $inc = 1000;
	my ($min, $max) = ($base, $head < $base+$inc ? $head : $base+$inc);
	my $eh = $SVN::Error::handler;
	$SVN::Error::handler = \&libsvn_skip_unknown_revs;
	while (1) {
		my $pool = SVN::Pool->new;
		libsvn_get_log(libsvn_dup_ra($SVN), [$path],
		               $min, $max, 0, 2, 1,
			sub {
				libsvn_graft_file_copies($grafts, $tree_paths,
							$path, @_);
			}, $pool);
		$pool->clear;
		last if ($max >= $head);
		$min = $max + 1;
		$max += $inc;
		$max = $head if ($max > $head);
	}
	$SVN::Error::handler = $eh;
}

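# Find merge parents hinted at by branch names in a commit message:
# a name that maps directly to a tracked branch is a "strong" match,
# otherwise the name (and then its basename) is retried as a
# case-insensitive regexp against the tracked branches.  For each
# strong match, the revision from the git-svn-id: trailer is used to
# look up the commit to record as a graft parent.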
sub process_merge_msg_matches {
	my ($grafts, $l_map, $u, $p, $c, @matches) = @_;
	my (@strong, @weak);
	foreach (@matches) {
		# merging with ourselves is not interesting
		next if $_ eq $p;
		if ($l_map->{$u}->{$_}) {
			push @strong, $_;
		} else {
			push @weak, $_;
		}
	}
	foreach my $w (@weak) {
		last if @strong;
		# no exact match, use branch name as regexp.
		my $re = qr/\Q$w\E/i;
		foreach (keys %{$l_map->{$u}}) {
			if (/$re/) {
				push @strong, $l_map->{$u}->{$_};
				last;
			}
		}
		last if @strong;
		$w = basename($w);
		$re = qr/\Q$w\E/i;
		foreach (keys %{$l_map->{$u}}) {
			if (/$re/) {
				push @strong, $l_map->{$u}->{$_};
				last;
			}
		}
	}
	my ($rev) = ($c->{m} =~ /^git-svn-id:\s(?:\S+?)\@(\d+)
					\s(?:[a-f\d\-]+)$/xsm);
	unless (defined $rev) {
		($rev) = ($c->{m} =~/^git-svn-id:\s(\d+)
					\@(?:[a-f\d\-]+)/xsm);
		return unless defined $rev;
	}
	foreach my $m (@strong) {
		my ($r0, $s0) = find_rev_before($rev, $m, 1);
		$grafts->{$c->{c}}->{$s0} = 1 if defined $s0;
	}
}

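# Walk the rev-list of a tracked branch and graft in merge parents
# wherever a commit message matches one of the user-supplied regexps.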
sub graft_merge_msg {
	my ($grafts, $l_map, $u, $p, @re) = @_;

	my $x = $l_map->{$u}->{$p};
	my $rl = rev_list_raw($x);
	while (my $c = next_rev_list_entry($rl)) {
		foreach my $re (@re) {
			my (@br) = ($c->{m} =~ /$re/g);
			next unless @br;
			process_merge_msg_matches($grafts,$l_map,$u,$p,$c,@br);
		}
	}
}

sub read_uuid {
	return if $SVN_UUID;
	my $pool = SVN::Pool->new;
	$SVN_UUID = $SVN->get_uuid($pool);
	$pool->clear;
}

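# Resolve $ref with rev-parse and return its output, or undef if the
# ref does not exist (the eval swallows the rev-parse failure).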
sub verify_ref {
	my ($ref) = @_;
	eval { command_oneline([ 'rev-parse', $ref ], { STDERR => 0 }) };
}

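# Split a full SVN URL into (repository root, path inside repository).
# Cached root prefixes are tried first; otherwise we connect to the
# server and ask it for the repository root.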
sub repo_path_split {
	my $full_url = shift;
	$full_url =~ s#/+$##;

	foreach (@repo_path_split_cache) {
		if ($full_url =~ s#$_##) {
			my $u = $1;
			$full_url =~ s#^/+##;
			return ($u, $full_url);
		}
	}
	my $tmp = libsvn_connect($full_url);
	return ($tmp->{repos_root}, $tmp->{svn_path});
}

sub setup_git_svn {
	defined $SVN_URL or croak "SVN repository location required\n";
	unless (-d $GIT_DIR) {
		croak "GIT_DIR=$GIT_DIR does not exist!\n";
	}
	mkpath([$GIT_SVN_DIR]);
	mkpath(["$GIT_SVN_DIR/info"]);
	open my $fh, '>>',$REVDB or croak $!;
	close $fh;
	s_to_file($SVN_URL,"$GIT_SVN_DIR/info/url");
}

sub get_tree_from_treeish {
	my ($treeish) = @_;
	croak "Not a sha1: $treeish\n" unless $treeish =~ /^$sha1$/o;
	my $type = command_oneline(qw/cat-file -t/, $treeish);
	my $expected;
	while ($type eq 'tag') {
		($treeish, $type) = command(qw/cat-file tag/, $treeish);
	}
	if ($type eq 'commit') {
		$expected = (grep /^tree /, command(qw/cat-file commit/,
		                                    $treeish))[0];
		($expected) = ($expected =~ /^tree ($sha1)$/);
		die "Unable to get tree from $treeish\n" unless $expected;
	} elsif ($type eq 'tree') {
		$expected = $treeish;
	} else {
		die "$treeish is a $type, expected tree, tag or commit\n";
	}
	return $expected;
}

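# Run git diff-tree -z between two trees and parse the NUL-delimited
# output with a small state machine: each 'meta' record (modes, sha1,
# change type) is followed by one pathname, or by two pathnames
# (file_a, then file_b) for copies and renames.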
sub get_diff {
	my ($from, $treeish) = @_;
	print "diff-tree $from $treeish\n";
	my @diff_tree = qw(diff-tree -z -r);
	if ($_cp_similarity) {
		push @diff_tree, "-C$_cp_similarity";
	} else {
		push @diff_tree, '-C';
	}
	push @diff_tree, '--find-copies-harder' if $_find_copies_harder;
	push @diff_tree, "-l$_l" if defined $_l;
	push @diff_tree, $from, $treeish;
	my ($diff_fh, $ctx) = command_output_pipe(@diff_tree);
	local $/ = "\0";
	my $state = 'meta';
	my @mods;
	while (<$diff_fh>) {
		chomp $_; # this gets rid of the trailing "\0"
		if ($state eq 'meta' && /^:(\d{6})\s(\d{6})\s
					$sha1\s($sha1)\s([MTCRAD])\d*$/xo) {
			push @mods, { mode_a => $1, mode_b => $2,
			              sha1_b => $3, chg => $4 };
			if ($4 =~ /^(?:C|R)$/) {
				$state = 'file_a';
			} else {
				$state = 'file_b';
			}
		} elsif ($state eq 'file_a') {
			my $x = $mods[$#mods] or croak "Empty array\n";
			if ($x->{chg} !~ /^(?:C|R)$/) {
				croak "Error parsing $_, $x->{chg}\n";
			}
			$x->{file_a} = $_;
			$state = 'file_b';
		} elsif ($state eq 'file_b') {
			my $x = $mods[$#mods] or croak "Empty array\n";
			if (exists $x->{file_a} && $x->{chg} !~ /^(?:C|R)$/) {
				croak "Error parsing $_, $x->{chg}\n";
			}
			if (!exists $x->{file_a} && $x->{chg} =~ /^(?:C|R)$/) {
				croak "Error parsing $_, $x->{chg}\n";
			}
			$x->{file_b} = $_;
			$state = 'meta';
		} else {
			croak "Error parsing $_\n";
		}
	}
	command_close_pipe($diff_fh, $ctx);
	return \@mods;
}

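# Replay a git tree difference against the SVN commit editor $ed:
# copies and renames are applied first, then deletes, then adds,
# modifications and type changes; empty directories are removed
# afterwards if --rmdir was requested.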
sub libsvn_checkout_tree {
	my ($from, $treeish, $ed) = @_;
	my $mods = get_diff($from, $treeish);
	return $mods unless (scalar @$mods);
	my %o = ( D => 1, R => 0, C => -1, A => 3, M => 3, T => 3 );
	foreach my $m (sort { $o{$a->{chg}} <=> $o{$b->{chg}} } @$mods) {
		my $f = $m->{chg};
		if (defined $o{$f}) {
			$ed->$f($m, $_q);
		} else {
			croak "Invalid change type: $f\n";
		}
	}
	$ed->rmdirs($_q) if $_rmdir;
	return $mods;
}

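# Write the log message of $commit to the file $commit_msg, dropping
# any old git-svn-id: trailer, optionally letting the user edit it,
# and return it as { msg => ... }.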
sub get_commit_message {
	my ($commit, $commit_msg) = (@_);
	my %log_msg = ( msg => '' );
	open my $msg, '>', $commit_msg or croak $!;

	my $type = command_oneline(qw/cat-file -t/, $commit);
	if ($type eq 'commit' || $type eq 'tag') {
		my ($msg_fh, $ctx) = command_output_pipe('cat-file',
		                                         $type, $commit);
		my $in_msg = 0;
		while (<$msg_fh>) {
			if (!$in_msg) {
				$in_msg = 1 if (/^\s*$/);
			} elsif (/^git-svn-id: /) {
				# skip this, we regenerate the correct one
				# on re-fetch anyways
			} else {
				print $msg $_ or croak $!;
			}
		}
		command_close_pipe($msg_fh, $ctx);
	}
	close $msg or croak $!;

	if ($_edit || ($type eq 'tree')) {
		my $editor = $ENV{VISUAL} || $ENV{EDITOR} || 'vi';
		system($editor, $commit_msg);
	}

	# file_to_s removes all trailing newlines, so just use chomp() here:
	open $msg, '<', $commit_msg or croak $!;
	{ local $/; chomp($log_msg{msg} = <$msg>); }
	close $msg or croak $!;

	return \%log_msg;
}

sub set_svn_commit_env {
	if (defined $LC_ALL) {
		$ENV{LC_ALL} = $LC_ALL;
	} else {
		delete $ENV{LC_ALL};
	}
}

sub rev_list_raw {
	my ($fh, $c) = command_output_pipe(qw/rev-list --pretty=raw/, @_);
	return { fh => $fh, ctx => $c, t => { } };
}

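# Pull the next commit out of a rev_list_raw() handle as a hash with
# the commit sha1 (c), its parents (p) and its message (m).
# rev-list --pretty=raw indents message lines with four spaces, which
# are stripped below.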
sub next_rev_list_entry {
	my $rl = shift;
	my $fh = $rl->{fh};
	my $x = $rl->{t};
	while (<$fh>) {
		if (/^commit ($sha1)$/o) {
			if ($x->{c}) {
				$rl->{t} = { c => $1 };
				return $x;
			} else {
				$x->{c} = $1;
			}
		} elsif (/^parent ($sha1)$/o) {
			$x->{p}->{$1} = 1;
		} elsif (s/^    //) {
			$x->{m} ||= '';
			$x->{m} .= $_;
		}
	}
	command_close_pipe($fh, $rl->{ctx});
	return ($x != $rl->{t}) ? $x : undef;
}

sub s_to_file {
	my ($str, $file, $mode) = @_;
	open my $fd,'>',$file or croak $!;
	print $fd $str,"\n" or croak $!;
	close $fd or croak $!;
	chmod ($mode &~ umask, $file) if (defined $mode);
}

sub file_to_s {
	my $file = shift;
	open my $fd,'<',$file or croak "$!: file: $file\n";
	local $/;
	my $ret = <$fd>;
	close $fd or croak $!;
	$ret =~ s/\s*$//s;
	return $ret;
}

sub assert_revision_unknown {
	my $r = shift;
	if (my $c = revdb_get($REVDB, $r)) {
		croak "$r = $c already exists! Why are we refetching it?";
	}
}

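# Create the git commit object for an imported SVN revision: collect
# candidate parents (explicit ones, the current remote ref, and
# tree-matched branch points), write the tree from the git-svn index
# if none was supplied, append the git-svn-id: trailer unless
# --no-metadata is in effect, then update the remote ref and the
# revision database.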
sub git_commit {
|
|
|
|
my ($log_msg, @parents) = @_;
|
|
|
|
assert_revision_unknown($log_msg->{revision});
|
2006-03-03 10:20:07 +01:00
|
|
|
map_tree_joins() if (@_branch_from && !%tree_map);
|
|
|
|
|
git-svn: add support for Perl SVN::* libraries
This means we no longer have to deal with having bloated SVN
working copies around and we get a nice performance increase as
well because we don't have to exec the SVN binary and start a
new server connection each time.
Of course we have to manually manage memory with SVN::Pool
whenever we can, and hack around cases where SVN just eats
memory despite pools (I blame Perl, too). I would like to
keep memory usage as stable as possible during long fetch/commit
processes since I still use computers with only 256-512M RAM.
commit should always be faster with the SVN library code. The
SVN::Delta interface is leaky (or I'm not using it with pools
correctly), so I'm forking on every commit, but that doesn't
seem to hurt performance too much (at least on normal Unix/Linux
systems where fork() is pretty cheap).
fetch should be faster in most common cases, but probably not all.
fetches will be faster where client/server delta generation is
the bottleneck and not bandwidth. Of course, full-files are
generated server-side via deltas, too. Full files are always
transferred when they're updated, just like git-svnimport and
unlike command-line svn. I'm also hacking around memory leaks
(see comments) here by using some more forks.
I've tested fetch with http://, https://, file://, and svn://
repositories, so we should be reasonably covered in terms of
error handling for fetching.
Of course, we'll keep plain command-line svn compatibility as a
fallback for people running SVN 1.1 (I'm looking into library
support for 1.1.x SVN, too). If you want to force command-line
SVN usage, set GIT_SVN_NO_LIB=1 in your environment.
We also require two simultaneous connections (just like
git-svnimport), but this shouldn't be a problem for most
servers.
Less important commands:
show-ignore is slower because it requires repository
access, but -r/--revision <num> can be specified.
graft-branches may use more memory, but it's a
short-term process and is funky-filename-safe.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
2006-06-13 00:23:48 +02:00
|
|
|
my (@tmp_parents, @exec_parents, %seen_parent);
|
|
|
|
if (my $lparents = $log_msg->{parents}) {
|
|
|
|
@tmp_parents = @$lparents
|
|
|
|
}
|
2006-02-16 10:24:16 +01:00
|
|
|
# commit parents can be conditionally bound to a particular
|
|
|
|
# svn revision via: "svn_revno=commit_sha1", filter them out here:
|
|
|
|
foreach my $p (@parents) {
|
|
|
|
next unless defined $p;
|
|
|
|
if ($p =~ /^(\d+)=($sha1_short)$/o) {
|
|
|
|
if ($1 == $log_msg->{revision}) {
|
git-svn: add support for Perl SVN::* libraries
This means we no longer have to deal with having bloated SVN
working copies around and we get a nice performance increase as
well because we don't have to exec the SVN binary and start a
new server connection each time.
Of course we have to manually manage memory with SVN::Pool
whenever we can, and hack around cases where SVN just eats
memory despite pools (I blame Perl, too). I would like to
keep memory usage as stable as possible during long fetch/commit
processes since I still use computers with only 256-512M RAM.
commit should always be faster with the SVN library code. The
SVN::Delta interface is leaky (or I'm not using it with pools
correctly), so I'm forking on every commit, but that doesn't
seem to hurt performance too much (at least on normal Unix/Linux
systems where fork() is pretty cheap).
fetch should be faster in most common cases, but probably not all.
fetches will be faster where client/server delta generation is
the bottleneck and not bandwidth. Of course, full-files are
generated server-side via deltas, too. Full files are always
transferred when they're updated, just like git-svnimport and
unlike command-line svn. I'm also hacking around memory leaks
(see comments) here by using some more forks.
I've tested fetch with http://, https://, file://, and svn://
repositories, so we should be reasonably covered in terms of
error handling for fetching.
Of course, we'll keep plain command-line svn compatibility as a
fallback for people running SVN 1.1 (I'm looking into library
support for 1.1.x SVN, too). If you want to force command-line
SVN usage, set GIT_SVN_NO_LIB=1 in your environment.
We also require two simultaneous connections (just like
git-svnimport), but this shouldn't be a problem for most
servers.
Less important commands:
show-ignore is slower because it requires repository
access, but -r/--revision <num> can be specified.
graft-branches may use more memory, but it's a
short-term process and is funky-filename-safe.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
2006-06-13 00:23:48 +02:00
|
|
|
push @tmp_parents, $2;
|
2006-02-16 10:24:16 +01:00
|
|
|
}
|
|
|
|
} else {
|
git-svn: add support for Perl SVN::* libraries
This means we no longer have to deal with having bloated SVN
working copies around and we get a nice performance increase as
well because we don't have to exec the SVN binary and start a
new server connection each time.
Of course we have to manually manage memory with SVN::Pool
whenever we can, and hack around cases where SVN just eats
memory despite pools (I blame Perl, too). I would like to
keep memory usage as stable as possible during long fetch/commit
processes since I still use computers with only 256-512M RAM.
commit should always be faster with the SVN library code. The
SVN::Delta interface is leaky (or I'm not using it with pools
correctly), so I'm forking on every commit, but that doesn't
seem to hurt performance too much (at least on normal Unix/Linux
systems where fork() is pretty cheap).
fetch should be faster in most common cases, but probably not all.
fetches will be faster where client/server delta generation is
the bottleneck and not bandwidth. Of course, full-files are
generated server-side via deltas, too. Full files are always
transferred when they're updated, just like git-svnimport and
unlike command-line svn. I'm also hacking around memory leaks
(see comments) here by using some more forks.
I've tested fetch with http://, https://, file://, and svn://
repositories, so we should be reasonably covered in terms of
error handling for fetching.
Of course, we'll keep plain command-line svn compatibility as a
fallback for people running SVN 1.1 (I'm looking into library
support for 1.1.x SVN, too). If you want to force command-line
SVN usage, set GIT_SVN_NO_LIB=1 in your environment.
We also require two simultaneous connections (just like
git-svnimport), but this shouldn't be a problem for most
servers.
Less important commands:
show-ignore is slower because it requires repository
access, but -r/--revision <num> can be specified.
graft-branches may use more memory, but it's a
short-term process and is funky-filename-safe.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
2006-06-13 00:23:48 +02:00
|
|
|
push @tmp_parents, $p if $p =~ /$sha1_short/o;
|
2006-02-16 10:24:16 +01:00
|
|
|
}
|
|
|
|
}
|
git-svn: add support for Perl SVN::* libraries
This means we no longer have to deal with having bloated SVN
working copies around and we get a nice performance increase as
well because we don't have to exec the SVN binary and start a
new server connection each time.
Of course we have to manually manage memory with SVN::Pool
whenever we can, and hack around cases where SVN just eats
memory despite pools (I blame Perl, too). I would like to
keep memory usage as stable as possible during long fetch/commit
processes since I still use computers with only 256-512M RAM.
commit should always be faster with the SVN library code. The
SVN::Delta interface is leaky (or I'm not using it with pools
correctly), so I'm forking on every commit, but that doesn't
seem to hurt performance too much (at least on normal Unix/Linux
systems where fork() is pretty cheap).
fetch should be faster in most common cases, but probably not all.
fetches will be faster where client/server delta generation is
the bottleneck and not bandwidth. Of course, full-files are
generated server-side via deltas, too. Full files are always
transferred when they're updated, just like git-svnimport and
unlike command-line svn. I'm also hacking around memory leaks
(see comments) here by using some more forks.
I've tested fetch with http://, https://, file://, and svn://
repositories, so we should be reasonably covered in terms of
error handling for fetching.
Of course, we'll keep plain command-line svn compatibility as a
fallback for people running SVN 1.1 (I'm looking into library
support for 1.1.x SVN, too). If you want to force command-line
SVN usage, set GIT_SVN_NO_LIB=1 in your environment.
We also require two simultaneous connections (just like
git-svnimport), but this shouldn't be a problem for most
servers.
Less important commands:
show-ignore is slower because it requires repository
access, but -r/--revision <num> can be specified.
graft-branches may use more memory, but it's a
short-term process and is funky-filename-safe.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
2006-06-13 00:23:48 +02:00
|
|
|
my $tree = $log_msg->{tree};
|
|
|
|
if (!defined $tree) {
|
|
|
|
my $index = set_index($GIT_SVN_INDEX);
|
2006-12-15 19:59:54 +01:00
|
|
|
$tree = command_oneline('write-tree');
|
2006-05-24 10:40:37 +02:00
|
|
|
croak $? if $?;
|
git-svn: add support for Perl SVN::* libraries
This means we no longer have to deal with having bloated SVN
working copies around and we get a nice performance increase as
well because we don't have to exec the SVN binary and start a
new server connection each time.
Of course we have to manually manage memory with SVN::Pool
whenever we can, and hack around cases where SVN just eats
memory despite pools (I blame Perl, too). I would like to
keep memory usage as stable as possible during long fetch/commit
processes since I still use computers with only 256-512M RAM.
commit should always be faster with the SVN library code. The
SVN::Delta interface is leaky (or I'm not using it with pools
correctly), so I'm forking on every commit, but that doesn't
seem to hurt performance too much (at least on normal Unix/Linux
systems where fork() is pretty cheap).
fetch should be faster in most common cases, but probably not all.
fetches will be faster where client/server delta generation is
the bottleneck and not bandwidth. Of course, full-files are
generated server-side via deltas, too. Full files are always
transferred when they're updated, just like git-svnimport and
unlike command-line svn. I'm also hacking around memory leaks
(see comments) here by using some more forks.
I've tested fetch with http://, https://, file://, and svn://
repositories, so we should be reasonably covered in terms of
error handling for fetching.
Of course, we'll keep plain command-line svn compatibility as a
fallback for people running SVN 1.1 (I'm looking into library
support for 1.1.x SVN, too). If you want to force command-line
SVN usage, set GIT_SVN_NO_LIB=1 in your environment.
We also require two simultaneous connections (just like
git-svnimport), but this shouldn't be a problem for most
servers.
Less important commands:
show-ignore is slower because it requires repository
access, but -r/--revision <num> can be specified.
graft-branches may use more memory, but it's a
short-term process and is funky-filename-safe.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
2006-06-13 00:23:48 +02:00
|
|
|
restore_index($index);
|
|
|
|
}
|
git-svn: add --follow-parent and --no-metadata options to fetch
--follow-parent:
This is especially helpful when we're tracking a directory
that has been moved around within the repository, or if we
started tracking a branch and never tracked the trunk it was
descended from.
This relies on the SVN::* libraries to work. We can't
reliably parse path info from the svn command-line client
without relying on XML, so it's better just to have the SVN::*
libs installed.
This also removes oldvalue verification when calling update-ref
In SVN, branches can be deleted, and then recreated under the
same path as the original one with different ancestry
information, causing parent information to be mismatched /
misordered.
Also force the current ref, if existing, to be a parent,
regardless of whether or not it was specified.
--no-metadata:
This gets rid of the git-svn-id: lines at the end of every commit.
With this, you lose the ability to use the rebuild command. If
you ever lose your .git/svn/git-svn/.rev_db file, you won't be
able to fetch again, either. This is fine for one-shot imports.
Also fix some issues with multi-fetch --follow-parent that were
exposed while testing this. Additionally, repack checking is
simplified greatly.
git-svn log will not work on repositories using this, either.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-06-28 04:39:13 +02:00
|
|
|
# just in case we clobber the existing ref, we still want that ref
|
|
|
|
# as our parent:
|
2006-12-15 19:59:54 +01:00
|
|
|
if (my $cur = verify_ref("refs/remotes/$GIT_SVN^0")) {
|
2006-12-13 01:45:00 +01:00
|
|
|
chomp $cur;
|
git-svn: add --follow-parent and --no-metadata options to fetch
--follow-parent:
This is especially helpful when we're tracking a directory
that has been moved around within the repository, or if we
started tracking a branch and never tracked the trunk it was
descended from.
This relies on the SVN::* libraries to work. We can't
reliably parse path info from the svn command-line client
without relying on XML, so it's better just to have the SVN::*
libs installed.
This also removes oldvalue verification when calling update-ref
In SVN, branches can be deleted, and then recreated under the
same path as the original one with different ancestry
information, causing parent information to be mismatched /
misordered.
Also force the current ref, if existing, to be a parent,
regardless of whether or not it was specified.
--no-metadata:
This gets rid of the git-svn-id: lines at the end of every commit.
With this, you lose the ability to use the rebuild command. If
you ever lose your .git/svn/git-svn/.rev_db file, you won't be
able to fetch again, either. This is fine for one-shot imports.
Also fix some issues with multi-fetch --follow-parent that were
exposed while testing this. Additionally, repack checking is
simplified greatly.
git-svn log will not work on repositories using this, either.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-06-28 04:39:13 +02:00
|
|
|
push @tmp_parents, $cur;
|
|
|
|
}
|
|
|
|
|
git-svn: add support for Perl SVN::* libraries
This means we no longer have to deal with having bloated SVN
working copies around and we get a nice performance increase as
well because we don't have to exec the SVN binary and start a
new server connection each time.
Of course we have to manually manage memory with SVN::Pool
whenever we can, and hack around cases where SVN just eats
memory despite pools (I blame Perl, too). I would like to
keep memory usage as stable as possible during long fetch/commit
processes since I still use computers with only 256-512M RAM.
commit should always be faster with the SVN library code. The
SVN::Delta interface is leaky (or I'm not using it with pools
correctly), so I'm forking on every commit, but that doesn't
seem to hurt performance too much (at least on normal Unix/Linux
systems where fork() is pretty cheap).
fetch should be faster in most common cases, but probably not all.
fetches will be faster where client/server delta generation is
the bottleneck and not bandwidth. Of course, full-files are
generated server-side via deltas, too. Full files are always
transferred when they're updated, just like git-svnimport and
unlike command-line svn. I'm also hacking around memory leaks
(see comments) here by using some more forks.
I've tested fetch with http://, https://, file://, and svn://
repositories, so we should be reasonably covered in terms of
error handling for fetching.
Of course, we'll keep plain command-line svn compatibility as a
fallback for people running SVN 1.1 (I'm looking into library
support for 1.1.x SVN, too). If you want to force command-line
SVN usage, set GIT_SVN_NO_LIB=1 in your environment.
We also require two simultaneous connections (just like
git-svnimport), but this shouldn't be a problem for most
servers.
Less important commands:
show-ignore is slower because it requires repository
access, but -r/--revision <num> can be specified.
graft-branches may use more memory, but it's a
short-term process and is funky-filename-safe.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
2006-06-13 00:23:48 +02:00
|
|
|
if (exists $tree_map{$tree}) {
|
2006-06-28 04:39:11 +02:00
|
|
|
foreach my $p (@{$tree_map{$tree}}) {
|
|
|
|
my $skip;
|
|
|
|
foreach (@tmp_parents) {
|
|
|
|
# see if a common parent is found
|
2006-12-15 19:59:54 +01:00
|
|
|
my $mb = eval { command('merge-base', $_, $p) };
|
2006-06-28 04:39:11 +02:00
|
|
|
next if ($@ || $?);
|
|
|
|
$skip = 1;
|
|
|
|
last;
|
|
|
|
}
|
|
|
|
next if $skip;
|
|
|
|
my ($url_p, $r_p, $uuid_p) = cmt_metadata($p);
|
|
|
|
next if (($SVN_UUID eq $uuid_p) &&
|
|
|
|
($log_msg->{revision} > $r_p));
|
|
|
|
next if (defined $url_p && defined $SVN_URL &&
|
|
|
|
($SVN_UUID eq $uuid_p) &&
|
|
|
|
($url_p eq $SVN_URL));
|
|
|
|
push @tmp_parents, $p;
|
|
|
|
}
|
git-svn: add support for Perl SVN::* libraries
This means we no longer have to deal with having bloated SVN
working copies around and we get a nice performance increase as
well because we don't have to exec the SVN binary and start a
new server connection each time.
Of course we have to manually manage memory with SVN::Pool
whenever we can, and hack around cases where SVN just eats
memory despite pools (I blame Perl, too). I would like to
keep memory usage as stable as possible during long fetch/commit
processes since I still use computers with only 256-512M RAM.
commit should always be faster with the SVN library code. The
SVN::Delta interface is leaky (or I'm not using it with pools
correctly), so I'm forking on every commit, but that doesn't
seem to hurt performance too much (at least on normal Unix/Linux
systems where fork() is pretty cheap).
fetch should be faster in most common cases, but probably not all.
fetches will be faster where client/server delta generation is
the bottleneck and not bandwidth. Of course, full-files are
generated server-side via deltas, too. Full files are always
transferred when they're updated, just like git-svnimport and
unlike command-line svn. I'm also hacking around memory leaks
(see comments) here by using some more forks.
I've tested fetch with http://, https://, file://, and svn://
repositories, so we should be reasonably covered in terms of
error handling for fetching.
Of course, we'll keep plain command-line svn compatibility as a
fallback for people running SVN 1.1 (I'm looking into library
support for 1.1.x SVN, too). If you want to force command-line
SVN usage, set GIT_SVN_NO_LIB=1 in your environment.
We also require two simultaneous connections (just like
git-svnimport), but this shouldn't be a problem for most
servers.
Less important commands:
show-ignore is slower because it requires repository
access, but -r/--revision <num> can be specified.
graft-branches may use more memory, but it's a
short-term process and is funky-filename-safe.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
2006-06-13 00:23:48 +02:00
|
|
|
}
|
|
|
|
foreach (@tmp_parents) {
|
|
|
|
next if $seen_parent{$_};
|
|
|
|
$seen_parent{$_} = 1;
|
|
|
|
push @exec_parents, $_;
|
|
|
|
# MAXPARENT is defined to 16 in commit-tree.c:
|
|
|
|
last if @exec_parents > 16;
|
|
|
|
}
|
|
|
|
|
git-svn: add --follow-parent and --no-metadata options to fetch
--follow-parent:
This is especially helpful when we're tracking a directory
that has been moved around within the repository, or if we
started tracking a branch and never tracked the trunk it was
descended from.
This relies on the SVN::* libraries to work. We can't
reliably parse path info from the svn command-line client
without relying on XML, so it's better just to have the SVN::*
libs installed.
This also removes oldvalue verification when calling update-ref
In SVN, branches can be deleted, and then recreated under the
same path as the original one with different ancestry
information, causing parent information to be mismatched /
misordered.
Also force the current ref, if existing, to be a parent,
regardless of whether or not it was specified.
--no-metadata:
This gets rid of the git-svn-id: lines at the end of every commit.
With this, you lose the ability to use the rebuild command. If
you ever lose your .git/svn/git-svn/.rev_db file, you won't be
able to fetch again, either. This is fine for one-shot imports.
Also fix some issues with multi-fetch --follow-parent that were
exposed while testing this. Additionally, repack checking is
simplified greatly.
git-svn log will not work on repositories using this, either.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-06-28 04:39:13 +02:00
|
|
|
set_commit_env($log_msg);
|
|
|
|
my @exec = ('git-commit-tree', $tree);
|
|
|
|
push @exec, '-p', $_ foreach @exec_parents;
|
|
|
|
defined(my $pid = open3(my $msg_fh, my $out_fh, '>&STDERR', @exec))
|
|
|
|
or croak $!;
|
|
|
|
print $msg_fh $log_msg->{msg} or croak $!;
|
|
|
|
unless ($_no_metadata) {
|
|
|
|
print $msg_fh "\ngit-svn-id: $SVN_URL\@$log_msg->{revision}",
|
2006-03-03 10:20:09 +01:00
|
|
|
" $SVN_UUID\n" or croak $!;
|
2006-02-16 10:24:16 +01:00
|
|
|
}
|
git-svn: add --follow-parent and --no-metadata options to fetch
--follow-parent:
This is especially helpful when we're tracking a directory
that has been moved around within the repository, or if we
started tracking a branch and never tracked the trunk it was
descended from.
This relies on the SVN::* libraries to work. We can't
reliably parse path info from the svn command-line client
without relying on XML, so it's better just to have the SVN::*
libs installed.
This also removes oldvalue verification when calling update-ref
In SVN, branches can be deleted, and then recreated under the
same path as the original one with different ancestry
information, causing parent information to be mismatched /
misordered.
Also force the current ref, if existing, to be a parent,
regardless of whether or not it was specified.
--no-metadata:
This gets rid of the git-svn-id: lines at the end of every commit.
With this, you lose the ability to use the rebuild command. If
you ever lose your .git/svn/git-svn/.rev_db file, you won't be
able to fetch again, either. This is fine for one-shot imports.
Also fix some issues with multi-fetch --follow-parent that were
exposed while testing this. Additionally, repack checking is
simplified greatly.
git-svn log will not work on repositories using this, either.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-06-28 04:39:13 +02:00
|
|
|
$msg_fh->flush == 0 or croak $!;
|
|
|
|
close $msg_fh or croak $!;
|
2006-02-16 10:24:16 +01:00
|
|
|
chomp(my $commit = do { local $/; <$out_fh> });
|
git-svn: add --follow-parent and --no-metadata options to fetch
--follow-parent:
This is especially helpful when we're tracking a directory
that has been moved around within the repository, or if we
started tracking a branch and never tracked the trunk it was
descended from.
This relies on the SVN::* libraries to work. We can't
reliably parse path info from the svn command-line client
without relying on XML, so it's better just to have the SVN::*
libs installed.
This also removes oldvalue verification when calling update-ref
In SVN, branches can be deleted, and then recreated under the
same path as the original one with different ancestry
information, causing parent information to be mismatched /
misordered.
Also force the current ref, if existing, to be a parent,
regardless of whether or not it was specified.
--no-metadata:
This gets rid of the git-svn-id: lines at the end of every commit.
With this, you lose the ability to use the rebuild command. If
you ever lose your .git/svn/git-svn/.rev_db file, you won't be
able to fetch again, either. This is fine for one-shot imports.
Also fix some issues with multi-fetch --follow-parent that were
exposed while testing this. Additionally, repack checking is
simplified greatly.
git-svn log will not work on repositories using this, either.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-06-28 04:39:13 +02:00
|
|
|
close $out_fh or croak $!;
|
|
|
|
waitpid $pid, 0;
|
|
|
|
croak $? if $?;
|
2006-02-16 10:24:16 +01:00
|
|
|
if ($commit !~ /^$sha1$/o) {
|
git-svn: add --follow-parent and --no-metadata options to fetch
--follow-parent:
This is especially helpful when we're tracking a directory
that has been moved around within the repository, or if we
started tracking a branch and never tracked the trunk it was
descended from.
This relies on the SVN::* libraries to work. We can't
reliably parse path info from the svn command-line client
without relying on XML, so it's better just to have the SVN::*
libs installed.
This also removes oldvalue verification when calling update-ref
In SVN, branches can be deleted, and then recreated under the
same path as the original one with different ancestry
information, causing parent information to be mismatched /
misordered.
Also force the current ref, if existing, to be a parent,
regardless of whether or not it was specified.
--no-metadata:
This gets rid of the git-svn-id: lines at the end of every commit.
With this, you lose the ability to use the rebuild command. If
you ever lose your .git/svn/git-svn/.rev_db file, you won't be
able to fetch again, either. This is fine for one-shot imports.
Also fix some issues with multi-fetch --follow-parent that were
exposed while testing this. Additionally, repack checking is
simplified greatly.
git-svn log will not work on repositories using this, either.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-06-28 04:39:13 +02:00
|
|
|
die "Failed to commit, invalid sha1: $commit\n";
|
2006-02-16 10:24:16 +01:00
|
|
|
}
|
2006-12-15 19:59:54 +01:00
|
|
|
command_noisy('update-ref',"refs/remotes/$GIT_SVN",$commit);
|
2006-06-13 13:02:23 +02:00
|
|
|
revdb_set($REVDB, $log_msg->{revision}, $commit);
|
|
|
|
|
git-svn: add support for Perl SVN::* libraries
This means we no longer have to deal with having bloated SVN
working copies around and we get a nice performance increase as
well because we don't have to exec the SVN binary and start a
new server connection each time.
Of course we have to manually manage memory with SVN::Pool
whenever we can, and hack around cases where SVN just eats
memory despite pools (I blame Perl, too). I would like to
keep memory usage as stable as possible during long fetch/commit
processes since I still use computers with only 256-512M RAM.
commit should always be faster with the SVN library code. The
SVN::Delta interface is leaky (or I'm not using it with pools
correctly), so I'm forking on every commit, but that doesn't
seem to hurt performance too much (at least on normal Unix/Linux
systems where fork() is pretty cheap).
fetch should be faster in most common cases, but probably not all.
fetches will be faster where client/server delta generation is
the bottleneck and not bandwidth. Of course, full-files are
generated server-side via deltas, too. Full files are always
transferred when they're updated, just like git-svnimport and
unlike command-line svn. I'm also hacking around memory leaks
(see comments) here by using some more forks.
I've tested fetch with http://, https://, file://, and svn://
repositories, so we should be reasonably covered in terms of
error handling for fetching.
Of course, we'll keep plain command-line svn compatibility as a
fallback for people running SVN 1.1 (I'm looking into library
support for 1.1.x SVN, too). If you want to force command-line
SVN usage, set GIT_SVN_NO_LIB=1 in your environment.
We also require two simultaneous connections (just like
git-svnimport), but this shouldn't be a problem for most
servers.
Less important commands:
show-ignore is slower because it requires repository
access, but -r/--revision <num> can be specified.
graft-branches may use more memory, but it's a
short-term process and is funky-filename-safe.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
2006-06-13 00:23:48 +02:00
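The fork-per-commit workaround described in the message above is easier to see in isolation. Below is a minimal sketch of that pattern, not the exact code used by this script; $leaky_work is a made-up placeholder for the SVN::Delta-heavy part of a commit, and croak comes from the Carp import at the top of the file.

my $leaky_work = sub { };          # hypothetical: the part that leaks despite SVN::Pool
defined(my $pid = fork) or croak $!;
if (!$pid) {
	$leaky_work->();           # child: any leaked memory dies with the process
	exit 0;
}
waitpid $pid, 0;                   # parent: reap the child
croak $? if $?;                    # propagate failure, just like the commit path above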
|
|
|
# this output is read via pipe, do not change:
|
2006-02-16 10:24:16 +01:00
|
|
|
print "r$log_msg->{revision} = $commit\n";
|
2006-06-15 21:50:12 +02:00
|
|
|
check_repack();
|
|
|
|
return $commit;
|
|
|
|
}
|
|
|
|
|
|
|
|
sub check_repack {
|
2006-05-24 11:07:32 +02:00
|
|
|
if ($_repack && (--$_repack_nr == 0)) {
|
|
|
|
$_repack_nr = $_repack;
|
2006-12-15 19:59:54 +01:00
|
|
|
# repack doesn't use any arguments with spaces in them, does it?
|
|
|
|
command_noisy('repack', split(/\s+/, $_repack_flags));
|
2006-05-24 11:07:32 +02:00
|
|
|
}
|
2006-02-16 10:24:16 +01:00
|
|
|
}
|
|
|
|
|
2006-03-03 10:20:08 +01:00
|
|
|
sub set_commit_env {
|
2006-03-03 10:20:09 +01:00
|
|
|
my ($log_msg) = @_;
|
2006-03-03 10:20:08 +01:00
|
|
|
my $author = $log_msg->{author};
|
|
|
|
if (!defined $author || length $author == 0) {
|
|
|
|
$author = '(no author)';
|
|
|
|
}
|
2006-03-03 10:20:08 +01:00
|
|
|
my ($name,$email) = defined $users{$author} ? @{$users{$author}}
|
2006-03-03 10:20:09 +01:00
|
|
|
: ($author,"$author\@$SVN_UUID");
|
2006-03-03 10:20:08 +01:00
|
|
|
$ENV{GIT_AUTHOR_NAME} = $ENV{GIT_COMMITTER_NAME} = $name;
|
|
|
|
$ENV{GIT_AUTHOR_EMAIL} = $ENV{GIT_COMMITTER_EMAIL} = $email;
|
|
|
|
$ENV{GIT_AUTHOR_DATE} = $ENV{GIT_COMMITTER_DATE} = $log_msg->{date};
|
|
|
|
}
|
|
|
|
|
2006-03-02 06:58:31 +01:00
|
|
|
sub check_upgrade_needed {
|
2006-06-15 21:50:12 +02:00
|
|
|
if (!-r $REVDB) {
|
2006-06-16 11:55:13 +02:00
|
|
|
-d $GIT_SVN_DIR or mkpath([$GIT_SVN_DIR]);
|
2006-06-15 21:50:12 +02:00
|
|
|
open my $fh, '>>',$REVDB or croak $!;
|
|
|
|
close $fh;
|
|
|
|
}
|
2006-12-15 19:59:54 +01:00
|
|
|
return unless eval {
|
|
|
|
command([qw/rev-parse --verify/,"$GIT_SVN-HEAD^0"],
|
|
|
|
{STDERR => 0});
|
2006-03-02 06:58:31 +01:00
|
|
|
};
|
2006-12-15 19:59:54 +01:00
|
|
|
my $head = eval { command('rev-parse',"refs/remotes/$GIT_SVN") };
|
2006-03-02 06:58:31 +01:00
|
|
|
if ($@ || !$head) {
|
|
|
|
print STDERR "Please run: $0 rebuild --upgrade\n";
|
|
|
|
exit 1;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2006-03-03 10:20:07 +01:00
|
|
|
# fills %tree_map with a reverse mapping of trees to commits. Useful
|
|
|
|
# for finding parents to commit on.
|
|
|
|
sub map_tree_joins {
|
2006-04-28 12:51:16 +02:00
|
|
|
my %seen;
|
2006-03-03 10:20:07 +01:00
|
|
|
foreach my $br (@_branch_from) {
|
2006-12-15 19:59:54 +01:00
|
|
|
my $pipe = command_output_pipe(qw/rev-list
|
|
|
|
--topo-order --pretty=raw/, $br);
|
2006-03-03 10:20:07 +01:00
|
|
|
while (<$pipe>) {
|
|
|
|
if (/^commit ($sha1)$/o) {
|
|
|
|
my $commit = $1;
|
2006-04-28 12:51:16 +02:00
|
|
|
|
|
|
|
# if we've seen a commit,
|
|
|
|
# we've seen its parents
|
|
|
|
last if $seen{$commit};
|
2006-03-03 10:20:07 +01:00
|
|
|
my ($tree) = (<$pipe> =~ /^tree ($sha1)$/o);
|
|
|
|
unless (defined $tree) {
|
|
|
|
die "Failed to parse commit $commit\n";
|
|
|
|
}
|
|
|
|
push @{$tree_map{$tree}}, $commit;
|
2006-04-28 12:51:16 +02:00
|
|
|
$seen{$commit} = 1;
|
2006-03-03 10:20:07 +01:00
|
|
|
}
|
|
|
|
}
|
2006-12-15 19:59:54 +01:00
|
|
|
eval { command_close_pipe($pipe) };
|
2006-03-03 10:20:07 +01:00
|
|
|
}
|
|
|
|
}
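As a sketch of what map_tree_joins() leaves behind: %tree_map maps a tree sha1 to the list of commits that wrote that tree, so a candidate-parent lookup for a new commit is a single hash access. The sha1 below is the well-known empty tree and is only an illustration.

my $tree = '4b825dc642cb6eb9a060e54bf8d69288fbee4904';      # e.g. the empty tree
my @candidate_parents = @{$tree_map{$tree} || []};          # commits sharing that tree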
|
|
|
|
|
2006-04-28 12:42:38 +02:00
|
|
|
sub load_all_refs {
|
|
|
|
if (@_branch_from) {
|
|
|
|
print STDERR '--branch|-b parameters are ignored when ',
|
|
|
|
"--branch-all-refs|-B is passed\n";
|
|
|
|
}
|
|
|
|
|
|
|
|
# don't worry about rev-list on non-commit objects/tags,
|
|
|
|
# it shouldn't blow up if a ref is a blob or tree...
|
2006-12-15 19:59:54 +01:00
|
|
|
@_branch_from = command(qw/rev-parse --symbolic --all/);
|
2006-04-28 12:42:38 +02:00
|
|
|
}
|
|
|
|
|
2006-03-03 10:20:08 +01:00
|
|
|
# '<svn username> = real-name <email address>' mapping based on git-svnimport:
|
|
|
|
sub load_authors {
|
|
|
|
open my $authors, '<', $_authors or die "Can't open $_authors $!\n";
|
|
|
|
while (<$authors>) {
|
|
|
|
chomp;
|
2006-09-25 05:04:55 +02:00
|
|
|
next unless /^(\S+?|\(no author\))\s*=\s*(.+?)\s*<(.+)>\s*$/;
|
2006-03-03 10:20:08 +01:00
|
|
|
my ($user, $name, $email) = ($1, $2, $3);
|
|
|
|
$users{$user} = [$name, $email];
|
|
|
|
}
|
|
|
|
close $authors or croak $!;
|
|
|
|
}
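For illustration, one authors-file line in the format described above, run through the same pattern load_authors() uses; the entry itself is made up.

my $line = 'jrandom = J. Random Hacker <jrandom@example.com>';   # made-up entry
if ($line =~ /^(\S+?|\(no author\))\s*=\s*(.+?)\s*<(.+)>\s*$/) {
	my ($user, $name, $email) = ($1, $2, $3);
	# $user  = 'jrandom'
	# $name  = 'J. Random Hacker'
	# $email = 'jrandom@example.com'
}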
|
|
|
|
|
2006-06-01 11:35:44 +02:00
|
|
|
sub rload_authors {
|
|
|
|
open my $authors, '<', $_authors or die "Can't open $_authors $!\n";
|
|
|
|
while (<$authors>) {
|
|
|
|
chomp;
|
|
|
|
next unless /^(\S+?)\s*=\s*(.+?)\s*<(.+)>\s*$/;
|
|
|
|
my ($user, $name, $email) = ($1, $2, $3);
|
|
|
|
$rusers{"$name <$email>"} = $user;
|
|
|
|
}
|
|
|
|
close $authors or croak $!;
|
|
|
|
}
|
|
|
|
|
2006-06-13 00:53:13 +02:00
|
|
|
sub git_svn_each {
|
|
|
|
my $sub = shift;
|
2006-12-15 19:59:54 +01:00
|
|
|
foreach (command(qw/rev-parse --symbolic --all/)) {
|
2006-06-13 00:53:13 +02:00
|
|
|
next unless s#^refs/remotes/##;
|
|
|
|
chomp $_;
|
|
|
|
next unless -f "$GIT_DIR/svn/$_/info/url";
|
|
|
|
&$sub($_);
|
|
|
|
}
|
|
|
|
}
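A minimal usage sketch of git_svn_each(): the callback is handed each remote id (the part after refs/remotes/) that has an info/url file, one at a time.

git_svn_each(sub {
	my ($id) = @_;
	print "tracked git-svn remote: $id\n";
});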
|
|
|
|
|
2006-06-13 13:02:23 +02:00
|
|
|
sub migrate_revdb {
|
|
|
|
git_svn_each(sub {
|
|
|
|
my $id = shift;
|
|
|
|
defined(my $pid = fork) or croak $!;
|
|
|
|
if (!$pid) {
|
|
|
|
$GIT_SVN = $ENV{GIT_SVN_ID} = $id;
|
|
|
|
init_vars();
|
|
|
|
exit 0 if -r $REVDB;
|
|
|
|
print "Upgrading svn => git mapping...\n";
|
2006-06-16 11:55:13 +02:00
|
|
|
-d $GIT_SVN_DIR or mkpath([$GIT_SVN_DIR]);
|
2006-06-13 13:02:23 +02:00
|
|
|
open my $fh, '>>',$REVDB or croak $!;
|
|
|
|
close $fh;
|
|
|
|
rebuild();
|
|
|
|
print "Done upgrading. You may now delete the ",
|
|
|
|
"deprecated $GIT_SVN_DIR/revs directory\n";
|
|
|
|
exit 0;
|
|
|
|
}
|
|
|
|
waitpid $pid, 0;
|
|
|
|
croak $? if $?;
|
|
|
|
});
|
|
|
|
}
|
|
|
|
|
2006-05-24 10:22:07 +02:00
|
|
|
sub migration_check {
|
2006-06-13 13:02:23 +02:00
|
|
|
migrate_revdb() unless (-e $REVDB);
|
2006-05-24 10:22:07 +02:00
|
|
|
return if (-d "$GIT_DIR/svn" || !-d $GIT_DIR);
|
|
|
|
print "Upgrading repository...\n";
|
|
|
|
unless (-d "$GIT_DIR/svn") {
|
|
|
|
mkdir "$GIT_DIR/svn" or croak $!;
|
|
|
|
}
|
|
|
|
print "Data from a previous version of git-svn exists, but\n\t",
|
|
|
|
"$GIT_SVN_DIR\n\t(required for this version ",
|
|
|
|
"($VERSION) of git-svn) does not.\n";
|
|
|
|
|
2006-12-15 19:59:54 +01:00
|
|
|
foreach my $x (command(qw/rev-parse --symbolic --all/)) {
|
2006-05-24 10:22:07 +02:00
|
|
|
next unless $x =~ s#^refs/remotes/##;
|
|
|
|
chomp $x;
|
|
|
|
next unless -f "$GIT_DIR/$x/info/url";
|
|
|
|
my $u = eval { file_to_s("$GIT_DIR/$x/info/url") };
|
|
|
|
next unless $u;
|
|
|
|
my $dn = dirname("$GIT_DIR/svn/$x");
|
|
|
|
mkpath([$dn]) unless -d $dn;
|
|
|
|
rename "$GIT_DIR/$x", "$GIT_DIR/svn/$x" or croak "$!: $x";
|
|
|
|
}
|
2006-06-13 13:02:23 +02:00
|
|
|
migrate_revdb() if (-d $GIT_SVN_DIR && !-w $REVDB);
|
2006-05-24 10:22:07 +02:00
|
|
|
print "Done upgrading.\n";
|
|
|
|
}
|
|
|
|
|
2006-06-13 00:53:13 +02:00
|
|
|
sub find_rev_before {
|
2006-06-13 13:02:23 +02:00
|
|
|
my ($r, $id, $eq_ok) = @_;
|
|
|
|
my $f = "$GIT_DIR/svn/$id/.rev_db";
|
2006-06-15 21:50:12 +02:00
|
|
|
return (undef,undef) unless -r $f;
|
|
|
|
--$r unless $eq_ok;
|
2006-06-13 13:02:23 +02:00
|
|
|
while ($r > 0) {
|
|
|
|
if (my $c = revdb_get($f, $r)) {
|
|
|
|
return ($r, $c);
|
|
|
|
}
|
|
|
|
--$r;
|
2006-06-13 00:53:13 +02:00
|
|
|
}
|
|
|
|
return (undef, undef);
|
|
|
|
}
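A usage sketch for find_rev_before(): map an SVN revision to the nearest already-imported revision at or below it. The revision number and id below are made up.

my ($rev, $commit) = find_rev_before(100, 'git-svn', 1);   # third arg: r100 itself is acceptable
if (defined $rev) {
	print "r$rev was imported as commit $commit\n";
}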
|
|
|
|
|
2006-05-24 10:40:37 +02:00
|
|
|
sub init_vars {
|
|
|
|
$GIT_SVN ||= $ENV{GIT_SVN_ID} || 'git-svn';
|
|
|
|
$GIT_SVN_DIR = "$GIT_DIR/svn/$GIT_SVN";
|
2006-06-13 13:02:23 +02:00
|
|
|
$REVDB = "$GIT_SVN_DIR/.rev_db";
|
2006-05-24 10:40:37 +02:00
|
|
|
$GIT_SVN_INDEX = "$GIT_SVN_DIR/index";
|
|
|
|
$SVN_URL = undef;
|
|
|
|
$SVN_WC = "$GIT_SVN_DIR/tree";
|
2006-06-28 04:39:11 +02:00
|
|
|
%tree_map = ();
|
2006-05-24 10:40:37 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
# convert GetOpt::Long specs for use by git-repo-config
|
|
|
|
sub read_repo_config {
|
|
|
|
return unless -d $GIT_DIR;
|
|
|
|
my $opts = shift;
|
|
|
|
foreach my $o (keys %$opts) {
|
|
|
|
my $v = $opts->{$o};
|
|
|
|
my ($key) = ($o =~ /^([a-z\-]+)/);
|
|
|
|
$key =~ s/-//g;
|
|
|
|
my $arg = 'git-repo-config';
|
|
|
|
$arg .= ' --int' if ($o =~ /[:=]i$/);
|
|
|
|
$arg .= ' --bool' if ($o !~ /[:=][sfi]$/);
|
|
|
|
if (ref $v eq 'ARRAY') {
|
|
|
|
chomp(my @tmp = `$arg --get-all svn.$key`);
|
|
|
|
@$v = @tmp if @tmp;
|
|
|
|
} else {
|
|
|
|
chomp(my $tmp = `$arg --get svn.$key`);
|
|
|
|
if ($tmp && !($arg =~ / --bool / && $tmp eq 'false')) {
|
|
|
|
$$v = $tmp;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
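A sketch of the spec-to-config mapping performed above: the option name loses its dashes and becomes a key under the svn. section, a spec ending in =i or :i adds --int, anything not ending in s, f, or i is treated as a boolean, and array refs are read with --get-all. The specs below mirror real git-svn options but are shown here only as an illustration.

my %opts = ('authors-file|A=s' => \$_authors,     # -> git-repo-config --get svn.authorsfile
            'repack:i'         => \$_repack);     # -> git-repo-config --int --get svn.repack
read_repo_config(\%opts);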
|
|
|
|
|
2006-05-24 11:07:32 +02:00
|
|
|
sub set_default_vals {
|
|
|
|
if (defined $_repack) {
|
|
|
|
$_repack = 1000 if ($_repack <= 0);
|
|
|
|
$_repack_nr = $_repack;
|
2006-06-15 21:50:12 +02:00
|
|
|
$_repack_flags ||= '-d';
|
2006-05-24 11:07:32 +02:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2006-06-13 00:53:13 +02:00
|
|
|
sub read_grafts {
|
|
|
|
my $gr_file = shift;
|
|
|
|
my ($grafts, $comments) = ({}, {});
|
|
|
|
if (open my $fh, '<', $gr_file) {
|
|
|
|
my @tmp;
|
|
|
|
while (<$fh>) {
|
|
|
|
if (/^($sha1)\s+/) {
|
|
|
|
my $c = $1;
|
|
|
|
if (@tmp) {
|
|
|
|
@{$comments->{$c}} = @tmp;
|
|
|
|
@tmp = ();
|
|
|
|
}
|
|
|
|
foreach my $p (split /\s+/, $_) {
|
|
|
|
$grafts->{$c}->{$p} = 1;
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
push @tmp, $_;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
close $fh or croak $!;
|
|
|
|
@{$comments->{'END'}} = @tmp if @tmp;
|
|
|
|
}
|
|
|
|
return ($grafts, $comments);
|
|
|
|
}
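For reference, the file read above is the standard $GIT_DIR/info/grafts format: a commit sha1 followed by its replacement parents on one line, with any other line carried along as a comment. The sha1s in the sketch below are fabricated.

# 1111111111111111111111111111111111111111 2222222222222222222222222222222222222222
# lines that do not start with a sha1 are kept and re-emitted by write_grafts()
my ($grafts, $comments) = read_grafts("$GIT_DIR/info/grafts");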
|
|
|
|
|
|
|
|
sub write_grafts {
|
|
|
|
my ($grafts, $comments, $gr_file) = @_;
|
|
|
|
|
|
|
|
open my $fh, '>', $gr_file or croak $!;
|
|
|
|
foreach my $c (sort keys %$grafts) {
|
|
|
|
if ($comments->{$c}) {
|
|
|
|
print $fh $_ foreach @{$comments->{$c}};
|
|
|
|
}
|
|
|
|
my $p = $grafts->{$c};
|
2006-06-28 04:39:11 +02:00
|
|
|
my %x; # real parents
|
2006-06-13 00:53:13 +02:00
|
|
|
delete $p->{$c}; # commits are not self-reproducing...
|
2006-12-15 19:59:54 +01:00
|
|
|
my $ch = command_output_pipe(qw/cat-file commit/, $c);
|
2006-06-13 00:53:13 +02:00
|
|
|
while (<$ch>) {
|
2006-06-28 04:39:11 +02:00
|
|
|
if (/^parent ($sha1)/) {
|
|
|
|
$x{$1} = $p->{$1} = 1;
|
2006-06-13 00:53:13 +02:00
|
|
|
} else {
|
2006-06-28 04:39:11 +02:00
|
|
|
last unless /^\S/;
|
2006-06-13 00:53:13 +02:00
|
|
|
}
|
|
|
|
}
|
2006-12-15 19:59:54 +01:00
|
|
|
eval { command_close_pipe($ch) }; # breaking the pipe
|
2006-06-28 04:39:11 +02:00
|
|
|
|
|
|
|
# if real parents are the only ones in the grafts, drop it
|
|
|
|
next if join(' ',sort keys %$p) eq join(' ',sort keys %x);
|
|
|
|
|
|
|
|
my (@ip, @jp, $mb);
|
|
|
|
my %del = %x;
|
|
|
|
@ip = @jp = keys %$p;
|
|
|
|
foreach my $i (@ip) {
|
|
|
|
next if $del{$i} || $p->{$i} == 2;
|
|
|
|
foreach my $j (@jp) {
|
|
|
|
next if $i eq $j || $del{$j} || $p->{$j} == 2;
|
2006-12-15 19:59:54 +01:00
|
|
|
$mb = eval { command('merge-base', $i, $j) };
|
2006-06-28 04:39:11 +02:00
|
|
|
next unless $mb;
|
|
|
|
chomp $mb;
|
|
|
|
next if $x{$mb};
|
|
|
|
if ($mb eq $j) {
|
|
|
|
delete $p->{$i};
|
|
|
|
$del{$i} = 1;
|
|
|
|
} elsif ($mb eq $i) {
|
|
|
|
delete $p->{$j};
|
|
|
|
$del{$j} = 1;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
# if real parents are the only ones in the grafts, drop it
|
|
|
|
next if join(' ',sort keys %$p) eq join(' ',sort keys %x);
|
|
|
|
|
2006-06-13 00:53:13 +02:00
|
|
|
print $fh $c, ' ', join(' ', sort keys %$p),"\n";
|
|
|
|
}
|
|
|
|
if ($comments->{'END'}) {
|
|
|
|
print $fh $_ foreach @{$comments->{'END'}};
|
|
|
|
}
|
|
|
|
close $fh or croak $!;
|
|
|
|
}
|
|
|
|
|
|
|
|
sub read_url_paths_all {
|
|
|
|
my ($l_map, $pfx, $p) = @_;
|
|
|
|
my @dir;
|
|
|
|
foreach (<$p/*>) {
|
|
|
|
if (-r "$_/info/url") {
|
|
|
|
$pfx .= '/' if $pfx && $pfx !~ m!/$!;
|
|
|
|
my $id = $pfx . basename $_;
|
|
|
|
my $url = file_to_s("$_/info/url");
|
|
|
|
my ($u, $p) = repo_path_split($url);
|
|
|
|
$l_map->{$u}->{$p} = $id;
|
|
|
|
} elsif (-d $_) {
|
|
|
|
push @dir, $_;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
foreach (@dir) {
|
|
|
|
my $x = $_;
|
|
|
|
$x =~ s!^\Q$GIT_DIR\E/svn/!!o;
|
|
|
|
read_url_paths_all($l_map, $x, $_);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
# this one only gets ids that have been imported, not new ones
|
2006-06-13 00:53:13 +02:00
|
|
|
sub read_url_paths {
|
|
|
|
my $l_map = {};
|
|
|
|
git_svn_each(sub { my $x = shift;
|
2006-06-15 06:24:03 +02:00
|
|
|
my $url = file_to_s("$GIT_DIR/svn/$x/info/url");
|
|
|
|
my ($u, $p) = repo_path_split($url);
|
2006-06-13 00:53:13 +02:00
|
|
|
$l_map->{$u}->{$p} = $x;
|
|
|
|
});
|
|
|
|
return $l_map;
|
|
|
|
}
|
|
|
|
|
2006-06-01 11:35:44 +02:00
|
|
|
sub extract_metadata {
|
2006-06-28 04:39:11 +02:00
|
|
|
my $id = shift or return (undef, undef, undef);
|
2006-06-01 11:35:44 +02:00
|
|
|
my ($url, $rev, $uuid) = ($id =~ /^git-svn-id:\s(\S+?)\@(\d+)
|
|
|
|
\s([a-f\d\-]+)$/x);
|
2006-11-23 23:54:04 +01:00
|
|
|
if (!defined $rev || !$uuid || !$url) {
|
2006-06-01 11:35:44 +02:00
|
|
|
# some of the original repositories I made had
|
2006-07-10 07:50:18 +02:00
|
|
|
# identifiers like this:
|
2006-06-01 11:35:44 +02:00
|
|
|
($rev, $uuid) = ($id =~/^git-svn-id:\s(\d+)\@([a-f\d\-]+)/);
|
|
|
|
}
|
|
|
|
return ($url, $rev, $uuid);
|
|
|
|
}
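For illustration, the primary metadata form parsed above (URL, revision, repository UUID); the URL and UUID here are made up.

my ($url, $rev, $uuid) = extract_metadata(
	'git-svn-id: svn://example.com/repo/trunk@1234 deadbeef-dead-beef-dead-beefdeadbeef');
# $url  = 'svn://example.com/repo/trunk'
# $rev  = 1234
# $uuid = 'deadbeef-dead-beef-dead-beefdeadbeef'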
|
|
|
|
|
2006-06-28 04:39:11 +02:00
|
|
|
sub cmt_metadata {
|
|
|
|
return extract_metadata((grep(/^git-svn-id: /,
|
2006-12-15 19:59:54 +01:00
|
|
|
command(qw/cat-file commit/, shift)))[-1]);
|
2006-06-28 04:39:11 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
sub get_commit_time {
|
|
|
|
my $cmt = shift;
|
2006-12-15 19:59:54 +01:00
|
|
|
my $fh = command_output_pipe(qw/rev-list --pretty=raw -n1/, $cmt);
|
2006-06-28 04:39:11 +02:00
|
|
|
while (<$fh>) {
|
|
|
|
/^committer\s(?:.+) (\d+) ([\-\+]?\d+)$/ or next;
|
|
|
|
my ($s, $tz) = ($1, $2);
|
|
|
|
if ($tz =~ s/^\+//) {
|
|
|
|
$s += tz_to_s_offset($tz);
|
|
|
|
} elsif ($tz =~ s/^\-//) {
|
|
|
|
$s -= tz_to_s_offset($tz);
|
|
|
|
}
|
2006-12-15 19:59:54 +01:00
|
|
|
eval { command_close_pipe($fh) };
|
2006-06-28 04:39:11 +02:00
|
|
|
return $s;
|
|
|
|
}
|
|
|
|
die "Can't get commit time for commit: $cmt\n";
|
|
|
|
}
|
|
|
|
|
2006-06-01 11:35:44 +02:00
|
|
|
sub tz_to_s_offset {
|
|
|
|
my ($tz) = @_;
|
|
|
|
$tz =~ s/(\d\d)$//;
|
|
|
|
return ($1 * 60) + ($tz * 3600);
|
|
|
|
}
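A worked example of the conversion above: the last two digits are minutes and the rest are hours, so '0530' becomes 5*3600 + 30*60 = 19800 seconds. The sign is applied by the callers (get_commit_time() and get_author_info()).

my $secs = tz_to_s_offset('0530');   # (30 * 60) + (5 * 3600) == 19800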
|
|
|
|
|
2006-11-29 03:51:40 +01:00
|
|
|
# adapted from pager.c
|
|
|
|
sub config_pager {
|
|
|
|
$_pager ||= $ENV{GIT_PAGER} || $ENV{PAGER};
|
|
|
|
if (!defined $_pager) {
|
|
|
|
$_pager = 'less';
|
|
|
|
} elsif (length $_pager == 0 || $_pager eq 'cat') {
|
|
|
|
$_pager = undef;
|
2006-06-01 11:35:44 +02:00
|
|
|
}
|
2006-11-29 03:51:40 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
sub run_pager {
|
|
|
|
return unless -t *STDOUT;
|
2006-06-01 11:35:44 +02:00
|
|
|
pipe my $rfd, my $wfd or return;
|
|
|
|
defined(my $pid = fork) or croak $!;
|
|
|
|
if (!$pid) {
|
|
|
|
open STDOUT, '>&', $wfd or croak $!;
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
open STDIN, '<&', $rfd or croak $!;
|
2006-11-29 03:51:40 +01:00
|
|
|
$ENV{LESS} ||= 'FRSX';
|
|
|
|
exec $_pager or croak "Can't run pager: $! ($_pager)\n";
|
2006-06-01 11:35:44 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
sub get_author_info {
|
|
|
|
my ($dest, $author, $t, $tz) = @_;
|
|
|
|
$author =~ s/(?:^\s*|\s*$)//g;
|
2006-06-16 03:48:22 +02:00
|
|
|
$dest->{a_raw} = $author;
|
2006-06-01 11:35:44 +02:00
|
|
|
my $_a;
|
|
|
|
if ($_authors) {
|
|
|
|
$_a = $rusers{$author} || undef;
|
|
|
|
}
|
|
|
|
if (!$_a) {
|
|
|
|
($_a) = ($author =~ /<([^>]+)\@[^>]+>$/);
|
|
|
|
}
|
|
|
|
$dest->{t} = $t;
|
|
|
|
$dest->{tz} = $tz;
|
|
|
|
$dest->{a} = $_a;
|
|
|
|
# Date::Parse isn't in the standard Perl distro :(
|
|
|
|
if ($tz =~ s/^\+//) {
|
|
|
|
$t += tz_to_s_offset($tz);
|
|
|
|
} elsif ($tz =~ s/^\-//) {
|
|
|
|
$t -= tz_to_s_offset($tz);
|
|
|
|
}
|
|
|
|
$dest->{t_utc} = $t;
|
|
|
|
}
|
|
|
|
|
|
|
|
sub process_commit {
|
|
|
|
my ($c, $r_min, $r_max, $defer) = @_;
|
|
|
|
if (defined $r_min && defined $r_max) {
|
|
|
|
if ($r_min == $c->{r} && $r_min == $r_max) {
|
|
|
|
show_commit($c);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
return 1 if $r_min == $r_max;
|
|
|
|
if ($r_min < $r_max) {
|
|
|
|
# we need to reverse the print order
|
|
|
|
return 0 if (defined $_limit && --$_limit < 0);
|
|
|
|
push @$defer, $c;
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
if ($r_min != $r_max) {
|
|
|
|
return 1 if ($r_min < $c->{r});
|
|
|
|
return 1 if ($r_max > $c->{r});
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return 0 if (defined $_limit && --$_limit < 0);
|
|
|
|
show_commit($c);
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
sub show_commit {
|
|
|
|
my $c = shift;
|
|
|
|
if ($_oneline) {
|
|
|
|
my $x = "\n";
|
|
|
|
if (my $l = $c->{l}) {
|
|
|
|
while ($l->[0] =~ /^\s*$/) { shift @$l }
|
|
|
|
$x = $l->[0];
|
|
|
|
}
|
|
|
|
$_l_fmt ||= 'A' . length($c->{r});
|
|
|
|
print 'r',pack($_l_fmt, $c->{r}),' | ';
|
|
|
|
print "$c->{c} | " if $_show_commit;
|
|
|
|
print $x;
|
|
|
|
} else {
|
|
|
|
show_commit_normal($c);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2006-10-11 20:53:22 +02:00
|
|
|
sub show_commit_changed_paths {
|
|
|
|
my ($c) = @_;
|
|
|
|
return unless $c->{changed};
|
|
|
|
print "Changed paths:\n", @{$c->{changed}};
|
|
|
|
}
|
|
|
|
|
2006-06-01 11:35:44 +02:00
|
|
|
sub show_commit_normal {
|
|
|
|
my ($c) = @_;
|
|
|
|
print '-' x72, "\nr$c->{r} | ";
|
|
|
|
print "$c->{c} | " if $_show_commit;
|
|
|
|
print "$c->{a} | ", strftime("%Y-%m-%d %H:%M:%S %z (%a, %d %b %Y)",
|
|
|
|
localtime($c->{t_utc})), ' | ';
|
|
|
|
my $nr_line = 0;
|
|
|
|
|
|
|
|
if (my $l = $c->{l}) {
|
2006-10-11 20:53:22 +02:00
|
|
|
while ($l->[$#$l] eq "\n" && $#$l > 0
|
|
|
|
&& $l->[($#$l - 1)] eq "\n") {
|
2006-06-01 11:35:44 +02:00
|
|
|
pop @$l;
|
|
|
|
}
|
|
|
|
$nr_line = scalar @$l;
|
|
|
|
if (!$nr_line) {
|
|
|
|
print "1 line\n\n\n";
|
|
|
|
} else {
|
|
|
|
if ($nr_line == 1) {
|
|
|
|
$nr_line = '1 line';
|
|
|
|
} else {
|
|
|
|
$nr_line .= ' lines';
|
|
|
|
}
|
2006-10-11 20:53:22 +02:00
|
|
|
print $nr_line, "\n";
|
|
|
|
show_commit_changed_paths($c);
|
|
|
|
print "\n";
|
2006-06-01 11:35:44 +02:00
|
|
|
print $_ foreach @$l;
|
|
|
|
}
|
|
|
|
} else {
|
2006-10-11 20:53:22 +02:00
|
|
|
print "1 line\n";
|
|
|
|
show_commit_changed_paths($c);
|
|
|
|
print "\n";
|
2006-06-01 11:35:44 +02:00
|
|
|
|
|
|
|
}
|
|
|
|
foreach my $x (qw/raw diff/) {
|
|
|
|
if ($c->{$x}) {
|
|
|
|
print "\n";
|
|
|
|
print $_ foreach @{$c->{$x}}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2006-11-24 10:38:04 +01:00
|
|
|
sub _simple_prompt {
|
|
|
|
my ($cred, $realm, $default_username, $may_save, $pool) = @_;
|
|
|
|
$may_save = undef if $_no_auth_cache;
|
|
|
|
$default_username = $_username if defined $_username;
|
|
|
|
if (defined $default_username && length $default_username) {
|
|
|
|
if (defined $realm && length $realm) {
|
|
|
|
print "Authentication realm: $realm\n";
|
|
|
|
}
|
|
|
|
$cred->username($default_username);
|
|
|
|
} else {
|
|
|
|
_username_prompt($cred, $realm, $may_save, $pool);
|
|
|
|
}
|
|
|
|
$cred->password(_read_password("Password for '" .
|
|
|
|
$cred->username . "': ", $realm));
|
|
|
|
$cred->may_save($may_save);
|
|
|
|
$SVN::_Core::SVN_NO_ERROR;
|
|
|
|
}
|
|
|
|
|
|
|
|
sub _ssl_server_trust_prompt {
|
|
|
|
my ($cred, $realm, $failures, $cert_info, $may_save, $pool) = @_;
|
|
|
|
$may_save = undef if $_no_auth_cache;
|
|
|
|
print "Error validating server certificate for '$realm':\n";
|
|
|
|
if ($failures & $SVN::Auth::SSL::UNKNOWNCA) {
|
|
|
|
print " - The certificate is not issued by a trusted ",
|
|
|
|
"authority. Use the\n",
|
|
|
|
" fingerprint to validate the certificate manually!\n";
|
|
|
|
}
|
|
|
|
if ($failures & $SVN::Auth::SSL::CNMISMATCH) {
|
|
|
|
print " - The certificate hostname does not match.\n";
|
|
|
|
}
|
|
|
|
if ($failures & $SVN::Auth::SSL::NOTYETVALID) {
|
|
|
|
print " - The certificate is not yet valid.\n";
|
|
|
|
}
|
|
|
|
if ($failures & $SVN::Auth::SSL::EXPIRED) {
|
|
|
|
print " - The certificate has expired.\n";
|
|
|
|
}
|
|
|
|
if ($failures & $SVN::Auth::SSL::OTHER) {
|
|
|
|
print " - The certificate has an unknown error.\n";
|
|
|
|
}
|
|
|
|
printf( "Certificate information:\n".
|
|
|
|
" - Hostname: %s\n".
|
|
|
|
" - Valid: from %s until %s\n".
|
|
|
|
" - Issuer: %s\n".
|
|
|
|
" - Fingerprint: %s\n",
|
|
|
|
map $cert_info->$_, qw(hostname valid_from valid_until
|
|
|
|
issuer_dname fingerprint) );
|
|
|
|
my $choice;
|
|
|
|
prompt:
|
|
|
|
print $may_save ?
|
|
|
|
"(R)eject, accept (t)emporarily or accept (p)ermanently? " :
|
|
|
|
"(R)eject or accept (t)emporarily? ";
|
|
|
|
$choice = lc(substr(<STDIN> || 'R', 0, 1));
|
|
|
|
if ($choice =~ /^t$/i) {
|
|
|
|
$cred->may_save(undef);
|
|
|
|
} elsif ($choice =~ /^r$/i) {
|
|
|
|
return -1;
|
|
|
|
} elsif ($may_save && $choice =~ /^p$/i) {
|
|
|
|
$cred->may_save($may_save);
|
|
|
|
} else {
|
|
|
|
goto prompt;
|
|
|
|
}
|
|
|
|
$cred->accepted_failures($failures);
|
|
|
|
$SVN::_Core::SVN_NO_ERROR;
|
|
|
|
}
|
|
|
|
|
|
|
|
sub _ssl_client_cert_prompt {
|
|
|
|
my ($cred, $realm, $may_save, $pool) = @_;
|
|
|
|
$may_save = undef if $_no_auth_cache;
|
|
|
|
print "Client certificate filename: ";
|
|
|
|
chomp(my $filename = <STDIN>);
|
|
|
|
$cred->cert_file($filename);
|
|
|
|
$cred->may_save($may_save);
|
|
|
|
$SVN::_Core::SVN_NO_ERROR;
|
|
|
|
}
|
|
|
|
|
|
|
|
sub _ssl_client_cert_pw_prompt {
|
|
|
|
my ($cred, $realm, $may_save, $pool) = @_;
|
|
|
|
$may_save = undef if $_no_auth_cache;
|
|
|
|
$cred->password(_read_password("Password: ", $realm));
|
|
|
|
$cred->may_save($may_save);
|
|
|
|
$SVN::_Core::SVN_NO_ERROR;
|
|
|
|
}
|
|
|
|
|
|
|
|
sub _username_prompt {
|
|
|
|
my ($cred, $realm, $may_save, $pool) = @_;
|
|
|
|
$may_save = undef if $_no_auth_cache;
|
|
|
|
if (defined $realm && length $realm) {
|
|
|
|
print "Authentication realm: $realm\n";
|
|
|
|
}
|
|
|
|
my $username;
|
|
|
|
if (defined $_username) {
|
|
|
|
$username = $_username;
|
|
|
|
} else {
|
|
|
|
print "Username: ";
|
|
|
|
chomp($username = <STDIN>);
|
|
|
|
}
|
|
|
|
$cred->username($username);
|
|
|
|
$cred->may_save($may_save);
|
|
|
|
$SVN::_Core::SVN_NO_ERROR;
|
|
|
|
}
|
|
|
|
|
|
|
|
sub _read_password {
|
|
|
|
my ($prompt, $realm) = @_;
|
|
|
|
print $prompt;
|
|
|
|
require Term::ReadKey;
|
|
|
|
Term::ReadKey::ReadMode('noecho');
|
|
|
|
my $password = '';
|
|
|
|
while (defined(my $key = Term::ReadKey::ReadKey(0))) {
|
|
|
|
last if $key =~ /[\012\015]/; # \n\r
|
|
|
|
$password .= $key;
|
|
|
|
}
|
|
|
|
Term::ReadKey::ReadMode('restore');
|
|
|
|
print "\n";
|
|
|
|
$password;
|
|
|
|
}
|
|
|
|
|
|
|
|
sub libsvn_connect {
|
|
|
|
my ($url) = @_;
|
2006-11-25 07:38:17 +01:00
|
|
|
SVN::_Core::svn_config_ensure($_config_dir, undef);
|
|
|
|
my ($baton, $callbacks) = SVN::Core::auth_open_helper([
|
|
|
|
SVN::Client::get_simple_provider(),
|
|
|
|
SVN::Client::get_ssl_server_trust_file_provider(),
|
|
|
|
SVN::Client::get_simple_prompt_provider(
|
|
|
|
\&_simple_prompt, 2),
|
|
|
|
SVN::Client::get_ssl_client_cert_prompt_provider(
|
|
|
|
\&_ssl_client_cert_prompt, 2),
|
|
|
|
SVN::Client::get_ssl_client_cert_pw_prompt_provider(
|
|
|
|
\&_ssl_client_cert_pw_prompt, 2),
|
|
|
|
SVN::Client::get_username_provider(),
|
|
|
|
SVN::Client::get_ssl_server_trust_prompt_provider(
|
|
|
|
\&_ssl_server_trust_prompt),
|
|
|
|
SVN::Client::get_username_prompt_provider(
|
|
|
|
\&_username_prompt, 2),
|
|
|
|
]);
|
2006-11-27 22:20:53 +01:00
|
|
|
my $config = SVN::Core::config_get_config($_config_dir);
|
2006-11-25 07:38:17 +01:00
|
|
|
my $ra = SVN::Ra->new(url => $url, auth => $baton,
|
2006-11-27 22:20:53 +01:00
|
|
|
config => $config,
|
2006-11-25 07:38:17 +01:00
|
|
|
pool => SVN::Pool->new,
|
|
|
|
auth_provider_callbacks => $callbacks);
|
2006-11-28 06:44:48 +01:00
|
|
|
|
|
|
|
my $df = $ENV{GIT_SVN_DELTA_FETCH};
|
|
|
|
if (defined $df) {
|
|
|
|
$_xfer_delta = $df;
|
|
|
|
} else {
|
|
|
|
$_xfer_delta = ($url =~ m#^file://#) ? undef : 1;
|
|
|
|
}
|
2006-11-25 07:38:17 +01:00
|
|
|
$ra->{svn_path} = $url;
|
|
|
|
$ra->{repos_root} = $ra->get_repos_root;
|
|
|
|
$ra->{svn_path} =~ s#^\Q$ra->{repos_root}\E/*##;
|
|
|
|
push @repo_path_split_cache, qr/^(\Q$ra->{repos_root}\E)/;
|
|
|
|
return $ra;
|
|
|
|
}
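A minimal usage sketch (the URL is made up): besides the normal SVN::Ra API, the object returned above also carries the repos_root and svn_path fields set by libsvn_connect().

my $ra = libsvn_connect('svn://example.com/repo/trunk');
print "repository root: $ra->{repos_root}\n";
print "path within it:  $ra->{svn_path}\n";
print "latest revision: ", $ra->get_latest_revnum, "\n";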
|
|
|
|
|
2006-12-08 11:20:17 +01:00
|
|
|
sub libsvn_can_do_switch {
|
|
|
|
unless (defined $_svn_can_do_switch) {
|
|
|
|
my $pool = SVN::Pool->new;
|
|
|
|
my $rep = eval {
|
|
|
|
$SVN->do_switch(1, '', 0, $SVN->{url},
|
|
|
|
SVN::Delta::Editor->new, $pool);
|
|
|
|
};
|
|
|
|
if ($@) {
|
|
|
|
$_svn_can_do_switch = 0;
|
|
|
|
} else {
|
|
|
|
$rep->abort_report($pool);
|
|
|
|
$_svn_can_do_switch = 1;
|
|
|
|
}
|
|
|
|
$pool->clear;
|
|
|
|
}
|
|
|
|
$_svn_can_do_switch;
|
|
|
|
}
|
|
|
|
|
2006-11-25 07:38:17 +01:00
|
|
|
sub libsvn_dup_ra {
|
|
|
|
my ($ra) = @_;
|
2006-11-27 22:20:53 +01:00
|
|
|
SVN::Ra->new(map { $_ => $ra->{$_} } qw/config url
|
|
|
|
auth auth_provider_callbacks repos_root svn_path/);
|
|
|
|
}
|
|
|
|
|
|
|
|
sub libsvn_get_file {
|
2006-12-12 23:47:00 +01:00
|
|
|
my ($gui, $f, $rev, $chg, $untracked) = @_;
|
2006-11-25 07:38:17 +01:00
|
|
|
$f =~ s#^/##;
|
2006-11-05 06:51:10 +01:00
|
|
|
print "\t$chg\t$f\n" unless $_q;
|
|
|
|
|
2006-06-15 22:36:12 +02:00
|
|
|
my ($hash, $pid, $in, $out);
|
|
|
|
my $pool = SVN::Pool->new;
|
2006-06-15 22:36:12 +02:00
|
|
|
defined($pid = open3($in, $out, '>&STDERR',
|
|
|
|
qw/git-hash-object -w --stdin/)) or croak $!;
|
git-svn: SVN 1.1.x library compatibility
Tested on a plain Ubuntu Hoary installation
using subversion 1.1.1-2ubuntu3
1.1.x issues I had to deal with:
* Avoid the noisy command-line client compatibility check if we
use the libraries.
* get_log() arguments differ (now using a nice wrapper from
Junio's suggestion)
* get_file() is picky about what kind of file handles it gets,
so I ended up redirecting STDOUT. I'm probably overflushing
my file handles, but that's the safest thing to do...
* BDB kept segfaulting on me during tests, so svnadmin will use FSFS
whenever we can.
* If somebody used an expanded CVS $Id$ line inside a file, then
propsetting it to use svn:keywords will cause the original CVS
$Id$ to be retained when asked for the original file. As far as
I can see, this is a server-side issue. We won't care in the
test anymore, as long as it's not expanded by SVN, a static
CVS $Id$ line is fine.
While we're at making ourselves more compatible, avoid grep
along with the -q flag, which is GNU-specific. (grep avoidance
tip from Junio, too)
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-06-28 12:07:14 +02:00
|
|
|
# redirect STDOUT for SVN 1.1.x compatibility
|
|
|
|
open my $stdout, '>&', \*STDOUT or croak $!;
|
|
|
|
open STDOUT, '>&', $in or croak $!;
|
|
|
|
my ($r, $props) = $SVN->get_file($f, $rev, \*STDOUT, $pool);
|
2006-06-15 22:36:12 +02:00
|
|
|
$in->flush == 0 or croak $!;
|
|
|
|
open STDOUT, '>&', $stdout or croak $!;
|
2006-06-15 22:36:12 +02:00
|
|
|
close $in or croak $!;
|
|
|
|
close $stdout or croak $!;
|
|
|
|
$pool->clear;
|
2006-06-15 22:36:12 +02:00
|
|
|
chomp($hash = do { local $/; <$out> });
|
|
|
|
close $out or croak $!;
|
|
|
|
waitpid $pid, 0;
|
|
|
|
$hash =~ /^$sha1$/o or die "not a sha1: $hash\n";
|
|
|
|
|
|
|
|
my $mode = exists $props->{'svn:executable'} ? '100755' : '100644';
|
|
|
|
if (exists $props->{'svn:special'}) {
|
|
|
|
$mode = '120000';
|
2006-12-15 19:59:54 +01:00
|
|
|
my $link = `git-cat-file blob $hash`; # no chomping symlinks
|
|
|
|
$link =~ s/^link // or die "svn:special file with contents: <",
|
|
|
|
$link, "> is not understood\n";
|
2006-06-15 22:36:12 +02:00
|
|
|
defined($pid = open3($in, $out, '>&STDERR',
|
|
|
|
qw/git-hash-object -w --stdin/)) or croak $!;
|
|
|
|
print $in $link;
|
|
|
|
$in->flush == 0 or croak $!;
|
|
|
|
close $in or croak $!;
|
|
|
|
chomp($hash = do { local $/; <$out> });
|
|
|
|
close $out or croak $!;
|
|
|
|
waitpid $pid, 0;
|
|
|
|
$hash =~ /^$sha1$/o or die "not a sha1: $hash\n";
|
|
|
|
}
|
2006-12-12 23:47:00 +01:00
|
|
|
%{$untracked->{file_prop}->{$f}} = %$props;
|
2006-11-25 07:38:17 +01:00
|
|
|
print $gui $mode,' ',$hash,"\t",$f,"\0" or croak $!;
|
|
|
|
}
|
|
|
|
|
2006-12-12 23:47:00 +01:00
|
|
|
sub uri_encode {
|
|
|
|
my ($f) = @_;
|
|
|
|
$f =~ s#([^a-zA-Z0-9\*!\:_\./\-])#uc sprintf("%%%02x",ord($1))#eg;
|
|
|
|
$f
|
|
|
|
}
|
|
|
|
|
|
|
|
sub uri_decode {
|
|
|
|
my ($f) = @_;
|
|
|
|
$f =~ tr/+/ /;
|
|
|
|
$f =~ s/%([A-F0-9]{2})/chr hex($1)/ge;
|
|
|
|
$f
|
|
|
|
}
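A small round trip through the helpers above; note that uri_decode() also maps '+' to a space, which uri_encode() itself never produces.

my $enc = uri_encode('branches/has space');   # 'branches/has%20space'
my $dec = uri_decode($enc);                   # 'branches/has space'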
|
|
|
|
|
|
|
|
sub libsvn_log_entry {
|
2006-12-12 23:47:00 +01:00
|
|
|
my ($rev, $author, $date, $msg, $parents, $untracked) = @_;
|
|
|
|
my ($Y,$m,$d,$H,$M,$S) = ($date =~ /^(\d{4})\-(\d\d)\-(\d\d)T
|
|
|
|
(\d\d)\:(\d\d)\:(\d\d).\d+Z$/x)
|
|
|
|
or die "Unable to parse date: $date\n";
|
2006-12-12 05:25:58 +01:00
|
|
|
if (defined $author && length $author > 0 &&
|
|
|
|
defined $_authors && ! defined $users{$author}) {
|
2006-06-13 00:23:48 +02:00
|
|
|
die "Author: $author not defined in $_authors file\n";
|
|
|
|
}
|
2006-08-11 20:11:29 +02:00
|
|
|
$msg = '' if ($rev == 0 && !defined $msg);
|
2006-12-12 23:47:00 +01:00
|
|
|
|
|
|
|
open my $un, '>>', "$GIT_SVN_DIR/unhandled.log" or croak $!;
|
|
|
|
my $h;
|
|
|
|
print $un "r$rev\n" or croak $!;
|
|
|
|
$h = $untracked->{empty};
|
|
|
|
foreach (sort keys %$h) {
|
|
|
|
my $act = $h->{$_} ? '+empty_dir' : '-empty_dir';
|
|
|
|
print $un " $act: ", uri_encode($_), "\n" or croak $!;
|
|
|
|
warn "W: $act: $_\n";
|
|
|
|
}
|
|
|
|
foreach my $t (qw/dir_prop file_prop/) {
|
|
|
|
$h = $untracked->{$t} or next;
|
|
|
|
foreach my $path (sort keys %$h) {
|
|
|
|
my $ppath = $path eq '' ? '.' : $path;
|
|
|
|
foreach my $prop (sort keys %{$h->{$path}}) {
|
|
|
|
next if $SKIP{$prop};
|
|
|
|
my $v = $h->{$path}->{$prop};
|
|
|
|
if (defined $v) {
|
|
|
|
print $un " +$t: ",
|
|
|
|
uri_encode($ppath), ' ',
|
|
|
|
uri_encode($prop), ' ',
|
|
|
|
uri_encode($v), "\n"
|
|
|
|
or croak $!;
|
|
|
|
} else {
|
|
|
|
print $un " -$t: ",
|
|
|
|
uri_encode($ppath), ' ',
|
|
|
|
uri_encode($prop), "\n"
|
|
|
|
or croak $!;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
foreach my $t (qw/absent_file absent_directory/) {
|
|
|
|
$h = $untracked->{$t} or next;
|
|
|
|
foreach my $parent (sort keys %$h) {
|
|
|
|
foreach my $path (sort @{$h->{$parent}}) {
|
|
|
|
print $un " $t: ",
|
|
|
|
uri_encode("$parent/$path"), "\n"
|
|
|
|
or croak $!;
|
|
|
|
warn "W: $t: $parent/$path ",
|
|
|
|
"Insufficient permissions?\n";
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
# revprops (make this optional? it's an extra network trip...)
|
|
|
|
my $pool = SVN::Pool->new;
|
|
|
|
my $rp = $SVN->rev_proplist($rev, $pool);
|
|
|
|
foreach (sort keys %$rp) {
|
|
|
|
next if /^svn:(?:author|date|log)$/;
|
|
|
|
print $un " rev_prop: ", uri_encode($_), ' ',
|
|
|
|
uri_encode($rp->{$_}), "\n";
|
|
|
|
}
|
|
|
|
$pool->clear;
|
|
|
|
close $un or croak $!;
|
|
|
|
|
|
|
|
{ revision => $rev, date => "+0000 $Y-$m-$d $H:$M:$S",
|
|
|
|
author => $author, msg => $msg."\n", parents => $parents || [],
|
|
|
|
revprops => $rp }
|
2006-06-13 00:23:48 +02:00
|
|
|
}
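# Illustrative aside (not part of the original script): libsvn_log_entry
# records whatever git cannot represent (empty directories, svn properties,
# revision properties) in $GIT_SVN_DIR/unhandled.log, one "r<rev>" stanza
# per revision, with every path, name, and value passed through uri_encode.
# A hypothetical stanza might look roughly like:
#
#   r42
#     +empty_dir: branches/empty%20dir
#     +dir_prop: . svn:ignore *.o%0a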
|
|
|
|
|
|
|
|
sub process_rm {
|
2006-11-28 11:50:17 +01:00
|
|
|
my ($gui, $last_commit, $f, $q) = @_;
|
2006-06-13 00:23:48 +02:00
|
|
|
# remove entire directories.
|
2006-12-15 19:59:54 +01:00
|
|
|
if (command('ls-tree',$last_commit,'--',$f) =~ /^040000 tree/) {
|
|
|
|
my ($ls, $ctx) = command_output_pipe(qw/ls-tree
|
|
|
|
-r --name-only -z/,
|
|
|
|
$last_commit,'--',$f);
|
2006-06-13 00:23:48 +02:00
|
|
|
local $/ = "\0";
|
|
|
|
while (<$ls>) {
|
|
|
|
print $gui '0 ',0 x 40,"\t",$_ or croak $!;
|
2006-11-28 11:50:17 +01:00
|
|
|
print "\tD\t$_\n" unless $q;
|
2006-06-13 00:23:48 +02:00
|
|
|
}
|
2006-11-28 11:50:17 +01:00
|
|
|
print "\tD\t$f/\n" unless $q;
|
2006-12-15 19:59:54 +01:00
|
|
|
command_close_pipe($ls, $ctx);
|
2006-12-12 23:47:00 +01:00
|
|
|
return $SVN::Node::dir;
|
2006-06-13 00:23:48 +02:00
|
|
|
} else {
|
|
|
|
print $gui '0 ',0 x 40,"\t",$f,"\0" or croak $!;
|
2006-11-28 11:50:17 +01:00
|
|
|
print "\tD\t$f\n" unless $q;
|
2006-12-12 23:47:00 +01:00
|
|
|
return $SVN::Node::file;
|
2006-06-13 00:23:48 +02:00
|
|
|
}
|
|
|
|
}
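# Illustrative aside (not part of the original script): process_rm writes
# one record per deleted path to the `git update-index -z --index-info`
# pipe opened by its caller; a mode of 0 plus an all-zero SHA-1 is how
# update-index is told to drop an entry.  A sketch of such a record
# (NUL-terminated because of -z), with a hypothetical path:
#
#   my $record = '0 ' . ('0' x 40) . "\ttrunk/old-file.c\0";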
|
|
|
|
|
|
|
|
sub libsvn_fetch {
|
2006-11-28 06:44:48 +01:00
|
|
|
$_xfer_delta ? libsvn_fetch_delta(@_) : libsvn_fetch_full(@_);
|
|
|
|
}
|
|
|
|
|
|
|
|
sub libsvn_fetch_delta {
|
|
|
|
my ($last_commit, $paths, $rev, $author, $date, $msg) = @_;
|
|
|
|
my $pool = SVN::Pool->new;
|
2006-11-28 11:50:17 +01:00
|
|
|
my $ed = SVN::Git::Fetcher->new({ c => $last_commit, q => $_q });
|
2006-11-28 06:44:48 +01:00
|
|
|
my $reporter = $SVN->do_update($rev, '', 1, $ed, $pool);
|
|
|
|
my @lock = $SVN::Core::VERSION ge '1.2.0' ? (undef) : ();
|
|
|
|
my (undef, $last_rev, undef) = cmt_metadata($last_commit);
|
|
|
|
$reporter->set_path('', $last_rev, 0, @lock, $pool);
|
|
|
|
$reporter->finish_report($pool);
|
|
|
|
$pool->clear;
|
2006-11-28 23:06:05 +01:00
|
|
|
unless ($ed->{git_commit_ok}) {
|
|
|
|
die "SVN connection failed somewhere...\n";
|
|
|
|
}
|
2006-12-12 23:47:00 +01:00
|
|
|
libsvn_log_entry($rev, $author, $date, $msg, [$last_commit], $ed);
|
2006-11-28 06:44:48 +01:00
|
|
|
}
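# Illustrative aside (not part of the original script): the delta fetch
# above follows the usual SVN reporter dance: do_update() asks for $rev,
# set_path('', $last_rev, ...) tells the server what we already have, and
# finish_report() drives the SVN::Git::Fetcher editor with only the
# changes in between.  The @lock juggling exists because set_path() grew
# an extra lock-token argument in SVN 1.2, so an undef placeholder is
# passed only when linked against 1.2 or newer.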
|
|
|
|
|
|
|
|
sub libsvn_fetch_full {
|
2006-06-13 00:23:48 +02:00
|
|
|
my ($last_commit, $paths, $rev, $author, $date, $msg) = @_;
|
2006-12-15 19:59:54 +01:00
|
|
|
my ($gui, $ctx) = command_input_pipe(qw/update-index -z --index-info/);
|
2006-12-03 01:19:31 +01:00
|
|
|
my %amr;
|
2006-12-12 23:47:00 +01:00
|
|
|
my $ut = { empty => {}, dir_prop => {}, file_prop => {} };
|
2006-11-25 07:38:17 +01:00
|
|
|
my $p = $SVN->{svn_path};
|
2006-06-13 00:23:48 +02:00
|
|
|
foreach my $f (keys %$paths) {
|
|
|
|
my $m = $paths->{$f}->action();
|
2006-11-26 02:38:41 +01:00
|
|
|
if (length $p) {
|
|
|
|
$f =~ s#^/\Q$p\E/##;
|
|
|
|
next if $f =~ m#^/#;
|
|
|
|
} else {
|
|
|
|
$f =~ s#^/##;
|
|
|
|
}
|
2006-06-13 00:23:48 +02:00
|
|
|
if ($m =~ /^[DR]$/) {
|
2006-12-12 23:47:00 +01:00
|
|
|
my $t = process_rm($gui, $last_commit, $f, $_q);
|
|
|
|
if ($m eq 'D') {
|
|
|
|
$ut->{empty}->{$f} = 0 if $t == $SVN::Node::dir;
|
|
|
|
next;
|
|
|
|
}
|
2006-06-13 00:23:48 +02:00
|
|
|
# 'R' can be file replacements, too, right?
|
|
|
|
}
|
|
|
|
my $pool = SVN::Pool->new;
|
|
|
|
my $t = $SVN->check_path($f, $rev, $pool);
|
|
|
|
if ($t == $SVN::Node::file) {
|
|
|
|
if ($m =~ /^[AMR]$/) {
|
2006-12-03 01:19:31 +01:00
|
|
|
$amr{$f} = $m;
|
2006-06-13 00:23:48 +02:00
|
|
|
} else {
|
|
|
|
die "Unrecognized action: $m, ($f r$rev)\n";
|
|
|
|
}
|
2006-07-20 10:43:01 +02:00
|
|
|
} elsif ($t == $SVN::Node::dir && $m =~ /^[AR]$/) {
|
|
|
|
my @traversed = ();
|
2006-12-12 23:47:00 +01:00
|
|
|
libsvn_traverse($gui, '', $f, $rev, \@traversed, $ut);
|
|
|
|
if (@traversed) {
|
|
|
|
foreach (@traversed) {
|
|
|
|
$amr{$_} = $m;
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
my ($dir, $file) = ($f =~ m#^(.*?)/?([^/]+)$#);
|
|
|
|
delete $ut->{empty}->{$dir};
|
|
|
|
$ut->{empty}->{$f} = 1;
|
2006-07-20 10:43:01 +02:00
|
|
|
}
|
2006-06-13 00:23:48 +02:00
|
|
|
}
|
|
|
|
$pool->clear;
|
|
|
|
}
|
2006-12-03 01:19:31 +01:00
|
|
|
foreach (keys %amr) {
|
2006-12-12 23:47:00 +01:00
|
|
|
libsvn_get_file($gui, $_, $rev, $amr{$_}, $ut);
|
|
|
|
my ($d) = ($_ =~ m#^(.*?)/?(?:[^/]+)$#);
|
|
|
|
delete $ut->{empty}->{$d};
|
|
|
|
}
|
|
|
|
unless (exists $ut->{dir_prop}->{''}) {
|
|
|
|
my $pool = SVN::Pool->new;
|
|
|
|
my (undef, undef, $props) = $SVN->get_dir('', $rev, $pool);
|
|
|
|
%{$ut->{dir_prop}->{''}} = %$props;
|
|
|
|
$pool->clear;
|
2006-06-28 04:39:14 +02:00
|
|
|
}
|
2006-12-15 19:59:54 +01:00
|
|
|
command_close_pipe($gui, $ctx);
|
2006-12-12 23:47:00 +01:00
|
|
|
libsvn_log_entry($rev, $author, $date, $msg, [$last_commit], $ut);
|
2006-06-13 00:23:48 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
sub svn_grab_base_rev {
|
2006-12-15 19:59:54 +01:00
|
|
|
my $c = eval { command_oneline([qw/rev-parse --verify/,
|
|
|
|
"refs/remotes/$GIT_SVN^0"],
|
|
|
|
{ STDERR => 0 }) };
|
2006-06-13 00:23:48 +02:00
|
|
|
if (defined $c && length $c) {
|
2006-06-28 04:39:11 +02:00
|
|
|
my ($url, $rev, $uuid) = cmt_metadata($c);
|
2006-06-28 04:39:13 +02:00
|
|
|
return ($rev, $c) if defined $rev;
|
|
|
|
}
|
|
|
|
if ($_no_metadata) {
|
|
|
|
my $offset = -41; # from tail
|
|
|
|
my $rl;
|
|
|
|
open my $fh, '<', $REVDB or
|
|
|
|
die "--no-metadata specified and $REVDB not readable\n";
|
|
|
|
seek $fh, $offset, 2;
|
|
|
|
$rl = readline $fh;
|
|
|
|
defined $rl or return (undef, undef);
|
|
|
|
chomp $rl;
|
|
|
|
while ($c ne $rl && tell $fh != 0) {
|
|
|
|
$offset -= 41;
|
|
|
|
seek $fh, $offset, 2;
|
|
|
|
$rl = readline $fh;
|
|
|
|
defined $rl or return (undef, undef);
|
|
|
|
chomp $rl;
|
|
|
|
}
|
|
|
|
my $rev = tell $fh;
|
|
|
|
croak $! if ($rev < -1);
|
|
|
|
$rev = ($rev - 41) / 41;
|
|
|
|
close $fh or croak $!;
|
2006-06-13 00:23:48 +02:00
|
|
|
return ($rev, $c);
|
|
|
|
}
|
|
|
|
return (undef, undef);
|
|
|
|
}
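# Illustrative aside (not part of the original script): the seek arithmetic
# above only works if .rev_db is a flat file of fixed 41-byte records (a
# 40-character commit id plus "\n"), with record N belonging to SVN
# revision N.  Under that assumption, mapping a file offset back to a
# revision number is simply:
#
#   my $pos = tell $fh;            # offset just past the matching record
#   my $rev = ($pos - 41) / 41;    # revision whose record ends at $pos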
|
|
|
|
|
|
|
|
sub libsvn_parse_revision {
|
|
|
|
my $base = shift;
|
|
|
|
my $head = $SVN->get_latest_revnum();
|
|
|
|
if (!defined $_revision || $_revision eq 'BASE:HEAD') {
|
|
|
|
return ($base + 1, $head) if (defined $base);
|
|
|
|
return (0, $head);
|
|
|
|
}
|
|
|
|
return ($1, $2) if ($_revision =~ /^(\d+):(\d+)$/);
|
|
|
|
return ($_revision, $_revision) if ($_revision =~ /^\d+$/);
|
|
|
|
if ($_revision =~ /^BASE:(\d+)$/) {
|
|
|
|
return ($base + 1, $1) if (defined $base);
|
|
|
|
return (0, $head);
|
|
|
|
}
|
|
|
|
return ($1, $head) if ($_revision =~ /^(\d+):HEAD$/);
|
|
|
|
die "revision argument: $_revision not understood by git-svn\n",
|
|
|
|
"Try using the command-line svn client instead\n";
|
|
|
|
}
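# Illustrative aside (not part of the original script): a summary of how
# libsvn_parse_revision maps the --revision argument onto a (start, end)
# pair, assuming $base is the last imported revision and $head is the
# repository's newest revision:
#
#   undef or 'BASE:HEAD'  =>  ($base + 1, $head), or (0, $head) without a base
#   '100:200'             =>  (100, 200)
#   '150'                 =>  (150, 150)
#   'BASE:200'            =>  ($base + 1, 200), or (0, $head) without a base
#   '100:HEAD'            =>  (100, $head)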
|
|
|
|
|
|
|
|
sub libsvn_traverse {
|
2006-12-12 23:47:00 +01:00
|
|
|
my ($gui, $pfx, $path, $rev, $files, $untracked) = @_;
|
2006-11-25 07:38:17 +01:00
|
|
|
my $cwd = length $pfx ? "$pfx/$path" : $path;
|
2006-06-13 00:23:48 +02:00
|
|
|
my $pool = SVN::Pool->new;
|
2006-11-25 07:38:17 +01:00
|
|
|
$cwd =~ s#^\Q$SVN->{svn_path}\E##;
|
2006-12-12 23:47:00 +01:00
|
|
|
my $nr = 0;
|
2006-06-13 00:23:48 +02:00
|
|
|
my ($dirent, $r, $props) = $SVN->get_dir($cwd, $rev, $pool);
|
2006-12-12 23:47:00 +01:00
|
|
|
%{$untracked->{dir_prop}->{$cwd}} = %$props;
|
2006-06-13 00:23:48 +02:00
|
|
|
foreach my $d (keys %$dirent) {
|
|
|
|
my $t = $dirent->{$d}->kind;
|
|
|
|
if ($t == $SVN::Node::dir) {
|
2006-12-12 23:47:00 +01:00
|
|
|
my $i = libsvn_traverse($gui, $cwd, $d, $rev,
|
|
|
|
$files, $untracked);
|
|
|
|
if ($i) {
|
|
|
|
$nr += $i;
|
|
|
|
} else {
|
|
|
|
$untracked->{empty}->{"$cwd/$d"} = 1;
|
|
|
|
}
|
2006-06-13 00:23:48 +02:00
|
|
|
} elsif ($t == $SVN::Node::file) {
|
2006-12-12 23:47:00 +01:00
|
|
|
$nr++;
|
2006-07-20 10:43:01 +02:00
|
|
|
my $file = "$cwd/$d";
|
|
|
|
if (defined $files) {
|
|
|
|
push @$files, $file;
|
|
|
|
} else {
|
2006-12-12 23:47:00 +01:00
|
|
|
libsvn_get_file($gui, $file, $rev, 'A',
|
|
|
|
$untracked);
|
|
|
|
my ($dir) = ($file =~ m#^(.*?)/?(?:[^/]+)$#);
|
|
|
|
delete $untracked->{empty}->{$dir};
|
2006-07-20 10:43:01 +02:00
|
|
|
}
|
2006-06-13 00:23:48 +02:00
|
|
|
}
|
|
|
|
}
|
|
|
|
$pool->clear;
|
2006-12-12 23:47:00 +01:00
|
|
|
$nr;
|
2006-06-13 00:23:48 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
sub libsvn_traverse_ignore {
|
|
|
|
my ($fh, $path, $r) = @_;
|
|
|
|
$path =~ s#^/+##g;
|
|
|
|
my $pool = SVN::Pool->new;
|
|
|
|
my ($dirent, undef, $props) = $SVN->get_dir($path, $r, $pool);
|
|
|
|
my $p = $path;
|
2006-11-25 07:38:17 +01:00
|
|
|
$p =~ s#^\Q$SVN->{svn_path}\E/##;
|
2006-06-13 00:23:48 +02:00
|
|
|
print $fh length $p ? "\n# $p\n" : "\n# /\n";
|
|
|
|
if (my $s = $props->{'svn:ignore'}) {
|
|
|
|
$s =~ s/[\r\n]+/\n/g;
|
|
|
|
chomp $s;
|
|
|
|
if (length $p == 0) {
|
|
|
|
$s =~ s#\n#\n/$p#g;
|
|
|
|
print $fh "/$s\n";
|
|
|
|
} else {
|
|
|
|
$s =~ s#\n#\n/$p/#g;
|
|
|
|
print $fh "/$p/$s\n";
|
|
|
|
}
|
|
|
|
}
|
|
|
|
foreach (sort keys %$dirent) {
|
|
|
|
next if $dirent->{$_}->kind != $SVN::Node::dir;
|
|
|
|
libsvn_traverse_ignore($fh, "$path/$_", $r);
|
|
|
|
}
|
|
|
|
$pool->clear;
|
|
|
|
}
|
|
|
|
|
2006-06-13 13:02:23 +02:00
|
|
|
sub revisions_eq {
|
|
|
|
my ($path, $r0, $r1) = @_;
|
|
|
|
return 1 if $r0 == $r1;
|
|
|
|
my $nr = 0;
|
2006-12-16 08:58:07 +01:00
|
|
|
# should be OK to use Pool here (r1 - r0) should be small
|
|
|
|
my $pool = SVN::Pool->new;
|
|
|
|
libsvn_get_log($SVN, [$path], $r0, $r1,
|
|
|
|
0, 0, 1, sub {$nr++}, $pool);
|
|
|
|
$pool->clear;
|
2006-06-13 13:02:23 +02:00
|
|
|
return 0 if ($nr > 1);
|
|
|
|
return 1;
|
|
|
|
}
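In other words (hypothetical revisions): revisions_eq() treats a path as unchanged when the log for the range contains at most one entry, so
	revisions_eq('trunk', 100, 100);   # always 1 (same revision)
	revisions_eq('trunk', 100, 105);   # 1 only if trunk saw no commits in between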
|
|
|
|
|
|
|
|
sub libsvn_find_parent_branch {
|
2006-06-13 00:23:48 +02:00
|
|
|
my ($paths, $rev, $author, $date, $msg) = @_;
|
2006-11-25 07:38:17 +01:00
|
|
|
my $svn_path = '/'.$SVN->{svn_path};
|
2006-06-13 00:23:48 +02:00
|
|
|
|
|
|
|
# look for a parent from another branch:
|
2006-06-15 21:50:12 +02:00
|
|
|
my $i = $paths->{$svn_path} or return;
|
|
|
|
my $branch_from = $i->copyfrom_path or return;
|
|
|
|
my $r = $i->copyfrom_rev;
|
|
|
|
print STDERR "Found possible branch point: ",
|
|
|
|
"$branch_from => $svn_path, $r\n";
|
|
|
|
$branch_from =~ s#^/##;
|
2006-06-28 04:39:13 +02:00
|
|
|
my $l_map = {};
|
|
|
|
read_url_paths_all($l_map, '', "$GIT_DIR/svn");
|
2006-11-25 07:38:17 +01:00
|
|
|
my $url = $SVN->{repos_root};
|
2006-06-15 21:50:12 +02:00
|
|
|
defined $l_map->{$url} or return;
|
2006-06-28 04:39:13 +02:00
|
|
|
my $id = $l_map->{$url}->{$branch_from};
|
|
|
|
if (!defined $id && $_follow_parent) {
|
|
|
|
print STDERR "Following parent: $branch_from\@$r\n";
|
|
|
|
# auto create a new branch and follow it
|
|
|
|
$id = basename($branch_from);
|
|
|
|
$id .= '@'.$r if -r "$GIT_DIR/svn/$id";
|
|
|
|
while (-r "$GIT_DIR/svn/$id") {
|
|
|
|
# just grow a tail if we're not unique enough :x
|
|
|
|
$id .= '-';
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return unless defined $id;
|
|
|
|
|
2006-06-15 21:50:12 +02:00
|
|
|
my ($r0, $parent) = find_rev_before($r,$id,1);
|
2006-06-28 04:39:13 +02:00
|
|
|
if ($_follow_parent && (!defined $r0 || !defined $parent)) {
|
|
|
|
defined(my $pid = fork) or croak $!;
|
|
|
|
if (!$pid) {
|
|
|
|
$GIT_SVN = $ENV{GIT_SVN_ID} = $id;
|
|
|
|
init_vars();
|
|
|
|
$SVN_URL = "$url/$branch_from";
|
2006-11-25 07:38:17 +01:00
|
|
|
$SVN = undef;
|
2006-06-28 04:39:13 +02:00
|
|
|
setup_git_svn();
|
|
|
|
# we can't assume SVN_URL exists at r+1:
|
|
|
|
$_revision = "0:$r";
|
|
|
|
fetch_lib();
|
|
|
|
exit 0;
|
|
|
|
}
|
|
|
|
waitpid $pid, 0;
|
|
|
|
croak $? if $?;
|
|
|
|
($r0, $parent) = find_rev_before($r,$id,1);
|
|
|
|
}
|
2006-06-15 21:50:12 +02:00
|
|
|
return unless (defined $r0 && defined $parent);
|
|
|
|
if (revisions_eq($branch_from, $r0, $r)) {
|
|
|
|
unlink $GIT_SVN_INDEX;
|
2006-06-28 04:39:13 +02:00
|
|
|
print STDERR "Found branch parent: ($GIT_SVN) $parent\n";
|
2006-12-15 19:59:54 +01:00
|
|
|
command_noisy('read-tree', $parent);
|
2006-12-08 11:20:17 +01:00
|
|
|
unless (libsvn_can_do_switch()) {
|
|
|
|
return libsvn_fetch_full($parent, $paths, $rev,
|
|
|
|
$author, $date, $msg);
|
|
|
|
}
|
|
|
|
# do_switch works with svn/trunk >= r22312, but that is not
|
|
|
|
# included with SVN 1.4.2 (the latest version at the moment),
|
|
|
|
# so we can't rely on it.
|
|
|
|
my $ra = libsvn_connect("$url/$branch_from");
|
2006-12-15 19:59:54 +01:00
|
|
|
my $ed = SVN::Git::Fetcher->new({c => $parent, q => $_q });
|
2006-12-08 11:20:17 +01:00
|
|
|
my $pool = SVN::Pool->new;
|
|
|
|
my $reporter = $ra->do_switch($rev, '', 1, $SVN->{url},
|
|
|
|
$ed, $pool);
|
|
|
|
my @lock = $SVN::Core::VERSION ge '1.2.0' ? (undef) : ();
|
|
|
|
$reporter->set_path('', $r0, 0, @lock, $pool);
|
|
|
|
$reporter->finish_report($pool);
|
|
|
|
$pool->clear;
|
|
|
|
unless ($ed->{git_commit_ok}) {
|
|
|
|
die "SVN connection failed somewhere...\n";
|
|
|
|
}
|
|
|
|
return libsvn_log_entry($rev, $author, $date, $msg, [$parent]);
|
2006-06-15 21:50:12 +02:00
|
|
|
}
|
|
|
|
print STDERR "Nope, branch point not imported or unknown\n";
|
2006-06-13 13:02:23 +02:00
|
|
|
return undef;
|
|
|
|
}
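A hypothetical walk-through of the auto-created branch naming above (values invented for illustration):
	# basename('branches/foo')    => id 'foo'
	# .git/svn/foo exists         => id 'foo@42'   (with $r == 42)
	# .git/svn/foo@42 exists too  => id 'foo@42-', 'foo@42--', ... until free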
|
|
|
|
|
git-svn: SVN 1.1.x library compatibility
Tested on a plain Ubuntu Hoary installation
using subversion 1.1.1-2ubuntu3
1.1.x issues I had to deal with:
* Avoid the noisy command-line client compatibility check if we
use the libraries.
* get_log() arguments differ (now using a nice wrapper from
Junio's suggestion)
* get_file() is picky about what kind of file handles it gets,
so I ended up redirecting STDOUT. I'm probably overflushing
my file handles, but that's the safest thing to do...
* BDB kept segfaulting on me during tests, so svnadmin will use FSFS
whenever we can.
* If somebody used an expanded CVS $Id$ line inside a file, then
propsetting it to use svn:keywords will cause the original CVS
$Id$ to be retained when asked for the original file. As far as
I can see, this is a server-side issue. We won't care in the
test anymore; as long as it's not expanded by SVN, a static
CVS $Id$ line is fine.
While we're at it and making ourselves more compatible, avoid
using grep with the -q flag, which is GNU-specific. (grep
avoidance tip from Junio, too)
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-06-28 12:07:14 +02:00
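To make the get_log() wrapper concrete, here is the shape of a call with the 1.2-style argument list (it mirrors the call in revisions_eq above; on 1.1.x bindings the wrapper splices out the limit before invoking $ra->get_log):
	libsvn_get_log($SVN, [$path], $r0, $r1,
	               0,       # limit, dropped when $SVN::Core::VERSION le '1.2.0'
	               0, 1, sub { $nr++ }, $pool);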
|
|
|
sub libsvn_get_log {
|
|
|
|
my ($ra, @args) = @_;
|
2006-11-28 11:50:17 +01:00
|
|
|
$args[4]-- if $args[4] && $_xfer_delta && ! $_follow_parent;
|
2006-06-28 12:07:14 +02:00
|
|
|
if ($SVN::Core::VERSION le '1.2.0') {
|
|
|
|
splice(@args, 3, 1);
|
|
|
|
}
|
|
|
|
$ra->get_log(@args);
|
|
|
|
}
|
|
|
|
|
2006-06-13 13:02:23 +02:00
|
|
|
sub libsvn_new_tree {
|
|
|
|
if (my $log_entry = libsvn_find_parent_branch(@_)) {
|
|
|
|
return $log_entry;
|
|
|
|
}
|
|
|
|
my ($paths, $rev, $author, $date, $msg) = @_;
|
2006-12-12 23:47:00 +01:00
|
|
|
my $ut;
|
2006-11-28 06:44:48 +01:00
|
|
|
if ($_xfer_delta) {
|
|
|
|
my $pool = SVN::Pool->new;
|
2006-11-28 11:50:17 +01:00
|
|
|
my $ed = SVN::Git::Fetcher->new({q => $_q});
|
2006-11-28 06:44:48 +01:00
|
|
|
my $reporter = $SVN->do_update($rev, '', 1, $ed, $pool);
|
|
|
|
my @lock = $SVN::Core::VERSION ge '1.2.0' ? (undef) : ();
|
|
|
|
$reporter->set_path('', $rev, 1, @lock, $pool);
|
|
|
|
$reporter->finish_report($pool);
|
|
|
|
$pool->clear;
|
2006-11-28 23:06:05 +01:00
|
|
|
unless ($ed->{git_commit_ok}) {
|
|
|
|
die "SVN connection failed somewhere...\n";
|
|
|
|
}
|
2006-12-12 23:47:00 +01:00
|
|
|
$ut = $ed;
|
2006-11-28 06:44:48 +01:00
|
|
|
} else {
|
2006-12-12 23:47:00 +01:00
|
|
|
$ut = { empty => {}, dir_prop => {}, file_prop => {} };
|
2006-12-15 19:59:54 +01:00
|
|
|
my ($gui, $ctx) = command_input_pipe(qw/update-index
|
|
|
|
-z --index-info/);
|
2006-12-12 23:47:00 +01:00
|
|
|
libsvn_traverse($gui, '', $SVN->{svn_path}, $rev, undef, $ut);
|
2006-12-15 19:59:54 +01:00
|
|
|
command_close_pipe($gui, $ctx);
|
2006-11-28 06:44:48 +01:00
|
|
|
}
|
2006-12-12 23:47:00 +01:00
|
|
|
libsvn_log_entry($rev, $author, $date, $msg, [], $ut);
|
2006-06-13 00:23:48 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
sub find_graft_path_commit {
|
|
|
|
my ($tree_paths, $p1, $r1) = @_;
|
|
|
|
foreach my $x (keys %$tree_paths) {
|
|
|
|
next unless ($p1 =~ /^\Q$x\E/);
|
|
|
|
my $i = $tree_paths->{$x};
|
2006-06-13 13:02:23 +02:00
|
|
|
my ($r0, $parent) = find_rev_before($r1,$i,1);
|
|
|
|
return $parent if (defined $r0 && $r0 == $r1);
|
2006-06-13 00:23:48 +02:00
|
|
|
print STDERR "r$r1 of $i not imported\n";
|
|
|
|
next;
|
|
|
|
}
|
|
|
|
return undef;
|
|
|
|
}
|
|
|
|
|
|
|
|
sub find_graft_path_parents {
|
|
|
|
my ($grafts, $tree_paths, $c, $p0, $r0) = @_;
|
|
|
|
foreach my $x (keys %$tree_paths) {
|
|
|
|
next unless ($p0 =~ /^\Q$x\E/);
|
|
|
|
my $i = $tree_paths->{$x};
|
2006-06-13 13:02:23 +02:00
|
|
|
my ($r, $parent) = find_rev_before($r0, $i, 1);
|
|
|
|
if (defined $r && defined $parent && revisions_eq($x,$r,$r0)) {
|
2006-06-28 04:39:11 +02:00
|
|
|
my ($url_b, undef, $uuid_b) = cmt_metadata($c);
|
|
|
|
my ($url_a, undef, $uuid_a) = cmt_metadata($parent);
|
|
|
|
next if ($url_a && $url_b && $url_a eq $url_b &&
|
|
|
|
$uuid_b eq $uuid_a);
|
2006-06-13 13:02:23 +02:00
|
|
|
$grafts->{$c}->{$parent} = 1;
|
2006-06-13 00:23:48 +02:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
sub libsvn_graft_file_copies {
|
|
|
|
my ($grafts, $tree_paths, $path, $paths, $rev) = @_;
|
|
|
|
foreach (keys %$paths) {
|
|
|
|
my $i = $paths->{$_};
|
|
|
|
my ($m, $p0, $r0) = ($i->action, $i->copyfrom_path,
|
|
|
|
$i->copyfrom_rev);
|
|
|
|
next unless (defined $p0 && defined $r0);
|
|
|
|
|
|
|
|
my $p1 = $_;
|
|
|
|
$p1 =~ s#^/##;
|
|
|
|
$p0 =~ s#^/##;
|
|
|
|
my $c = find_graft_path_commit($tree_paths, $p1, $rev);
|
|
|
|
next unless $c;
|
|
|
|
find_graft_path_parents($grafts, $tree_paths, $c, $p0, $r0);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
sub set_index {
|
|
|
|
my $old = $ENV{GIT_INDEX_FILE};
|
|
|
|
$ENV{GIT_INDEX_FILE} = shift;
|
|
|
|
return $old;
|
|
|
|
}
|
|
|
|
|
|
|
|
sub restore_index {
|
|
|
|
my ($old) = @_;
|
|
|
|
if (defined $old) {
|
|
|
|
$ENV{GIT_INDEX_FILE} = $old;
|
|
|
|
} else {
|
|
|
|
delete $ENV{GIT_INDEX_FILE};
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
sub libsvn_commit_cb {
|
|
|
|
my ($rev, $date, $committer, $c, $msg, $r_last, $cmt_last) = @_;
|
2006-06-13 13:02:23 +02:00
|
|
|
if ($_optimize_commits && $rev == ($r_last + 1)) {
|
2006-06-13 00:23:48 +02:00
|
|
|
my $log = libsvn_log_entry($rev,$committer,$date,$msg);
|
|
|
|
$log->{tree} = get_tree_from_treeish($c);
|
|
|
|
my $cmt = git_commit($log, $cmt_last, $c);
|
2006-12-15 19:59:54 +01:00
|
|
|
my @diff = command('diff-tree', $cmt, $c);
|
2006-06-13 00:23:48 +02:00
|
|
|
if (@diff) {
|
|
|
|
print STDERR "Trees differ: $cmt $c\n",
|
|
|
|
join('',@diff),"\n";
|
|
|
|
exit 1;
|
|
|
|
}
|
|
|
|
} else {
|
2006-06-15 21:50:12 +02:00
|
|
|
fetch("$rev=$c");
|
2006-06-13 00:23:48 +02:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
sub libsvn_ls_fullurl {
|
|
|
|
my $fullurl = shift;
|
2006-11-29 03:51:42 +01:00
|
|
|
my $ra = libsvn_connect($fullurl);
|
2006-06-13 00:23:48 +02:00
|
|
|
my @ret;
|
|
|
|
my $pool = SVN::Pool->new;
|
2006-11-29 03:51:42 +01:00
|
|
|
my $r = defined $_revision ? $_revision : $ra->get_latest_revnum;
|
|
|
|
my ($dirent, undef, undef) = $ra->get_dir('', $r, $pool);
|
2006-06-13 00:23:48 +02:00
|
|
|
foreach my $d (keys %$dirent) {
|
|
|
|
if ($dirent->{$d}->kind == $SVN::Node::dir) {
|
|
|
|
push @ret, "$d/"; # add '/' for compat with cli svn
|
|
|
|
}
|
|
|
|
}
|
|
|
|
$pool->clear;
|
|
|
|
return @ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
sub libsvn_skip_unknown_revs {
|
|
|
|
my $err = shift;
|
|
|
|
my $errno = $err->apr_err();
|
|
|
|
# Maybe the branch we're tracking didn't
|
|
|
|
# exist when the repo started, so it's
|
|
|
|
# not an error if it doesn't, just continue
|
|
|
|
#
|
|
|
|
# Wonderfully consistent library, eh?
|
|
|
|
# 160013 - svn:// and file://
|
|
|
|
# 175002 - http(s)://
|
2006-11-25 07:38:17 +01:00
|
|
|
# 175007 - http(s):// (this repo required authorization, too...)
|
2006-06-13 00:23:48 +02:00
|
|
|
# More codes may be discovered later...
|
2006-11-25 07:38:17 +01:00
|
|
|
if ($errno == 175007 || $errno == 175002 || $errno == 160013) {
|
2006-06-13 00:23:48 +02:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
croak "Error from SVN, ($errno): ", $err->expanded_message,"\n";
|
|
|
|
};
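One plausible way to route RA errors through the filter above is the bindings' global error hook (a sketch, assuming the $SVN::Error::handler hook provided by the SVN Perl bindings; $max and $receiver stand in for whatever range and callback the caller uses, and the real call sites live elsewhere in this script):
	local $SVN::Error::handler = \&libsvn_skip_unknown_revs;
	libsvn_get_log($SVN, [''], 0, $max, 0, 1, 1, $receiver, $pool);
	# the known "unknown revision" codes are ignored, anything else croaks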
|
|
|
|
|
2006-06-13 13:02:23 +02:00
|
|
|
# Tie::File seems to be prone to offset errors if revisions get sparse,
|
|
|
|
# it's not that fast, either. Tie::File is also not in Perl 5.6. So
|
|
|
|
# one of my favorite modules is out :< Next up would be one of the DBM
|
|
|
|
# modules, but I'm not sure which is most portable... So I'll just
|
|
|
|
# go with something that's plain-text, but still capable of
|
|
|
|
# being randomly accessed. So here's my ultra-simple fixed-width
|
|
|
|
# database. All records are 40 characters + "\n", so it's easy to seek
|
|
|
|
# to a revision: (41 * rev) is the byte offset.
|
|
|
|
# A record of 40 0s denotes an empty revision.
|
|
|
|
# And yes, it's still pretty fast (faster than Tie::File).
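# Worked example (hypothetical values): revision 7 starts at byte
# offset 7 * 41 = 287; the 40 bytes there hold either a commit sha1
# or forty '0's, meaning "no commit recorded for this revision".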
|
|
|
|
sub revdb_set {
|
|
|
|
my ($file, $rev, $commit) = @_;
|
|
|
|
length $commit == 40 or croak "arg3 must be a full SHA1 hexsum\n";
|
|
|
|
open my $fh, '+<', $file or croak $!;
|
|
|
|
my $offset = $rev * 41;
|
|
|
|
# assume that append is the common case:
|
|
|
|
seek $fh, 0, 2 or croak $!;
|
|
|
|
my $pos = tell $fh;
|
|
|
|
if ($pos < $offset) {
|
|
|
|
print $fh (('0' x 40),"\n") x (($offset - $pos) / 41);
|
|
|
|
}
|
|
|
|
seek $fh, $offset, 0 or croak $!;
|
|
|
|
print $fh $commit,"\n";
|
|
|
|
close $fh or croak $!;
|
|
|
|
}
|
|
|
|
|
|
|
|
sub revdb_get {
|
|
|
|
my ($file, $rev) = @_;
|
|
|
|
my $ret;
|
|
|
|
my $offset = $rev * 41;
|
|
|
|
open my $fh, '<', $file or croak $!;
|
|
|
|
seek $fh, $offset, 0;
|
|
|
|
if (tell $fh == $offset) {
|
|
|
|
$ret = readline $fh;
|
|
|
|
if (defined $ret) {
|
|
|
|
chomp $ret;
|
|
|
|
$ret = undef if ($ret =~ /^0{40}$/);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
close $fh or croak $!;
|
|
|
|
return $ret;
|
|
|
|
}
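A minimal round trip through the two helpers above (file name and revisions are hypothetical; 'deadbeef' x 5 is just a stand-in 40-character sha1):
	my $db = "$GIT_DIR/svn/git-svn/.rev_db";
	revdb_set($db, 7, 'deadbeef' x 5);   # pads any skipped revisions with zero records
	my $c7 = revdb_get($db, 7);          # 'deadbeef' x 5
	my $c6 = revdb_get($db, 6);          # undef if r6 was never mapped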
|
|
|
|
|
2006-06-16 11:55:13 +02:00
|
|
|
sub copy_remote_ref {
|
|
|
|
my $origin = $_cp_remote ? $_cp_remote : 'origin';
|
|
|
|
my $ref = "refs/remotes/$GIT_SVN";
|
2006-12-15 19:59:54 +01:00
|
|
|
if (command('ls-remote', $origin, $ref)) {
|
|
|
|
command_noisy('fetch', $origin, "$ref:$ref");
|
2006-11-05 06:51:11 +01:00
|
|
|
} elsif ($_cp_remote && !$_upgrade) {
|
2006-06-16 11:55:13 +02:00
|
|
|
die "Unable to find remote reference: ",
|
|
|
|
"refs/remotes/$GIT_SVN on $origin\n";
|
|
|
|
}
|
|
|
|
}
|
2006-12-16 08:58:07 +01:00
|
|
|
|
|
|
|
{
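# (explanatory note) referencing these package variables once more here
# keeps Perl's "Name used only once: possible typo" warnings quiet;
# the concatenated result itself is never used.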
|
|
|
|
my $kill_stupid_warnings = $SVN::Node::none.$SVN::Node::file.
|
|
|
|
$SVN::Node::dir.$SVN::Node::unknown.
|
|
|
|
$SVN::Node::none.$SVN::Node::file.
|
|
|
|
$SVN::Node::dir.$SVN::Node::unknown.
|
|
|
|
$SVN::Auth::SSL::CNMISMATCH.
|
|
|
|
$SVN::Auth::SSL::NOTYETVALID.
|
|
|
|
$SVN::Auth::SSL::EXPIRED.
|
|
|
|
$SVN::Auth::SSL::UNKNOWNCA.
|
|
|
|
$SVN::Auth::SSL::OTHER;
|
|
|
|
}
|
|
|
|
|
2006-11-28 06:44:48 +01:00
|
|
|
package SVN::Git::Fetcher;
|
|
|
|
use vars qw/@ISA/;
|
|
|
|
use strict;
|
|
|
|
use warnings;
|
|
|
|
use Carp qw/croak/;
|
|
|
|
use IO::File qw//;
|
2006-12-15 19:59:54 +01:00
|
|
|
use Git qw/command command_oneline command_noisy
|
|
|
|
command_output_pipe command_input_pipe command_close_pipe/;
|
2006-11-28 06:44:48 +01:00
|
|
|
|
|
|
|
# file baton members: path, mode_a, mode_b, pool, fh, blob, base
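# e.g. (hypothetical values) open_file() below yields roughly:
#   { path => 'trunk/README', mode_a => '100644', mode_b => '100644',
#     blob => $sha1, pool => SVN::Pool->new, action => 'M' }
# with fh and base filled in later by apply_textdelta().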
|
|
|
|
sub new {
|
|
|
|
my ($class, $git_svn) = @_;
|
|
|
|
my $self = SVN::Delta::Editor->new;
|
|
|
|
bless $self, $class;
|
|
|
|
$self->{c} = $git_svn->{c} if exists $git_svn->{c};
|
2006-11-28 11:50:17 +01:00
|
|
|
$self->{q} = $git_svn->{q};
|
2006-12-12 23:47:00 +01:00
|
|
|
$self->{empty} = {};
|
|
|
|
$self->{dir_prop} = {};
|
|
|
|
$self->{file_prop} = {};
|
|
|
|
$self->{absent_dir} = {};
|
|
|
|
$self->{absent_file} = {};
|
2006-12-15 19:59:54 +01:00
|
|
|
($self->{gui}, $self->{ctx}) = command_input_pipe(
|
|
|
|
qw/update-index -z --index-info/);
|
2006-11-28 06:44:48 +01:00
|
|
|
require Digest::MD5;
|
|
|
|
$self;
|
|
|
|
}
|
|
|
|
|
2006-12-12 23:47:00 +01:00
|
|
|
sub open_root {
|
|
|
|
{ path => '' };
|
|
|
|
}
|
|
|
|
|
|
|
|
sub open_directory {
|
|
|
|
my ($self, $path, $pb, $rev) = @_;
|
|
|
|
{ path => $path };
|
|
|
|
}
|
|
|
|
|
2006-11-28 06:44:48 +01:00
|
|
|
sub delete_entry {
|
|
|
|
my ($self, $path, $rev, $pb) = @_;
|
2006-12-12 23:47:00 +01:00
|
|
|
my $t = process_rm($self->{gui}, $self->{c}, $path, $self->{q});
|
|
|
|
$self->{empty}->{$path} = 0 if $t == $SVN::Node::dir;
|
2006-11-28 06:44:48 +01:00
|
|
|
undef;
|
|
|
|
}
|
|
|
|
|
|
|
|
sub open_file {
|
|
|
|
my ($self, $path, $pb, $rev) = @_;
|
2006-12-15 19:59:54 +01:00
|
|
|
my ($mode, $blob) = (command('ls-tree', $self->{c}, '--',$path)
|
2006-11-28 06:44:48 +01:00
|
|
|
=~ /^(\d{6}) blob ([a-f\d]{40})\t/);
|
2006-12-08 10:55:19 +01:00
|
|
|
unless (defined $mode && defined $blob) {
|
|
|
|
die "$path was not found in commit $self->{c} (r$rev)\n";
|
|
|
|
}
|
2006-11-28 06:44:48 +01:00
|
|
|
{ path => $path, mode_a => $mode, mode_b => $mode, blob => $blob,
|
2006-11-28 11:50:17 +01:00
|
|
|
pool => SVN::Pool->new, action => 'M' };
|
2006-11-28 06:44:48 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
sub add_file {
|
|
|
|
my ($self, $path, $pb, $cp_path, $cp_rev) = @_;
|
2006-12-12 23:47:00 +01:00
|
|
|
my ($dir, $file) = ($path =~ m#^(.*?)/?([^/]+)$#);
|
|
|
|
delete $self->{empty}->{$dir};
|
2006-11-28 06:44:48 +01:00
|
|
|
{ path => $path, mode_a => 100644, mode_b => 100644,
|
2006-11-28 11:50:17 +01:00
|
|
|
pool => SVN::Pool->new, action => 'A' };
|
2006-11-28 06:44:48 +01:00
|
|
|
}
|
|
|
|
|
2006-12-12 23:47:00 +01:00
|
|
|
sub add_directory {
|
|
|
|
my ($self, $path, $cp_path, $cp_rev) = @_;
|
|
|
|
my ($dir, $file) = ($path =~ m#^(.*?)/?([^/]+)$#);
|
|
|
|
delete $self->{empty}->{$dir};
|
|
|
|
$self->{empty}->{$path} = 1;
|
|
|
|
{ path => $path };
|
|
|
|
}
|
|
|
|
|
|
|
|
sub change_dir_prop {
|
|
|
|
my ($self, $db, $prop, $value) = @_;
|
|
|
|
$self->{dir_prop}->{$db->{path}} ||= {};
|
|
|
|
$self->{dir_prop}->{$db->{path}}->{$prop} = $value;
|
|
|
|
undef;
|
|
|
|
}
|
|
|
|
|
|
|
|
sub absent_directory {
|
|
|
|
my ($self, $path, $pb) = @_;
|
|
|
|
$self->{absent_dir}->{$pb->{path}} ||= [];
|
|
|
|
push @{$self->{absent_dir}->{$pb->{path}}}, $path;
|
|
|
|
undef;
|
|
|
|
}
|
|
|
|
|
|
|
|
sub absent_file {
|
|
|
|
my ($self, $path, $pb) = @_;
|
|
|
|
$self->{absent_file}->{$pb->{path}} ||= [];
|
|
|
|
push @{$self->{absent_file}->{$pb->{path}}}, $path;
|
|
|
|
undef;
|
|
|
|
}
|
|
|
|
|
2006-11-28 06:44:48 +01:00
|
|
|
sub change_file_prop {
|
|
|
|
my ($self, $fb, $prop, $value) = @_;
|
|
|
|
if ($prop eq 'svn:executable') {
|
|
|
|
if ($fb->{mode_b} != 120000) {
|
|
|
|
$fb->{mode_b} = defined $value ? 100755 : 100644;
|
|
|
|
}
|
|
|
|
} elsif ($prop eq 'svn:special') {
|
|
|
|
$fb->{mode_b} = defined $value ? 120000 : 100644;
|
2006-12-12 23:47:00 +01:00
|
|
|
} else {
|
|
|
|
$self->{file_prop}->{$fb->{path}} ||= {};
|
|
|
|
$self->{file_prop}->{$fb->{path}}->{$prop} = $value;
|
2006-11-28 06:44:48 +01:00
|
|
|
}
|
|
|
|
undef;
|
|
|
|
}
|
|
|
|
|
|
|
|
sub apply_textdelta {
|
|
|
|
my ($self, $fb, $exp) = @_;
|
|
|
|
my $fh = IO::File->new_tmpfile;
|
|
|
|
$fh->autoflush(1);
|
|
|
|
# $fh gets auto-closed() by SVN::TxDelta::apply(),
|
|
|
|
# (but $base does not,) so dup() it for reading in close_file
|
|
|
|
open my $dup, '<&', $fh or croak $!;
|
|
|
|
my $base = IO::File->new_tmpfile;
|
|
|
|
$base->autoflush(1);
|
|
|
|
if ($fb->{blob}) {
|
|
|
|
defined (my $pid = fork) or croak $!;
|
|
|
|
if (!$pid) {
|
|
|
|
open STDOUT, '>&', $base or croak $!;
|
|
|
|
print STDOUT 'link ' if ($fb->{mode_a} == 120000);
|
|
|
|
exec qw/git-cat-file blob/, $fb->{blob} or croak $!;
|
|
|
|
}
|
|
|
|
waitpid $pid, 0;
|
|
|
|
croak $? if $?;
|
|
|
|
|
|
|
|
if (defined $exp) {
|
|
|
|
seek $base, 0, 0 or croak $!;
|
|
|
|
my $md5 = Digest::MD5->new;
|
|
|
|
$md5->addfile($base);
|
|
|
|
my $got = $md5->hexdigest;
|
|
|
|
die "Checksum mismatch: $fb->{path} $fb->{blob}\n",
|
|
|
|
"expected: $exp\n",
|
|
|
|
" got: $got\n" if ($got ne $exp);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
seek $base, 0, 0 or croak $!;
|
|
|
|
$fb->{fh} = $dup;
|
|
|
|
$fb->{base} = $base;
|
|
|
|
[ SVN::TxDelta::apply($base, $fh, undef, $fb->{path}, $fb->{pool}) ];
|
|
|
|
}
|
|
|
|
|
|
|
|
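# fetch-side close_file: verify the reconstructed file against the MD5 the
# server reports, insist on the 'link ' prefix for mode 120000 entries,
# hash the contents with git-hash-object -w, and emit a
# "mode sha1\tpath\0" record on the $self->{gui} pipe.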
sub close_file {
|
|
|
|
my ($self, $fb, $exp) = @_;
|
|
|
|
my $hash;
|
|
|
|
my $path = $fb->{path};
|
|
|
|
if (my $fh = $fb->{fh}) {
|
|
|
|
seek($fh, 0, 0) or croak $!;
|
|
|
|
my $md5 = Digest::MD5->new;
|
|
|
|
$md5->addfile($fh);
|
|
|
|
my $got = $md5->hexdigest;
|
|
|
|
die "Checksum mismatch: $path\n",
|
|
|
|
"expected: $exp\n got: $got\n" if ($got ne $exp);
|
|
|
|
seek($fh, 0, 0) or croak $!;
|
|
|
|
if ($fb->{mode_b} == 120000) {
|
|
|
|
read($fh, my $buf, 5) == 5 or croak $!;
|
|
|
|
$buf eq 'link ' or die "$path has mode 120000 ",
|
|
|
|
"but is not a link\n";
|
|
|
|
}
|
|
|
|
defined(my $pid = open my $out, '-|') or die "Can't fork: $!\n";
|
|
|
|
if (!$pid) {
|
|
|
|
open STDIN, '<&', $fh or croak $!;
|
|
|
|
exec qw/git-hash-object -w --stdin/ or croak $!;
|
|
|
|
}
|
|
|
|
chomp($hash = do { local $/; <$out> });
|
|
|
|
close $out or croak $!;
|
|
|
|
close $fh or croak $!;
|
|
|
|
$hash =~ /^[a-f\d]{40}$/ or die "not a sha1: $hash\n";
|
|
|
|
close $fb->{base} or croak $!;
|
|
|
|
} else {
|
|
|
|
$hash = $fb->{blob} or die "no blob information\n";
|
|
|
|
}
|
|
|
|
$fb->{pool}->clear;
|
|
|
|
my $gui = $self->{gui};
|
|
|
|
print $gui "$fb->{mode_b} $hash\t$path\0" or croak $!;
|
2006-11-28 11:50:17 +01:00
|
|
|
print "\t$fb->{action}\t$path\n" if $fb->{action} && ! $self->{q};
|
2006-11-28 06:44:48 +01:00
|
|
|
undef;
|
|
|
|
}
|
|
|
|
|
|
|
|
sub abort_edit {
|
|
|
|
my $self = shift;
|
2006-12-15 19:59:54 +01:00
|
|
|
eval { command_close_pipe($self->{gui}, $self->{ctx}) };
|
2006-11-28 06:44:48 +01:00
|
|
|
$self->SUPER::abort_edit(@_);
|
|
|
|
}
|
|
|
|
|
|
|
|
sub close_edit {
|
|
|
|
my $self = shift;
|
2006-12-15 19:59:54 +01:00
|
|
|
command_close_pipe($self->{gui}, $self->{ctx});
|
2006-11-28 23:06:05 +01:00
|
|
|
$self->{git_commit_ok} = 1;
|
2006-11-28 06:44:48 +01:00
|
|
|
$self->SUPER::close_edit(@_);
|
|
|
|
}
|
2006-06-16 11:55:13 +02:00
|
|
|
|
git-svn: add support for Perl SVN::* libraries
This means we no longer have to deal with having bloated SVN
working copies around and we get a nice performance increase as
well because we don't have to exec the SVN binary and start a
new server connection each time.
Of course we have to manually manage memory with SVN::Pool
whenever we can, and hack around cases where SVN just eats
memory despite pools (I blame Perl, too). I would like to
keep memory usage as stable as possible during long fetch/commit
processes since I still use computers with only 256-512M RAM.
commit should always be faster with the SVN library code. The
SVN::Delta interface is leaky (or I'm not using it with pools
correctly), so I'm forking on every commit, but that doesn't
seem to hurt performance too much (at least on normal Unix/Linux
systems where fork() is pretty cheap).
fetch should be faster in most common cases, but probably not all.
fetches will be faster where client/server delta generation is
the bottleneck and not bandwidth. Of course, full-files are
generated server-side via deltas, too. Full files are always
transferred when they're updated, just like git-svnimport and
unlike command-line svn. I'm also hacking around memory leaks
(see comments) here by using some more forks.
I've tested fetch with http://, https://, file://, and svn://
repositories, so we should be reasonably covered in terms of
error handling for fetching.
Of course, we'll keep plain command-line svn compatibility as a
fallback for people running SVN 1.1 (I'm looking into library
support for 1.1.x SVN, too). If you want to force command-line
SVN usage, set GIT_SVN_NO_LIB=1 in your environment.
We also require two simultaneous connections (just like
git-svnimport), but this shouldn't be a problem for most
servers.
Less important commands:
show-ignore is slower because it requires repository
access, but -r/--revision <num> can be specified.
graft-branches may use more memory, but it's a
short-term process and is funky-filename-safe.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
2006-06-13 00:23:48 +02:00
|
|
|
package SVN::Git::Editor;
|
|
|
|
use vars qw/@ISA/;
|
|
|
|
use strict;
|
|
|
|
use warnings;
|
|
|
|
use Carp qw/croak/;
|
|
|
|
use IO::File;
|
2006-12-15 19:59:54 +01:00
|
|
|
use Git qw/command command_oneline command_noisy
|
|
|
|
command_output_pipe command_input_pipe command_close_pipe/;
|
2006-06-13 00:23:48 +02:00
|
|
|
|
|
|
|
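# SVN::Git::Editor::new: commit-side editor.  Copies svn_path, the commit
# being sent ($self->{c}), the base revision and the RA session from the
# git-svn object, opens the root baton, and keeps a pool plus the set of
# directories that may need removing for the life of the edit.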
sub new {
|
|
|
|
my $class = shift;
|
|
|
|
my $git_svn = shift;
|
|
|
|
my $self = SVN::Delta::Editor->new(@_);
|
|
|
|
bless $self, $class;
|
|
|
|
foreach (qw/svn_path c r ra /) {
|
|
|
|
die "$_ required!\n" unless (defined $git_svn->{$_});
|
|
|
|
$self->{$_} = $git_svn->{$_};
|
|
|
|
}
|
|
|
|
$self->{pool} = SVN::Pool->new;
|
|
|
|
$self->{bat} = { '' => $self->open_root($self->{r}, $self->{pool}) };
|
|
|
|
$self->{rm} = { };
|
|
|
|
require Digest::MD5;
|
|
|
|
return $self;
|
|
|
|
}
|
|
|
|
|
|
|
|
sub split_path {
|
|
|
|
return ($_[0] =~ m#^(.*?)/?([^/]+)$#);
|
|
|
|
}
|
|
|
|
|
|
|
|
sub repo_path {
|
2006-11-25 07:38:17 +01:00
|
|
|
(defined $_[1] && length $_[1]) ? $_[1] : ''
|
2006-06-13 00:23:48 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
sub url_path {
|
|
|
|
my ($self, $path) = @_;
|
|
|
|
$self->{ra}->{url} . '/' . $self->repo_path($path);
|
|
|
|
}
|
|
|
|
|
|
|
|
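# rmdirs: every parent directory of a deleted path starts out as a removal
# candidate in $self->{rm}; git ls-tree -r output for the commit then drops
# any directory that still contains a file, and whatever is left is
# delete_entry'd, deepest paths first.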
sub rmdirs {
|
2006-06-28 04:39:14 +02:00
|
|
|
my ($self, $q) = @_;
|
2006-06-13 00:23:48 +02:00
|
|
|
my $rm = $self->{rm};
|
|
|
|
delete $rm->{''}; # we never delete the url we're tracking
|
|
|
|
return unless %$rm;
|
|
|
|
|
|
|
|
foreach (keys %$rm) {
|
|
|
|
my @d = split m#/#, $_;
|
|
|
|
my $c = shift @d;
|
|
|
|
$rm->{$c} = 1;
|
|
|
|
while (@d) {
|
|
|
|
$c .= '/' . shift @d;
|
|
|
|
$rm->{$c} = 1;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
delete $rm->{$self->{svn_path}};
|
|
|
|
delete $rm->{''}; # we never delete the url we're tracking
|
|
|
|
return unless %$rm;
|
|
|
|
|
2006-12-15 19:59:54 +01:00
|
|
|
my ($fh, $ctx) = command_output_pipe(
|
|
|
|
qw/ls-tree --name-only -r -z/, $self->{c});
|
2006-06-13 00:23:48 +02:00
|
|
|
local $/ = "\0";
|
|
|
|
while (<$fh>) {
|
|
|
|
chomp;
|
2006-11-25 07:38:17 +01:00
|
|
|
my @dn = split m#/#, $_;
|
2006-06-20 02:59:35 +02:00
|
|
|
while (pop @dn) {
|
|
|
|
delete $rm->{join '/', @dn};
|
|
|
|
}
|
|
|
|
unless (%$rm) {
|
2006-12-15 19:59:54 +01:00
|
|
|
eval { command_close_pipe($fh) };
|
2006-06-20 02:59:35 +02:00
|
|
|
return;
|
|
|
|
}
|
2006-06-13 00:23:48 +02:00
|
|
|
}
|
2006-12-15 19:59:54 +01:00
|
|
|
command_close_pipe($fh, $ctx);
|
2006-06-20 02:59:35 +02:00
|
|
|
|
2006-06-13 00:23:48 +02:00
|
|
|
my ($r, $p, $bat) = ($self->{r}, $self->{pool}, $self->{bat});
|
|
|
|
foreach my $d (sort { $b =~ tr#/#/# <=> $a =~ tr#/#/# } keys %$rm) {
|
|
|
|
$self->close_directory($bat->{$d}, $p);
|
|
|
|
my ($dn) = ($d =~ m#^(.*?)/?(?:[^/]+)$#);
|
2006-06-28 04:39:14 +02:00
|
|
|
print "\tD+\t/$d/\n" unless $q;
|
2006-06-13 00:23:48 +02:00
|
|
|
$self->SUPER::delete_entry($d, $r, $bat->{$dn}, $p);
|
|
|
|
delete $bat->{$d};
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
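# open_or_add_dir: check_path() the directory against the base revision;
# open it if it already exists, add it if it does not, and bail out if the
# path exists but is not a directory.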
sub open_or_add_dir {
|
|
|
|
my ($self, $full_path, $baton) = @_;
|
|
|
|
my $p = SVN::Pool->new;
|
|
|
|
my $t = $self->{ra}->check_path($full_path, $self->{r}, $p);
|
|
|
|
$p->clear;
|
|
|
|
if ($t == $SVN::Node::none) {
|
|
|
|
return $self->add_directory($full_path, $baton,
|
|
|
|
undef, -1, $self->{pool});
|
|
|
|
} elsif ($t == $SVN::Node::dir) {
|
|
|
|
return $self->open_directory($full_path, $baton,
|
|
|
|
$self->{r}, $self->{pool});
|
|
|
|
}
|
|
|
|
print STDERR "$full_path already exists in repository at ",
|
|
|
|
"r$self->{r} and it is not a directory (",
|
|
|
|
($t == $SVN::Node::file ? 'file' : 'unknown'),"/$t)\n";
|
|
|
|
exit 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
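# ensure_path: walk a repository path component by component, opening or
# adding each directory baton as needed and caching it in $self->{bat} so
# later operations under the same directory can reuse it.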
sub ensure_path {
|
|
|
|
my ($self, $path) = @_;
|
|
|
|
my $bat = $self->{bat};
|
|
|
|
$path = $self->repo_path($path);
|
|
|
|
return $bat->{''} unless (length $path);
|
|
|
|
my @p = split m#/+#, $path;
|
|
|
|
my $c = shift @p;
|
|
|
|
$bat->{$c} ||= $self->open_or_add_dir($c, $bat->{''});
|
|
|
|
while (@p) {
|
|
|
|
my $c0 = $c;
|
|
|
|
$c .= '/' . shift @p;
|
|
|
|
$bat->{$c} ||= $self->open_or_add_dir($c, $bat->{$c0});
|
|
|
|
}
|
|
|
|
return $bat->{$c};
|
|
|
|
}
|
|
|
|
|
|
|
|
sub A {
|
2006-06-28 04:39:14 +02:00
|
|
|
my ($self, $m, $q) = @_;
|
2006-06-13 00:23:48 +02:00
|
|
|
my ($dir, $file) = split_path($m->{file_b});
|
|
|
|
my $pbat = $self->ensure_path($dir);
|
|
|
|
my $fbat = $self->add_file($self->repo_path($m->{file_b}), $pbat,
|
|
|
|
undef, -1);
|
2006-06-28 04:39:14 +02:00
|
|
|
print "\tA\t$m->{file_b}\n" unless $q;
|
2006-06-13 00:23:48 +02:00
|
|
|
$self->chg_file($fbat, $m);
|
|
|
|
$self->close_file($fbat,undef,$self->{pool});
|
|
|
|
}
|
|
|
|
|
|
|
|
sub C {
|
2006-06-28 04:39:14 +02:00
|
|
|
my ($self, $m, $q) = @_;
|
2006-06-13 00:23:48 +02:00
|
|
|
my ($dir, $file) = split_path($m->{file_b});
|
|
|
|
my $pbat = $self->ensure_path($dir);
|
|
|
|
my $fbat = $self->add_file($self->repo_path($m->{file_b}), $pbat,
|
|
|
|
$self->url_path($m->{file_a}), $self->{r});
|
2006-06-28 04:39:14 +02:00
|
|
|
print "\tC\t$m->{file_a} => $m->{file_b}\n" unless $q;
|
2006-06-13 00:23:48 +02:00
|
|
|
$self->chg_file($fbat, $m);
|
|
|
|
$self->close_file($fbat,undef,$self->{pool});
|
|
|
|
}
|
|
|
|
|
|
|
|
sub delete_entry {
|
|
|
|
my ($self, $path, $pbat) = @_;
|
|
|
|
my $rpath = $self->repo_path($path);
|
|
|
|
my ($dir, $file) = split_path($rpath);
|
|
|
|
$self->{rm}->{$dir} = 1;
|
|
|
|
$self->SUPER::delete_entry($rpath, $self->{r}, $pbat, $self->{pool});
|
|
|
|
}
|
|
|
|
|
|
|
|
sub R {
|
2006-06-28 04:39:14 +02:00
|
|
|
my ($self, $m, $q) = @_;
|
2006-06-13 00:23:48 +02:00
|
|
|
my ($dir, $file) = split_path($m->{file_b});
|
|
|
|
my $pbat = $self->ensure_path($dir);
|
|
|
|
my $fbat = $self->add_file($self->repo_path($m->{file_b}), $pbat,
|
|
|
|
$self->url_path($m->{file_a}), $self->{r});
|
2006-06-28 04:39:14 +02:00
|
|
|
print "\tR\t$m->{file_a} => $m->{file_b}\n" unless $q;
|
2006-06-13 00:23:48 +02:00
|
|
|
$self->chg_file($fbat, $m);
|
|
|
|
$self->close_file($fbat,undef,$self->{pool});
|
|
|
|
|
|
|
|
($dir, $file) = split_path($m->{file_a});
|
|
|
|
$pbat = $self->ensure_path($dir);
|
|
|
|
$self->delete_entry($m->{file_a}, $pbat);
|
|
|
|
}
|
|
|
|
|
|
|
|
sub M {
|
2006-06-28 04:39:14 +02:00
|
|
|
my ($self, $m, $q) = @_;
|
2006-06-13 00:23:48 +02:00
|
|
|
my ($dir, $file) = split_path($m->{file_b});
|
|
|
|
my $pbat = $self->ensure_path($dir);
|
|
|
|
my $fbat = $self->open_file($self->repo_path($m->{file_b}),
|
|
|
|
$pbat,$self->{r},$self->{pool});
|
2006-06-28 04:39:14 +02:00
|
|
|
print "\t$m->{chg}\t$m->{file_b}\n" unless $q;
|
2006-06-13 00:23:48 +02:00
|
|
|
$self->chg_file($fbat, $m);
|
|
|
|
$self->close_file($fbat,undef,$self->{pool});
|
|
|
|
}
|
|
|
|
|
|
|
|
sub T { shift->M(@_) }
|
|
|
|
|
|
|
|
sub change_file_prop {
|
|
|
|
my ($self, $fbat, $pname, $pval) = @_;
|
|
|
|
$self->SUPER::change_file_prop($fbat, $pname, $pval, $self->{pool});
|
|
|
|
}
|
|
|
|
|
|
|
|
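# chg_file: sync one modified path.  Toggle svn:executable/svn:special to
# match the git mode change, stream the blob from a forked git-cat-file
# into a tmpfile ('link '-prefixed for symlinks), MD5 it, and send it with
# SVN::TxDelta::send_stream(), checking the returned checksum against ours.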
sub chg_file {
|
|
|
|
my ($self, $fbat, $m) = @_;
|
|
|
|
if ($m->{mode_b} =~ /755$/ && $m->{mode_a} !~ /755$/) {
|
|
|
|
$self->change_file_prop($fbat,'svn:executable','*');
|
|
|
|
} elsif ($m->{mode_b} !~ /755$/ && $m->{mode_a} =~ /755$/) {
|
|
|
|
$self->change_file_prop($fbat,'svn:executable',undef);
|
|
|
|
}
|
|
|
|
my $fh = IO::File->new_tmpfile or croak $!;
|
|
|
|
if ($m->{mode_b} =~ /^120/) {
|
|
|
|
print $fh 'link ' or croak $!;
|
|
|
|
$self->change_file_prop($fbat,'svn:special','*');
|
|
|
|
} elsif ($m->{mode_a} =~ /^120/ && $m->{mode_b} !~ /^120/) {
|
|
|
|
$self->change_file_prop($fbat,'svn:special',undef);
|
|
|
|
}
|
|
|
|
defined(my $pid = fork) or croak $!;
|
|
|
|
if (!$pid) {
|
|
|
|
open STDOUT, '>&', $fh or croak $!;
|
|
|
|
exec qw/git-cat-file blob/, $m->{sha1_b} or croak $!;
|
|
|
|
}
|
|
|
|
waitpid $pid, 0;
|
|
|
|
croak $? if $?;
|
|
|
|
$fh->flush == 0 or croak $!;
|
|
|
|
seek $fh, 0, 0 or croak $!;
|
|
|
|
|
|
|
|
my $md5 = Digest::MD5->new;
|
|
|
|
$md5->addfile($fh) or croak $!;
|
|
|
|
seek $fh, 0, 0 or croak $!;
|
|
|
|
|
|
|
|
my $exp = $md5->hexdigest;
|
2006-10-15 00:48:35 +02:00
|
|
|
my $pool = SVN::Pool->new;
|
|
|
|
my $atd = $self->apply_textdelta($fbat, undef, $pool);
|
|
|
|
my $got = SVN::TxDelta::send_stream($fh, @$atd, $pool);
|
2006-06-13 00:23:48 +02:00
|
|
|
die "Checksum mismatch\nexpected: $exp\ngot: $got\n" if ($got ne $exp);
|
2006-10-15 00:48:35 +02:00
|
|
|
$pool->clear;
|
2006-06-13 00:23:48 +02:00
|
|
|
|
|
|
|
close $fh or croak $!;
|
|
|
|
}
|
|
|
|
|
|
|
|
sub D {
|
2006-06-28 04:39:14 +02:00
|
|
|
my ($self, $m, $q) = @_;
|
2006-06-13 00:23:48 +02:00
|
|
|
my ($dir, $file) = split_path($m->{file_b});
|
|
|
|
my $pbat = $self->ensure_path($dir);
|
2006-06-28 04:39:14 +02:00
|
|
|
print "\tD\t$m->{file_b}\n" unless $q;
|
2006-06-13 00:23:48 +02:00
|
|
|
$self->delete_entry($m->{file_b}, $pbat);
|
|
|
|
}
|
|
|
|
|
|
|
|
sub close_edit {
|
|
|
|
my ($self) = @_;
|
|
|
|
my ($p,$bat) = ($self->{pool}, $self->{bat});
|
|
|
|
foreach (sort { $b =~ tr#/#/# <=> $a =~ tr#/#/# } keys %$bat) {
|
|
|
|
$self->close_directory($bat->{$_}, $p);
|
|
|
|
}
|
|
|
|
$self->SUPER::close_edit($p);
|
|
|
|
$p->clear;
|
|
|
|
}
|
|
|
|
|
|
|
|
sub abort_edit {
|
|
|
|
my ($self) = @_;
|
|
|
|
$self->SUPER::abort_edit($self->{pool});
|
|
|
|
$self->{pool}->clear;
|
|
|
|
}
|
|
|
|
|
2006-02-16 10:24:16 +01:00
|
|
|
__END__
|
|
|
|
|
|
|
|
Data structures:
|
|
|
|
|
2006-12-16 08:58:07 +01:00
|
|
|
$log_msg hashref as returned by libsvn_log_entry()
|
2006-02-16 10:24:16 +01:00
|
|
|
{
|
|
|
|
msg => 'whitespace-formatted log entry
|
|
|
|
', # trailing newline is preserved
|
|
|
|
revision => '8', # integer
|
|
|
|
date => '2004-02-24T17:01:44.108345Z', # commit date
|
|
|
|
author => 'committer name'
|
|
|
|
};
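For illustration only: example_author_line below is a hypothetical helper,
not part of this script.  It shows one way such a hashref could be turned
into a git-style ident line, assuming the ISO-8601 date format shown above.

	sub example_author_line {
		my ($log_msg) = @_;
		# "2004-02-24T17:01:44.108345Z" -> epoch seconds, UTC
		my ($Y, $m, $d, $H, $M, $S) = ($log_msg->{date} =~
			/^(\d{4})-(\d\d)-(\d\d)T(\d\d):(\d\d):(\d\d)/);
		require Time::Local;
		my $epoch = Time::Local::timegm($S, $M, $H, $d, $m - 1, $Y);
		return "$log_msg->{author} <$log_msg->{author}\@example.invalid> " .
		       "$epoch +0000";
	}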
|
|
|
|
|
|
|
|
@mods = array of diff-index line hashes, each element represents one line
|
|
|
|
of diff-index output
|
|
|
|
|
|
|
|
diff-index line ($m hash)
|
|
|
|
{
|
|
|
|
mode_a => first column of diff-index output, no leading ':',
|
|
|
|
mode_b => second column of diff-index output,
|
|
|
|
sha1_b => sha1sum of the final blob,
|
2006-03-03 10:20:07 +01:00
|
|
|
chg => change type [MCRADT],
|
2006-02-16 10:24:16 +01:00
|
|
|
file_a => original file name of a file (iff chg is 'C' or 'R')
|
|
|
|
file_b => new/current file name of a file (any chg)
|
|
|
|
}
|
|
|
|
;
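Illustration only: example_parse_diff_index_line is a hypothetical helper,
not part of this script.  It sketches how one textual (non -z) diff-index
line maps onto the $m hash described above.

	sub example_parse_diff_index_line {
		my ($line) = @_;
		chomp $line;
		my ($meta, @files) = split /\t/, $line;
		my ($mode_a, $mode_b, $sha1_a, $sha1_b, $chg) = ($meta =~
		    /^:(\d{6}) (\d{6}) ([a-f\d]{40}) ([a-f\d]{40}) ([MCRADT])\d*$/)
			or return;
		my %m = (mode_a => $mode_a, mode_b => $mode_b,
		         sha1_b => $sha1_b, chg => $chg);
		if ($chg eq 'C' || $chg eq 'R') {
			@m{qw/file_a file_b/} = @files; # old name, then new name
		} else {
			$m{file_b} = $files[0];
		}
		return \%m;
	}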
|
2006-06-13 00:23:48 +02:00
|
|
|
|
2006-06-28 04:39:13 +02:00
|
|
|
# retval of read_url_paths{,_all}();
|
|
|
|
$l_map = {
|
|
|
|
# repository root url
|
|
|
|
'https://svn.musicpd.org' => {
|
|
|
|
# repository path # GIT_SVN_ID
|
|
|
|
'mpd/trunk' => 'trunk',
|
|
|
|
'mpd/tags/0.11.5' => 'tags/0.11.5',
|
|
|
|
},
|
|
|
|
}
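Illustration only (hypothetical loop, assuming an $l_map shaped as above);
the nesting is repository root first, then repository path => GIT_SVN_ID:

	foreach my $root (sort keys %$l_map) {
		foreach my $path (sort keys %{ $l_map->{$root} }) {
			print "$root/$path\t", $l_map->{$root}->{$path}, "\n";
		}
	}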
|
|
|
|
|
2006-06-13 00:23:48 +02:00
|
|
|
Notes:
|
|
|
|
I don't trust the each() function unless I created %hash myself,
|
|
|
|
because the internal iterator may not have started at the beginning.
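For illustration (%some_hash is a placeholder, not part of this script):

	# sidestep each() entirely, as the code above does:
	foreach my $k (keys %some_hash) {
		print "$k => $some_hash{$k}\n";
	}

	# or, if each() must be used, reset the internal iterator first:
	keys %some_hash;
	while (my ($k, $v) = each %some_hash) {
		print "$k => $v\n";
	}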
|