The prior commit converted many sites to check the return
value of write_in_full() for negativity, rather than a
mismatch with the input length. This patch covers similar
cases, but where the return value is stored in an
intermediate variable. These should get the same treatment,
but they need to be reviewed more carefully since it would
be a bug if the return value is stored in an unsigned type
(which, indeed, it is in one of the cases).
Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The return value of write_in_full() is either "-1", or the
requested number of bytes[1]. If we make a partial write
before seeing an error, we still return -1, not a partial
value. This goes back to f6aa66cb95 (write_in_full: really
write in full or return error on disk full., 2007-01-11).
So checking anything except "was the return value negative"
is pointless. And there are a couple of reasons not to do
so:
1. It can do a funny signed/unsigned comparison. If your
"len" is unsigned (e.g., a size_t) then the compiler will
promote the "-1" to its unsigned variant.
This works out for "!= len" (unless you really were
trying to write the maximum size_t bytes), but is a
bug if you check "< len" (an example of which was fixed
recently in config.c).
We should avoid promoting the mental model that you
need to check the length at all, so that new sites are
not tempted to copy us.
2. Checking for a negative value is shorter to type,
especially when the length is an expression.
3. Linus says so. In d34cf19b89 (Clean up write_in_full()
users, 2007-01-11), right after the write_in_full()
semantics were changed, he wrote:
I really wish every "write_in_full()" user would just
check against "<0" now, but this fixes the nasty and
stupid ones.
Appeals to authority aside, this makes it clear that
writing it this way does not have an intentional
benefit. It's a historical curiosity that we never
bothered to clean up (and which was undoubtedly
cargo-culted into new sites).
So let's convert these obviously-correct cases (this
includes write_str_in_full(), which is just a wrapper for
write_in_full()).
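To make the conversion concrete, here is a minimal sketch of the
before/after pattern (the variable and path names are hypothetical,
not taken from any particular call site):

    /* Before: compare against the requested length. */
    if (write_in_full(fd, buf, len) != len)
        return error_errno("could not write '%s'", path);

    /* After: a negative return value is the only failure mode. */
    if (write_in_full(fd, buf, len) < 0)
        return error_errno("could not write '%s'", path);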
[1] A careful reader may notice there is one way that
write_in_full() can return a different value. If we ask
write() to write N bytes and get a return value that is
_larger_ than N, we could return a larger total. But
besides the fact that this would imply a totally broken
version of write(), it would already invoke undefined
behavior. Our internal remaining counter is an unsigned
size_t, which means that subtracting too many bytes will
wrap it around to a very large number. So we'll instantly
begin reading off the end of the buffer, trying to write
gigabytes (or petabytes) of data.
Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Bugfix for a topic that is (only) in 'master'.
* mh/packed-ref-store-prep:
for_each_bisect_ref(): don't trim refnames
lock_packed_refs(): fix cache validity check
A common pattern to free a piece of memory and assign NULL to the
pointer that used to point at it has been replaced with a new
FREE_AND_NULL() macro.
* ab/free-and-null:
*.[ch] refactoring: make use of the FREE_AND_NULL() macro
coccinelle: make use of the "expression" FREE_AND_NULL() rule
coccinelle: add a rule to make "expression" code use FREE_AND_NULL()
coccinelle: make use of the "type" FREE_AND_NULL() rule
coccinelle: add a rule to make "type" code use FREE_AND_NULL()
git-compat-util: add a FREE_AND_NULL() wrapper around free(ptr); ptr = NULL
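For reference, the wrapper described above boils down to the usual
statement-like macro idiom; a minimal sketch of its shape (the
in-tree definition may differ in detail, and `entry->value` below is
a made-up example):

    #define FREE_AND_NULL(p) do { free(p); (p) = NULL; } while (0)

    /* so that call sites like this ... */
    free(entry->value);
    entry->value = NULL;

    /* ... collapse into a single statement: */
    FREE_AND_NULL(entry->value);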
Fix configuration codepath to pay proper attention to commondir
that is used in multi-worktree situation, and isolate config API
into its own header file.
* bw/config-h:
config: don't implicitly use gitdir or commondir
config: respect commondir
setup: teach discover_git_directory to respect the commondir
config: don't include config.h by default
config: remove git_config_iter
config: create config.h
Replace occurrences of `free(ptr); ptr = NULL` which weren't caught by
the coccinelle rule. These fall into two categories:
- free()/NULL assignments one after the other, which coccinelle would
put all on one line; that is functionally equivalent code, but very ugly.
- manually spotted occurrences where the NULL assignment isn't right
after the free() call.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Apply the result of the just-added coccinelle rule. This manually
excludes a few occurrences, mostly things that resulted in many
FREE_AND_NULL() on one line, that'll be manually fixed in a subsequent
change.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Stop including config.h by default in cache.h. Instead only include
config.h in those files which require use of the config system.
Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit 28ed9830b1 (get_packed_ref_cache(): assume "packed-refs" won't
change while locked, 2017-05-22) assumes that the "packed-refs" file
cannot change while we hold the lock. That assumption is
justified *if* the lock has been held the whole time since the
"packed-refs" file was last read.
But in `lock_packed_refs()`, we ourselves lock the "packed-refs" file
and then call `get_packed_ref_cache()` to ensure that the cache agrees
with the file. The intent is to guard against the possibility that
another process changed the "packed-refs" file the moment before we
locked it.
This check was defeated because `get_packed_ref_cache()` saw that the
file was locked, and therefore didn't do the `stat_validity_check()`
that we want.
The mistake was compounded with a misleading comment in
`lock_packed_refs()` claiming that it was doing the right thing. That
comment came from an earlier draft of the mh/packed-ref-store-prep
patch series when the commits were in a different order.
So instead:
* Extract a function `validate_packed_ref_cache()` that does the
validity check independent of whether the lock is held.
* Change `get_packed_ref_cache()` to call the new function, but only
if the lock *isn't* held.
* Change `lock_packed_refs()` to call the new function in any case
before calling `get_packed_ref_cache()`.
* Fix the comment in `lock_packed_refs()`.
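In outline, the extracted helper looks something like the following
rough sketch (simplified names and structure, not the exact code):

    /* Rough sketch: discard the cache if the file changed on disk. */
    static void validate_packed_ref_cache(struct files_ref_store *refs)
    {
        if (refs->packed &&
            !stat_validity_check(&refs->packed->validity,
                                 files_packed_refs_path(refs)))
            clear_packed_ref_cache(refs);
    }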
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When iterating over references, reference priming is used to make sure
that loose references are read into the ref-cache before packed
references, to avoid races. It used to be that the prefix passed to
reference iterators almost always ended in `/`, for example
`refs/heads/`. In that case, the priming code would read all loose
references under `find_containing_dir("refs/heads/")`, which is
"refs/heads/". That's just what we want.
But now that `ref-filter` knows how to pass refname prefixes to
`for_each_fullref_in()`, the prefix might come from user input; for
example,
git for-each-ref refs/heads
Since the argument doesn't include a trailing slash, the reference
iteration code would prime all of the loose references under
`find_containing_dir("refs/heads")`, which is "refs/". Thus we would
unnecessarily read tags, remote-tracking references, etc., when the
user is only interested in branches.
It is a bit awkward to get around this problem. We can't just append a
slash to the argument, because we don't know ab initio whether an
argument like `refs/tags/release` corresponds to a single tag or to a
directory containing tags.
Moreover, until now a `prefix_ref_iterator` was used to make the final
decision about which references fall within the prefix (the
`cache_ref_iterator` only did a rough cut). This is also inefficient,
because the `prefix_ref_iterator` can't know, for example, that while
you are in a subdirectory that is completely within the prefix, you
don't have to do the prefix check.
So:
* Move the responsibility for doing the prefix check directly to
`cache_ref_iterator`. This means that `cache_ref_iterator_begin()`
never has to wrap its return value in a `prefix_ref_iterator`.
* Teach `cache_ref_iterator_begin()` (and `prime_ref_dir()`) to be
stricter about what they iterate over and what directories they
prime.
* Teach `cache_ref_iterator` to keep track of whether the current
`cache_ref_iterator_level` is fully within the prefix. If so, skip
the prefix checks entirely.
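Conceptually, the per-entry test that can now often be skipped looks
something like this (the field names below are hypothetical, purely
for illustration):

    /* Illustration only: skip the string comparison once a level is
     * known to lie entirely within the requested prefix. */
    if (!level->entirely_within_prefix &&
        !starts_with(entry->name, iter->prefix))
        continue; /* entry falls outside the prefix */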
The main benefit of these optimizations is for loose references, since
packed references are always read all at once.
Note that after this change, `prefix_ref_iterator` is only ever used
for its trimming feature and not for its "prefix" feature. But I'm not
ripping out the latter yet, because it might be useful for another
patch series that I'm working on.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Only one caller was using it, so move the check to that caller.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Instead of handling `GIT_REF_PARANOIA` in
`files_ref_iterator_begin()`, handle it in
`refs_ref_iterator_begin()`, where it will cover all reference stores.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The old code ignored any errors encountered when trying to fopen the
"packed-refs" file, treating all such failures as if the file didn't
exist. But it could be that there is some other error opening the
file (e.g., permissions problems), and we don't want to silently
ignore such problems. So report any failures that are not due to
ENOENT.
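The resulting pattern is the standard errno check (a minimal sketch;
the path variable and the wording of the message are illustrative):

    FILE *f = fopen(packed_refs_file, "r");
    if (!f) {
        if (errno != ENOENT)
            /* e.g. EACCES: a real problem, so say so */
            die_errno("unable to open '%s'", packed_refs_file);
        /* a missing "packed-refs" file is perfectly normal */
    }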
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Teach `read_packed_refs()` to also
* Allocate and initialize the new `packed_ref_cache`
* Open and close the `packed-refs` file
* Update the `validity` field of the new object
This decreases the coupling between `packed_ref_cache` and
`files_ref_store` by a little bit.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If we've got the "packed-refs" file locked, then it can't change;
there's no need to keep calling `stat_validity_check()` on it.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Extract a function for deciding whether a reference should be packed.
It is a self-contained bit of logic, so splitting it out improves
readability.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the future, compound reference stores will sometimes need to modify
references in two different reference stores at the same time, meaning
that a single logical reference transaction might have to be
implemented as two internal sub-transactions. They won't want to call
`ref_transaction_commit()` for the two sub-transactions one after the
other, because that wouldn't be atomic (the first commit could succeed
and the second one fail). Instead, they will want to prepare both
sub-transactions (i.e., obtain any necessary locks and do any
pre-checks), and only if both prepare steps succeed, then commit both
sub-transactions.
Start preparing for that day by adding a new, optional
`ref_transaction_prepare()` step to the reference transaction
sequence, which obtains the locks and does any prechecks, reporting
any errors that occur. Also add a `ref_transaction_abort()` function
that can be used to abort a sub-transaction even if it has already
been prepared.
That is on the side of the public-facing API. On the side of the
`ref_store` VTABLE, get rid of `transaction_commit` and instead add
methods `transaction_prepare`, `transaction_finish`, and
`transaction_abort`. A `ref_transaction_commit()` now basically calls
methods `transaction_prepare` then `transaction_finish`.
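A compound store could then drive two sub-transactions roughly as
follows (a sketch of the intended calling sequence; `sub1`, `sub2`,
and the error handling are illustrative only):

    struct strbuf err = STRBUF_INIT;

    /* Take locks and run pre-checks for both sub-transactions. */
    if (ref_transaction_prepare(sub1, &err))
        die("%s", err.buf);
    if (ref_transaction_prepare(sub2, &err)) {
        ref_transaction_abort(sub1, &err); /* roll back the prepared one */
        die("%s", err.buf);
    }

    /* Only with both prepared do we commit either of them. */
    if (ref_transaction_commit(sub1, &err) ||
        ref_transaction_commit(sub2, &err))
        die("%s", err.buf);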
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Move the check that `transaction->state` is valid from
`files_transaction_commit()` to `ref_transaction_commit()`, where
other future reference backends can benefit from it as well.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Extract the cleanup functionality from `files_transaction_commit()`
into a new function. It will soon have another caller.
Use the common cleanup code even on early exit if the transaction is
empty, to reduce code duplication.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Instead of using a global `lock_file` instance for the main
"packed-refs" file and using a pointer in `files_ref_store` to keep
track of whether it is locked, embed the `lock_file` instance directly
in the `files_ref_store` struct and use the new
`is_lock_file_locked()` function to keep track of whether it is
locked. This keeps related data together and makes the main reference
store less of a special case.
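Schematically, the new arrangement is as follows (a rough sketch; the
real struct has many more members, and the helper shown is purely
illustrative):

    struct files_ref_store {
        /* ...many other members... */
        struct lock_file packed_refs_lock; /* embedded, not a pointer */
    };

    /* Illustrative helper: the lock's own state answers the question. */
    static int packed_refs_are_locked(struct files_ref_store *refs)
    {
        return is_lock_file_locked(&refs->packed_refs_lock);
    }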
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Move the `lock` member from `packed_ref_cache` to `files_ref_store`,
since at most one cache can have a locked "packed-refs" file
associated with it. Rename it to `packed_refs_lock` to make its
purpose clearer in its new home. More changes are coming here shortly.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Just because the files backend can't retain reflogs for deleted
references is no reason that they shouldn't be supported by the
virtual method interface. Also, `delete_ref()` and `refs_delete_ref()`
have already gained `msg` parameters. Now let's add them to
`delete_refs()` and `refs_delete_refs()`.
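The updated declarations then look roughly like this (a sketch;
parameter order and flag types are approximate):

    int refs_delete_refs(struct ref_store *refs, const char *msg,
                         struct string_list *refnames, unsigned int flags);
    int delete_refs(const char *msg, struct string_list *refnames,
                    unsigned int flags);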
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Eliminate any chance of integer overflow on platforms where the two
types have different sizes.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The `trim` parameter can be set independently of `prefix`. So if some
caller were to set `trim` to be greater than `strlen(prefix)`, we
could end up pointing the `refname` field of the iterator past the NUL
of the actual reference name string.
That can't happen currently, because `trim` is always set either to
zero or to `strlen(prefix)`. But even the latter could lead to
confusion, if a refname is exactly equal to the prefix, because then
we would set the outgoing `refname` to the empty string.
And we're about to decouple the `prefix` and `trim` arguments even
more, so let's be cautious here. Report a bug if ever asked to trim a
reference whose name is not longer than `trim`.
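The guard amounts to something like the following (sketch; the exact
wording of the message differs):

    /* In the iterator's advance step, before trimming `trim` bytes: */
    if (strlen(refname) <= trim)
        die("BUG: attempt to trim too many characters from a refname");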
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The former is by far more common in our codebase.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The iterator returned by this function only includes references whose
names start with the whole prefix, not all of those in
`find_containing_dir(prefix)` as the old docstring claimed. This
docstring was probably copy-pasted from old ref-cache code, which had
the old specification. But now, `cache_ref_iterator_begin()`
(from which the files reference iterator gets its values)
automatically wraps its output using `prefix_ref_iterator_begin()`
when necessary, so it has the stricter behavior.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* bc/object-id: (53 commits)
object: convert parse_object* to take struct object_id
tree: convert parse_tree_indirect to struct object_id
sequencer: convert do_recursive_merge to struct object_id
diff-lib: convert do_diff_cache to struct object_id
builtin/ls-tree: convert to struct object_id
merge: convert checkout_fast_forward to struct object_id
sequencer: convert fast_forward_to to struct object_id
builtin/ls-files: convert overlay_tree_on_cache to object_id
builtin/read-tree: convert to struct object_id
sha1_name: convert internals of peel_onion to object_id
upload-pack: convert remaining parse_object callers to object_id
revision: convert remaining parse_object callers to object_id
revision: rename add_pending_sha1 to add_pending_oid
http-push: convert process_ls_object and descendants to object_id
refs/files-backend: convert many internals to struct object_id
refs: convert struct ref_update to use struct object_id
ref-filter: convert some static functions to struct object_id
Convert struct ref_array_item to struct object_id
Convert the verify_pack callback to struct object_id
Convert lookup_tag to struct object_id
...
Some platforms have ulong that is smaller than time_t, and our
historical use of ulong for timestamp would mean they cannot
represent some timestamp that the platform allows. Invent a
separate and dedicated timestamp_t (so that we can distinguish
timestamps from vanilla ulongs, which alone is already a good
move), and then declare uintmax_t to be the type used as
timestamp_t.
* js/larger-timestamps:
archive-tar: fix a sparse 'constant too large' warning
use uintmax_t for timestamps
date.c: abort if the system time cannot handle one of our timestamps
timestamp_t: a new data type for timestamps
PRItime: introduce a new "printf format" for timestamps
parse_timestamp(): specify explicitly where we parse timestamps
t0006 & t5000: skip "far in the future" test when time_t is too limited
t0006 & t5000: prepare for 64-bit timestamps
ref-filter: avoid using `unsigned long` for catch-all data type
"git gc" did not interact well with "git worktree"-managed
per-worktree refs.
* nd/worktree-kill-parse-ref:
refs: kill set_worktree_head_symref()
worktree.c: kill parse_ref() in favor of refs_resolve_ref_unsafe()
refs: introduce get_worktree_ref_store()
refs: add REFS_STORE_ALL_CAPS
refs.c: make submodule ref store hashmap generic
environment.c: fix potential segfault by get_git_common_dir()
Convert many of the internals of the files backend to use struct
object_id. Avoid converting public APIs (except one change to
refs/ref-cache.c) to limit the scope of the changes.
Convert one use of get_sha1_hex to parse_oid_hex, and rely on the fact
that a strbuf will be NUL-terminated and that parse_oid_hex will fail on
truncated input to avoid the need to check the length.
This is a requirement to convert parse_object later on.
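The pattern relies on parse_oid_hex() consuming exactly one object ID
and failing cleanly on short input, so a separate length check becomes
unnecessary. A minimal sketch, with `buf` standing in for the strbuf
mentioned above:

    struct object_id oid;
    const char *p;

    /* buf.buf is NUL-terminated, so truncated input simply fails here. */
    if (parse_oid_hex(buf.buf, &oid, &p))
        return -1; /* malformed or truncated object ID */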
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Convert struct ref_update to use struct object_id by changing the
definition and applying the following semantic patch, plus the standard
object_id transforms:
@@
struct ref_update E1;
@@
- E1.new_sha1
+ E1.new_oid.hash

@@
struct ref_update *E1;
@@
- E1->new_sha1
+ E1->new_oid.hash

@@
struct ref_update E1;
@@
- E1.old_sha1
+ E1.old_oid.hash

@@
struct ref_update *E1;
@@
- E1->old_sha1
+ E1->old_oid.hash
This transformation allows us to convert write_ref_to_lockfile, which is
required to convert parse_object.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Adjust the callback functions to take struct object_id * instead of
unsigned char *, and modify related static functions accordingly.
Introduce a temporary object_id instance into files_reflog_expire and
copy the SHA-1 value passed in. This is necessary because the sha1
parameter can come indirectly from get_sha1. Without the temporary, it
would require much more refactoring to be able to convert this function.
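The temporary is just a local copy of the incoming hash (sketch;
variable names are illustrative):

    struct object_id oid;

    /* `sha1` may point at storage owned by get_sha1(); copy it first. */
    hashcpy(oid.hash, sha1);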
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Git's source code assumes that unsigned long is at least as precise as
time_t. Which is incorrect, and causes a lot of problems, in particular
where unsigned long is only 32-bit (notably on Windows, even in 64-bit
versions).
So let's just use a more appropriate data type instead. In preparation
for this, we introduce the new `timestamp_t` data type.
By necessity, this is a very, very large patch, as it has to replace all
timestamps' data type in one go.
As we will use a data type that is not necessarily identical to `time_t`,
we need to be very careful to use `time_t` whenever we interact with the
system functions, and `timestamp_t` everywhere else.
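Schematically, the split looks like this (a sketch; the initial
typedef is widened to uintmax_t later in the series):

    #include <time.h>

    typedef unsigned long timestamp_t; /* initial alias; widened later */

    void show_date_sketch(timestamp_t author_date)
    {
        /* cross the boundary to system functions via time_t only */
        time_t t = (time_t)author_date;
        struct tm *tm = gmtime(&t);
        (void)tm; /* ... format and print ... */
    }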
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The internals of the refs API around the cached refs has been
streamlined.
* mh/separate-ref-cache:
do_for_each_entry_in_dir(): delete function
files_pack_refs(): use reference iteration
commit_packed_refs(): use reference iteration
cache_ref_iterator_begin(): make function smarter
get_loose_ref_cache(): new function
get_loose_ref_dir(): function renamed from get_loose_refs()
do_for_each_entry_in_dir(): eliminate `offset` argument
refs: handle "refs/bisect/" in `loose_fill_ref_dir()`
ref-cache: use a callback function to fill the cache
refs: record the ref_store in ref_cache, not ref_dir
ref-cache: introduce a new type, ref_cache
refs: split `ref_cache` code into separate files
ref-cache: rename `remove_entry()` to `remove_entry_from_dir()`
ref-cache: rename `find_ref()` to `find_ref_entry()`
ref-cache: rename `add_ref()` to `add_ref_entry()`
refs_verify_refname_available(): use function in more places
refs_verify_refname_available(): implement once for all backends
refs_ref_iterator_begin(): new function
refs_read_raw_ref(): new function
get_ref_dir(): don't call read_loose_refs() for "refs/bisect"
70999e9cec (branch -m: update all per-worktree HEADs - 2016-03-27)
added this function in order to update HEADs of all relevant
worktrees, when a branch is renamed.
As a public refs API function, it breaks the abstraction by using
internal functions of the files backend. With the introduction of
refs_create_symref(), we can move back pretty close to the code before
70999e9cec, where create_symref() was used for updating HEAD.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Various small fixes.
* rs/misc-cppcheck-fixes:
server-info: avoid calling fclose(3) twice in update_info_file()
files_for_each_reflog_ent_reverse(): close stream and free strbuf on error
am: close stream on error, but not stdin
Currently, Git's source code treats all timestamps as if they were
unsigned longs. Therefore, it is okay to write "%lu" when printing them.
There is a substantial problem with that, though: at least on Windows,
time_t is *larger* than unsigned long, and hence we will want to switch
away from the ill-specified `unsigned long` data type.
So let's introduce the pseudo format "PRItime" (currently simply being
defined to "lu") to make it easier to change the data type used for
timestamps.
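A minimal sketch of the idea; once the underlying type changes, only
the macro definition needs to follow:

    #include <stdio.h>

    #define PRItime "lu" /* timestamps are still unsigned long for now */

    void print_timestamp(unsigned long timestamp)
    {
        printf("author time: %" PRItime "\n", timestamp);
    }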
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Currently, Git's source code represents all timestamps as `unsigned
long`. In preparation for using a more appropriate data type, let's
introduce a symbol `parse_timestamp` (currently being defined to
`strtoul`) where appropriate, so that we can later easily switch to,
say, use `strtoull()` instead.
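A minimal sketch matching the definition described above; the later
switch to strtoull()/strtoumax() then touches only this single line:

    #include <stdlib.h>

    #define parse_timestamp strtoul /* to be swapped out later */

    unsigned long example_parse(const char *buf)
    {
        char *end;
        return parse_timestamp(buf, &end, 10);
    }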
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Exit the loop in an orderly way through the cleanup code, instead of
dashing out with logfp still open and sb leaking.
Found with Cppcheck.
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Reviewed-by: Jeff King <peff@peff.net>
Reviewed-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Use reference iteration rather than `do_for_each_entry_in_dir()` in
the definition of `files_pack_refs()`. This makes the code shorter and
easier to follow, because the logic can be inline rather than spread
between the main function and a callback function, and it removes the
need to use `pack_refs_cb_data` to preserve intermediate state.
This removes the last callers of `entry_resolves_to_object()` and
`get_loose_ref_dir()`, so delete those functions.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Use reference iteration rather than do_for_each_entry_in_dir() in the
definition of commit_packed_refs().
Note that an internal consistency check that was previously done in
`write_packed_entry_fn()` is not there anymore. This is actually an
improvement:
The old error message was emitted when there is an entry in the
packed-ref cache that is not `REF_KNOWS_PEELED`, and when we attempted
to peel the reference, the result was `PEEL_INVALID`,
`PEEL_IS_SYMREF`, or `PEEL_BROKEN`. Since a packed ref cannot be a
symref, `PEEL_IS_SYMREF` and `PEEL_BROKEN` can be ruled out. So we're
left with `PEEL_INVALID`.
An entry without `REF_KNOWS_PEELED` can get into the packed-refs cache
in the following two ways:
* The reference was read from a `packed-refs` file that didn't have
the `fully-peeled` attribute. In that case, we *don't want* to emit
an error, because the broken value is presumably a stale value of
the reference that is now masked by a loose version of the same
reference (which we just don't happen to be packing this time). This
is a perfectly legitimate situation and doesn't indicate that the
repository is corrupt. The old code incorrectly emits an error
message in this case. (It was probably never reported as a bug
because this scenario is rare.)
* The reference was a loose reference that was just added to the
packed ref cache by `files_pack_refs()` via
`pack_if_possible_fn()` in preparation for being packed. The latter
function refuses to pack a reference for which
`entry_resolves_to_object()` returns false, and otherwise calls
`peel_entry()` itself and checks the return value. So an entry added
this way should always have `REF_KNOWS_PEELED` and shouldn't trigger
the error message in either the old code or the new.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Change `cache_ref_iterator_begin()` to take two new arguments:
* `prefix` -- to iterate only over references with the specified
prefix.
* `prime_dir` -- to "prime" (i.e., pre-load) the cache before starting
the iteration.
The new functionality makes it possible for
`files_ref_iterator_begin()` to be made more ignorant of the internals
of `ref_cache`, and `find_containing_dir()` and `prime_ref_dir()` to
be made private.
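The resulting interface is roughly (a sketch; exact parameter names
may differ):

    struct ref_iterator *cache_ref_iterator_begin(struct ref_cache *cache,
                                                  const char *prefix,
                                                  int prime_dir);

    /* e.g., iterate over loose branches, priming the cache first: */
    struct ref_iterator *iter =
        cache_ref_iterator_begin(get_loose_ref_cache(refs), "refs/heads/", 1);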
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Extract a new function, `get_loose_ref_cache()`, from
get_loose_ref_dir(). The function returns the `ref_cache` for the
loose refs of a `files_ref_store`.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The new name is more analogous to `get_packed_ref_dir()`.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
That "refs/bisect/" has to be handled specially when filling the
ref_cache for loose references is a peculiarity of the files backend,
and the ref-cache code shouldn't need to know about it. So move this
code to the callback function, `loose_fill_ref_dir()`.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>