Re: [PATCH v2] index-pack: remove fetch_if_missing=0

Junio C Hamano <gitster@xxxxxxxxx> writes:
> > Hence, use has_object() to check for the existence of an object, which
> > has the default behavior of not lazy-fetching in a partial clone. It is
> > worth mentioning that this is the only place where there is potential for
> > lazy-fetching and all other cases are properly handled, making it safe to
> > remove this global here.
> 
> This paragraph is very well explained.

It might be good if the "all other cases" were enumerated here in the
commit message (since the consequence of missing a case might be an
infinite loop of fetching).

> OK.  The comment describes the design choice we made to flip the
> fetch_if_missing flag off.  The old world-view was that we would
> notice a breakage by non-functioning index-pack when a lazy clone is
> missing objects that we need by disabling auto-fetching, and we
> instead explicitly handle any missing and necessary objects by lazy
> fetching (like "when we lack REF_DELTA bases").  It does sound like
> a conservative thing to do, compared to the opposite approach we are
> taking with this patch, i.e. we would not fail if we tried to access
> objects we do not need to, because we have lazy fetching enabled,
> and we just ended up with bloated object store nobody may notice.
> 
> To protect us from future breakage that can come from the new
> approach, it is a very good thing that you added new tests to ensure
> no unnecessary lazy fetching is done (I am not offhand sure if that
> test is sufficient, though).

I don't think the test is sufficient - I'll explain that below.

> > +test_expect_success 'index-pack does not lazy-fetch when checking for sha1 collisions' '
> > +	rm -rf server promisor-remote client repo trace &&
> > +
> > +	# setup
> > +	git init server &&
> > +	for i in 1 2 3 4
> > +	do
> > +		echo $i >server/file$i &&
> > +		git -C server add file$i &&
> > +		git -C server commit -am "Commit $i" || return 1
> > +	done &&
> > +	git -C server config --local uploadpack.allowFilter 1 &&
> > +	git -C server config --local uploadpack.allowAnySha1InWant 1 &&
> > +	HASH=$(git -C server hash-object file3) &&
> > +
> > +	git init promisor-remote &&
> > +	git -C promisor-remote fetch --keep "file://$(pwd)/server" &&
> > +
> > +	git clone --no-checkout --filter=blob:none "file://$(pwd)/server" client &&
> > +	git -C client remote set-url origin "file://$(pwd)/promisor-remote" &&
> > +	git -C client config extensions.partialClone 1 &&
> > +	git -C client config remote.origin.promisor 1 &&
> > +
> > +	git init repo &&
> > +	echo "5" >repo/file5 &&
> > +	git -C repo config --local uploadpack.allowFilter 1 &&
> > +	git -C repo config --local uploadpack.allowAnySha1InWant 1 &&

The file5 isn't committed?

> > +
> > +	# verify that no lazy-fetching is done when fetching from another repo
> > +	GIT_TRACE_PACKET="$(pwd)/trace" git -C client \
> > +					fetch --keep "file://$(pwd)/repo" main &&
> > +
> > +	! grep "want $HASH" trace
> > +'

It seems to me that this test clones a repo and then attempts to fetch
from another repo: so far, so good. But I don't think this tests what
we want: first, file5 isn't committed, so it is never fetched. And even
if it were, we only check that file3 was never fetched from
"$(pwd)/server". But file3 has nothing to do with the subsequent fetch:
we are only fetching file5. It is the hash of file5 that we are checking
for collisions, and thus it is file5 that we want to verify is not
fetched.

So I think the way to do this is to have 3 repositories like the author
is doing now (server, client, and repo), and do it as follows:
 - create "server", one commit will do
 - clone "server" into "client" (partial clone)
 - clone "server" into "another-remote" (not partial clone)
 - add a file ("new-file") to "server", commit it, and pull from "another-remote"
 - fetch from "another-remote" into "client"

This way, "client" will need to verify that the hash of "new-file" has
no collisions with any object it currently has. If there is no bug,
"new-file" will never be fetched from "server", and if there is a bug,
"new-file" will be fetched.

One problem is that if there is a bug, such a test will cause an
infinite loop (we fetch "new-file", so we want to check it for
collisions, and because of the bug, we fetch "new-file" again, which we
then check for collisions, and so on), which might be problematic for
things like CI. But we might be able to treat a timeout the same as a
test failure, so this should be OK.
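
For instance (just a sketch; this assumes a coreutils-style "timeout"
command is available in the test environment, which the test suite may
or may not guarantee), the fetch step could be bounded like:

	GIT_TRACE_PACKET="$(pwd)/trace" timeout 60 git -C client \
		fetch --keep "file://$(pwd)/another-remote" main &&

so that an infinite lazy-fetch loop turns into an ordinary test failure
(timeout exits non-zero, failing the &&-chain) instead of hanging CI.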


