[PATCH] files-backend: cheapen refname_available check when locking refs

When locking references in preparation for updating them, we need to
check that none of the newly added references D/F conflict with
existing references (e.g., we don't allow `refs/foo` to be added if
`refs/foo/bar` already exists, or vice versa).
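
("D/F" is directory/file: a loose ref `refs/foo` is a file, while
`refs/foo/bar` would need `refs/foo` to be a directory.) As an
illustration only -- this helper is not from git or from this patch --
the rule boils down to one refname being a "directory prefix" of the
other:

    #include <stdio.h>
    #include <string.h>

    /* Illustrative check: do two refnames D/F-conflict? */
    static int df_conflict(const char *a, const char *b)
    {
        size_t la = strlen(a), lb = strlen(b);

        if (la > lb) {
            /* make a the shorter of the two names */
            const char *tmp = a;
            size_t tmplen = la;

            a = b; b = tmp;
            la = lb; lb = tmplen;
        }
        /* conflict iff the shorter name is a path prefix of the longer */
        return lb > la && !strncmp(a, b, la) && b[la] == '/';
    }

    int main(void)
    {
        printf("%d\n", df_conflict("refs/foo", "refs/foo/bar")); /* 1: conflict */
        printf("%d\n", df_conflict("refs/foo", "refs/foobar"));  /* 0: no conflict */
        return 0;
    }

The real check additionally has to consult the loose and packed ref
stores, which is where the cost discussed below comes from.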

Prior to 524a9fdb51 (refs_verify_refname_available(): use function in
more places, 2017-04-16), conflicts with existing loose references
were checked by looking directly in the filesystem, and then conflicts
with existing packed references were checked by running
`verify_refname_available_dir()` against the packed-refs cache.

But that commit changed the final check to call
`refs_verify_refname_available()` against the *whole* files ref-store,
including both loose and packed references, with the following
comment:

> This means that those callsites now check for conflicts with all
> references rather than just packed refs, but the performance cost
> shouldn't be significant (and will be regained later).

That comment turned out to be too sanguine. User s@xxxxxxxxxxxxx
reported that fetches involving a very large number of references in
neighboring directories were slowed down by that change.

The problem is that when fetching, each reference is updated
individually, within its own reference transaction. This is done
because some reference updates might succeed even though others fail.
But every time a reference update transaction is finished,
`clear_loose_ref_cache()` is called. So when it is time to update the
next reference, part of the loose ref cache has to be repopulated for
the `refs_verify_refname_available()` call. If the references are all
in neighboring directories, then the cost of repopulating the
reference cache increases with the number of references, resulting in
O(N²) effort.
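
To make the shape of that cost concrete, here is a standalone sketch
(a model only, not git code; the ref count and the loops are purely
illustrative) of N per-ref transactions each clearing and then
repopulating the loose ref cache for one shared directory:

    #include <stdio.h>

    int main(void)
    {
        long n = 10000;   /* refs being fetched; purely illustrative */
        long reads = 0;
        long i, j;

        for (i = 0; i < n; i++) {
            /*
             * One transaction per ref: when it finishes,
             * clear_loose_ref_cache() throws the cache away, so the
             * next refs_verify_refname_available() call has to
             * re-read the neighboring refs from disk.
             */
            for (j = 0; j <= i; j++)
                reads++;  /* one cache entry repopulated */
        }
        printf("cache entries re-read: %ld (roughly n*n/2)\n", reads);
        return 0;
    }

The total is roughly N²/2 cache entries re-read for N refs, which is
why the cost grows quadratically even though each individual update is
cheap.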

The comment above also claims that the performance cost "will be
regained later". The idea was that once the packed refs had been split
out into a separate ref-store, we could again limit the
`refs_verify_refname_available()` call to the packed references only.
That is what we do now.

Signed-off-by: Michael Haggerty <mhagger@xxxxxxxxxxxx>
---
This patch applies on top of branch mh/packed-ref-store. It can also
be obtained from my fork [1] as branch "faster-refname-available-check".

I was testing this using the reporter's recipe (but fetching from a
local clone), and found the following surprising timing numbers:

b05855b5bc (before the slowdown):            22.7 s
524a9fdb51 (immediately after the slowdown): 13 minutes
4e81f1ecf1 (after this fix):                 14.5 s

The fact that the fetch is now significantly *faster* than before the
slowdown seems not to have anything to do with the reference code.

 refs/files-backend.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/refs/files-backend.c b/refs/files-backend.c
index e9b95592b6..f2a420c611 100644
--- a/refs/files-backend.c
+++ b/refs/files-backend.c
@@ -631,11 +631,11 @@ static int lock_raw_ref(struct files_ref_store *refs,
 
 		/*
 		 * If the ref did not exist and we are creating it,
-		 * make sure there is no existing ref that conflicts
-		 * with refname:
+		 * make sure there is no existing packed ref that
+		 * conflicts with refname:
 		 */
 		if (refs_verify_refname_available(
-				    &refs->base, refname,
+				    refs->packed_ref_store, refname,
 				    extras, skip, err))
 			goto error_return;
 	}
@@ -938,7 +938,7 @@ static struct ref_lock *lock_ref_sha1_basic(struct files_ref_store *refs,
 	 * our refname.
 	 */
 	if (is_null_oid(&lock->old_oid) &&
-	    refs_verify_refname_available(&refs->base, refname,
+	    refs_verify_refname_available(refs->packed_ref_store, refname,
 					  extras, skip, err)) {
 		last_errno = ENOTDIR;
 		goto error_return;
-- 
2.11.0



