Re: [PATCH v3 0/5] Optimization batch 13: partial clone optimizations for merge-ort

On 6/22/2021 4:04 AM, Elijah Newren via GitGitGadget wrote:
> This series optimizes blob downloading in merges for partial clones. It can
> apply on master. It's independent of ort-perf-batch-12.

As promised, I completed a performance evaluation of this series, as well as
ort-perf-batch-12 (and all earlier batches), applied to our microsoft/git
fork and run in one of our large monorepos with over 2 million files at HEAD.
Here are my findings.

In my comparisons, I measured the recursive merge strategy with renames
disabled against the ORT strategy (which always has renames enabled). When I
enabled renames for the recursive strategy, the partial clone logic kicked in
and started downloading many files in every case, so I dropped that
configuration from consideration.
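
For reference, the two configurations correspond roughly to invocations like
these (an illustrative sketch; 'topic' is a placeholder branch name, and the
real runs are scripted inside a larger harness):

    # Baseline: recursive strategy with rename detection disabled
    git -c merge.renames=false merge --no-commit -s recursive topic

    # Comparison: ORT strategy (rename detection always enabled)
    git merge --no-commit -s ort topic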


Experiment #1: Large RI/FI merges
---------------------------------

Most of the merge commits in this repo's history are merges of several
long-lived branches as code is merged across organizational boundaries. I
focused on the merge commits in the first-parent history, meaning these are
the merges that brought the latest changes from several areas into the
canonical version of the software.

These merges are all automated merges created by libgit2's implementation
of the recursive merge strategy. Since they are created on the server,
these will not have any merge conflicts.

They are still interesting because the sheer number of files that change can
be large. This is a pain point for the recursive merge because many index
entries need to be updated during the merge. For ORT, some of the updates
are simple because only one side changed a certain subtree (the
organizational boundaries also correspond to the directory structure in
many cases).
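
To give a sense of the setup, something along these lines (an illustrative
sketch, not the exact scripts I used) enumerates those first-parent merges
and replays each one:

    # Replay every first-parent merge commit with a given strategy
    git rev-list --first-parent --merges HEAD |
    while read m
    do
        git checkout -q "$m^1" &&
        git merge --no-commit --no-ff -s ort "$m^2" >/dev/null 2>&1
        git merge --abort 2>/dev/null  # restore a clean state for the next merge
    done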

Across the merges I tested, ORT was _always_ faster and produced results
consistent with the recursive strategy. Even more interesting was the fact
that the recursive strategy had very slow outliers while the ORT timings were
much more consistent:

     Recursive     ORT
-----------------------
MAX     34.97s    4.74s
P90     30.04s    4.50s
P75     15.35s    3.74s
P50      7.22s    3.39s
P10      3.61s    3.08s

(I'm not testing ORT with the sparse-index yet. A significant portion of this
~3-second lower bound is due to reading and writing the index file with 2
million entries. I _am_ using sparse-checkout with only the files at root,
which minimizes the time required to update the working directory
with any changed files.)
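
For anyone reproducing that setup, the root-files-only state corresponds to a
cone-mode sparse-checkout with no directories added, e.g.:

    git sparse-checkout init --cone  # populate only the files at the repository root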

For these merges, ORT is a clear win.


Experiment #2: User-created merges
----------------------------------

To find merges that might have been created by actual users, I ran
'git rev-list --grep="^Merge branch"' to get merges that had default
messages from 'git merge' runs. (The merges from Experiment #1 had other
automatically generated messages that did not appear in this search.)
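
Concretely, the search was along these lines (illustrative; the exact
revision range is omitted):

    git rev-list --merges --grep="^Merge branch" HEAD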

Here, the differences are less striking, but still valuable:

     Recursive     ORT
-----------------------
MAX     10.61s    6.27s
P75      8.81s    3.92s
P50      4.32s    3.21s
P10      3.53s    2.95s

The ORT strategy had more variance in these examples, though still not as
much as the recursive strategy. Here the variance is due to conflicting
files needing content merges, which were usually resolved automatically.

This version of the experiment provided interesting observations in a few
cases:

1. One case had the recursive merge strategy result in a root tree that
   disagreed with what the user committed, but the ORT strategy _did_ produce
   the correct resolution. Likely, this is due to the rename detection and
   resolution. The user probably had to manually resolve the merge to
   match their expected renames since we turn off merge.renames in their
   config.

2. I watched for the partial clone logic to kick in and download blobs.
   Some of these downloads were inevitable: we need the blobs to resolve
   edit/edit conflicts. In most cases none were downloaded at all, so this
   series is working as advertised. There _was_ a case where the inexact
   rename detection requested a large list of files (~2900 in three batches)
   but _then_ said "inexact rename detection was skipped due to too many
   files". This is a case that would be nice to resolve in this series. I
   will try to find exactly where in the code this is being triggered and
   report back. (See also the rename-limit note after this list.)

3. As I mentioned, I was using sparse-checkout to limit the size of the
   working directory. In one case of a conflict that could not be
   automatically resolved, the ORT strategy output this error:

   error: could not open '<X>': No such file or directory

   It seems we are looking for a file on disk without considering whether it
   has the SKIP_WORKTREE bit set in the index. I don't think this is an
   issue for this series, but it might require a follow-up on top of the
   other ORT work.
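
A note on the rename-limit message in item 2: that skip is governed by the
rename limit (merge.renameLimit, which falls back to diff.renameLimit), so
one possible workaround while the prefetch behavior is investigated is to
raise the limit, at some CPU cost. Illustrative only; the value is arbitrary:

    git config merge.renameLimit 10000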


Conclusions
-----------

I continue to be excited about the ORT strategy and will likely focus on it
in a month or so to integrate it with the sparse-index. I think we would be
interested in making the ORT strategy a new default for Scalar, but we might
really want it to respect merge.renames=false, if only so we can deploy the
settings in stages and isolate concerns (first change the strategy, then
enable renames as an independent step).


Thanks!
-Stolee


