Re: [PATCH v3 3/5] sha1_name: Unroll len loop in find_unique_abbrev_r

On 10/3/2017 6:49 AM, Junio C Hamano wrote:
> Derrick Stolee <dstolee@xxxxxxxxxxxxx> writes:
>
>> p0008.1: find_unique_abbrev() for existing objects
>> --------------------------------------------------
>>
>> For 10 repeated tests, each checking 100,000 known objects, we find the
>> following results when running in a Linux VM:
>>
>> |       | Pack  | Packed  | Loose   | Base   | New    |         |
>> | Repo  | Files | Objects | Objects | Time   | Time   | Rel%    |
>> |-------|-------|---------|---------|--------|--------|---------|
>> | Git   |     1 |  230078 |       0 | 0.09 s | 0.06 s | - 33.3% |
>> | Git   |     5 |  230162 |       0 | 0.11 s | 0.08 s | - 27.3% |
>> | Git   |     4 |  154310 |   75852 | 0.09 s | 0.07 s | - 22.2% |
>> | Linux |     1 | 5606645 |       0 | 0.12 s | 0.32 s | +146.2% |
>> | Linux |    24 | 5606645 |       0 | 1.12 s | 1.12 s | -  0.9% |
>> | Linux |    23 | 5283204 |  323441 | 1.08 s | 1.05 s | -  2.8% |
>> | VSTS  |     1 | 4355923 |       0 | 0.12 s | 0.23 s | + 91.7% |
>> | VSTS  |    32 | 4355923 |       0 | 1.02 s | 1.08 s | +  5.9% |
>> | VSTS  |    31 | 4276829 |   79094 | 2.25 s | 2.08 s | -  7.6% |
>
> The above does not look so good. Especially in cases where a
> repository is well maintained by packing into a smaller number of
> packs, we get a much worse result?
I understand that this patch on its own does not have good numbers. I split
patches 3 and 4 specifically to highlight two distinct changes:

Patch 3: Unroll the len loop that may inspect all files multiple times
         (the idea is sketched below).
Patch 4: Parse less while disambiguating.

Patch 4 more than makes up for the performance hits in this patch.
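
To make that concrete, here is a minimal standalone sketch of the unrolled
idea: visit each nearby object name once, track the longest hex prefix it
shares with the target, and the unique abbreviation is one digit longer than
the longest collision. All names below are illustrative, not the actual
patch code:

    #include <stdio.h>

    #define HEXSZ 40

    /*
     * Update the shortest-unique-abbreviation length for "target"
     * after seeing one neighboring object name.
     */
    static void extend_abbrev_len(const char *neighbor, const char *target,
                                  unsigned int *cur_len)
    {
        unsigned int i = 0;

        /* Length of the hex prefix this neighbor shares with the target. */
        while (i < HEXSZ && neighbor[i] == target[i])
            i++;

        /* One digit more than the longest shared prefix stays unique. */
        if (i < HEXSZ && i + 1 > *cur_len)
            *cur_len = i + 1;
    }

    int main(void)
    {
        const char *target = "1234567890abcdef1234567890abcdef12345678";
        const char *neighbors[] = {
            "1234560000000000000000000000000000000000",
            "1234567890000000000000000000000000000000",
            "abcdef0000000000000000000000000000000000",
        };
        unsigned int len = 4; /* configured minimum abbreviation length */
        int i;

        for (i = 0; i < 3; i++)
            extend_abbrev_len(neighbors[i], target, &len);

        /* Prints "1234567890a": each neighbor is visited exactly once. */
        printf("unique abbreviation: %.*s\n", (int)len, target);
        return 0;
    }

The point of the unrolling is that each pack index and the loose-object
directory are examined once, rather than once per candidate length.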

>> p0008.2: find_unique_abbrev() for missing objects
>> -------------------------------------------------
>>
>> For 10 repeated tests, each checking 100,000 missing objects, we find
>> the following results when running in a Linux VM:
>>
>> |       | Pack  | Packed  | Loose   | Base   | New    |        |
>> | Repo  | Files | Objects | Objects | Time   | Time   | Rel%   |
>> |-------|-------|---------|---------|--------|--------|--------|
>> | Git   |     1 |  230078 |       0 | 0.66 s | 0.08 s | -87.9% |
>> | Git   |     5 |  230162 |       0 | 0.90 s | 0.13 s | -85.6% |
>> | Git   |     4 |  154310 |   75852 | 0.79 s | 0.10 s | -87.3% |
>> | Linux |     1 | 5606645 |       0 | 0.48 s | 0.32 s | -33.3% |
>> | Linux |    24 | 5606645 |       0 | 4.41 s | 1.09 s | -75.3% |
>> | Linux |    23 | 5283204 |  323441 | 4.11 s | 0.99 s | -75.9% |
>> | VSTS  |     1 | 4355923 |       0 | 0.46 s | 0.25 s | -45.7% |
>> | VSTS  |    32 | 4355923 |       0 | 5.40 s | 1.15 s | -78.7% |
>> | VSTS  |    31 | 4276829 |   79094 | 5.88 s | 1.18 s | -79.9% |
>
> The question is whether this is even measuring a relevant workload.
> How often would we have a full 40-hex object name for which we
> actually do not have the object, and ask for its name to be
> abbreviated?
>
> Compared to it, the result from p0008.1 feels a lot more important.
> We know we make tons of "abbreviate the object name for this object
> we have" requests, and we see them every day in our "git log -p"
> output.
>
> Seeing a much more impressive result from p0008.2 than p0008.1 makes
> me unsure if this patch is optimizing for the right case.
>
> I'll have to look at the code a bit more deeply before I can comment
> on it.
>
> Thanks.
I agree that p0008.1 is more important. Still, p0008.2 exposes an important
cost in the previous implementation. The line

    exists = has_sha1_file(sha1);

inspects all packfiles and scans the full loose-object directory that could
contain the object. The only reason for this call is to decide how to
interpret the result of get_short_oid(), yet it accounts for a significant
portion of the time gained here.
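
For readers without the code at hand, the pre-patch loop had roughly this
shape (paraphrased, identifiers approximate, not a verbatim quote; the
default-length handling is elided):

    int find_unique_abbrev_r(char *hex, const unsigned char *sha1, int len)
    {
        int status, exists;

        sha1_to_hex_r(hex, sha1);

        /*
         * Scans every pack index and the loose-object directory,
         * only so the loop below knows which get_short_sha1()
         * status means "this abbreviation is unique".
         */
        exists = has_sha1_file(sha1);

        while (len < GIT_SHA1_HEXSZ) {
            unsigned char sha1_ret[20];
            status = get_short_sha1(hex, len, sha1_ret, GET_SHA1_QUIETLY);
            if (exists ? !status : status == SHORT_NAME_NOT_FOUND) {
                hex[len] = 0;
                return len;
            }
            len++; /* retry, rescanning everything at len + 1 */
        }
        return len;
    }

For a missing object, "exists" is false, and every iteration rescans all
packs until get_short_sha1() stops finding matches, which is why p0008.2
improves so dramatically once both the up-front existence check and the
per-length rescans are gone.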

Thanks,
-Stolee


