On 9/17/2017 5:51 PM, Junio C Hamano wrote:
Derrick Stolee <dstolee@xxxxxxxxxxxxx> writes:
+int cmd_main(int ac, const char **av)
+{
+ setup_git_directory();
As far as I recall, we do not (yet) allow declaration after
statement in our codebase. Move this down to make it after all
decls.
Will fix.
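Concretely, the fix just moves the setup_git_directory() call below the
declarations; a sketch of the reordered opening, based only on the hunks
quoted here:

	int cmd_main(int ac, const char **av)
	{
		unsigned int hash_delt = 0x13579BDF;
		unsigned int hash_base = 0x01020304;
		struct object_id oid;
		int i, count = 0;

		setup_git_directory();
		...
	}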
+
+ unsigned int hash_delt = 0x13579BDF;
+ unsigned int hash_base = 0x01020304;
+ struct object_id oid;
+
+ int i, count = 0;
+ int n = sizeof(struct object_id) / sizeof(int);
It probably is technically OK to assume sizeof(int) always equals
sizeof(unsigned), but because you use 'n' _only_ to work with uint
and never with int, it would make more sense to match.
Will fix. I also notice that "n" should be const.
But I do not think we want this "clever" optimization that involves
'n' in the first place.
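For illustration only, not what the patch does: one way to drop 'n'
entirely is to fill the hash a byte at a time from the seed, using the
existing GIT_SHA1_RAWSZ constant. The byte order differs from the
word-store version, which is irrelevant for a deterministic test.

	/* illustrative alternative; no sizeof(unsigned) assumption */
	for (i = 0; i < GIT_SHA1_RAWSZ; i++)
		oid.hash[i] = (hash_base >> (8 * (i % 4))) & 0xff;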
+ while (count++ < 100000) {
+ for (i = 0; i < n; i++)
+ ((unsigned int*)oid.hash)[i] = hash_base;
Does it make sense to assume that uint is always 4-byte (so this
code won't work if it is 8-byte on your platform) and doing this is
faster than using platform-optimized memcpy()?
I'm not sure what you mean by using memcpy to improve this, because
it would require calling memcpy in the inner loop, such as
	for (i = 0; i < n; i++)
		memcpy(oid.hash + i * sizeof(unsigned), &hash_base,
		       sizeof(unsigned));
I'm probably misunderstanding your intended use of memcpy().
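One guess at the intent, sketched here only as a hedged reading and
assuming a 20-byte hash with a 4-byte unsigned int, is to seed the
first word and let memcpy() replicate it:

	/* hedged guess at the suggestion, not from the patch */
	memcpy(oid.hash, &hash_base, 4);    /* bytes 0-3   */
	memcpy(oid.hash + 4, oid.hash, 4);  /* bytes 4-7   */
	memcpy(oid.hash + 8, oid.hash, 8);  /* bytes 8-15  */
	memcpy(oid.hash + 16, oid.hash, 4); /* bytes 16-19 */

This produces the same bytes as the cast-and-store loop while leaving
the copying to the platform memcpy().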
+ find_unique_abbrev(oid.hash, MINIMUM_ABBREV);
+
+ hash_base += hash_delt;
+ }
I also wonder if this is measuring the right thing. I am guessing
that by making many queries for a unique abbreviation of "random"
(well, it is deterministic, but my point is these queries are not
based on the names of objects that exist in the repository) hashes,
this test measures how much time it takes for us to decide that such
a random hash does not exist. In the real life use, we make the
opposite query far more frequently: we have an object that we _know_
exists in the repository and we try to find a sufficient length to
disambiguate it from others, and I suspect that these two use
different logic. Don't you need to be measuring the time it takes
to compute the shortest abbreviation of an object that exists
instead?
First, this doesn't just measure the time it takes to determine
non-existence, because it finds the len required to disambiguate an
"incoming" hash from all known hashes. When testing, I put in a
simple printf to report the result abbreviation so I could see how
often it needed to be extended. In this sense, the test exposes the
while loop that is removed by PATCH 2/3.
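The instrumentation was nothing fancier than printing each result,
roughly:

	/* temporary debugging aid, not part of the patch */
	printf("%s\n", find_unique_abbrev(oid.hash, MINIMUM_ABBREV));

so the abbreviation lengths were easy to eyeball.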
Second, your criticism about extant hashes is valid, and one I
struggled to reconcile. I see two issues with testing known hashes:
1. By determining that the hash exists, we have inspected the file that
   contains it (either a .idx or the loose-object directory). This has
   the side effect of warming the file cache, so the looped method is
   artificially fast at finding matching objects. The effect is
   particularly significant on a repo with exactly one packfile.
2. If we iterate over the entire set of objects, this test takes
   O(N*t(N)) time, where t(N) is the average time to compute a
   minimum abbreviation. For large repos, this can take several
   minutes. Instead, even with the constant 100,000 test hashes, we
   have an O(t(N)) test. We could avoid the asymptotic growth by
   limiting the number of existing hashes we use, but how do we
   find a sufficiently uniform sample from them? (One standard
   technique is sketched below.)
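One standard technique for that, mentioned only as a possibility and
not something this series implements, is reservoir sampling: a single
pass keeps a fixed-size uniform sample without knowing the object count
up front. A rough sketch with a hypothetical callback name (oidcpy() is
an existing helper):

	#define SAMPLE_SIZE 100000

	static struct object_id sample[SAMPLE_SIZE];
	static int seen;

	/*
	 * Hypothetical callback, invoked once per existing object.
	 * After the full pass, each object remains in sample[] with
	 * equal probability (classic "Algorithm R").
	 */
	static void consider_object(const struct object_id *oid)
	{
		if (seen < SAMPLE_SIZE) {
			oidcpy(&sample[seen], oid);
		} else {
			int j = rand() % (seen + 1); /* crude, fine for a perf test */
			if (j < SAMPLE_SIZE)
				oidcpy(&sample[j], oid);
		}
		seen++;
	}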
By looking at some other perf tests, I see that we can add a
prerequisite action. I will investigate making another helper that
uniformly selects a set of hashes from the repo and writes them
to stdout in a random order. p0008-abbrev.sh will run the helper as
a prerequisite, saving the data to a file. test-abbrev will read
the hashes from stdin to test find_unique_abbrev. This should avoid
the side-effects I mentioned (test-abbrev will not warm up indexes)
while also testing abbreviation lengths for existing objects.
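A rough sketch of that stdin-driven loop, with the caveat that the
actual helper is not written yet; strbuf_getline() and get_oid_hex()
are existing helpers:

	struct strbuf line = STRBUF_INIT;

	/* one hex object name per line on stdin */
	while (strbuf_getline(&line, stdin) != EOF) {
		struct object_id oid;

		if (get_oid_hex(line.buf, &oid))
			die("invalid hash '%s'", line.buf);
		find_unique_abbrev(oid.hash, MINIMUM_ABBREV);
	}
	strbuf_release(&line);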
I'll get started on these changes and send a new patch with new perf
data in a couple days. Please let me know if there is a better way
to measure performance for this change.
Thanks,
-Stolee