On 1/24/08, Andreas Ericsson <ae@xxxxxx> wrote:
> Marko Kreen wrote:
> > On 1/23/08, Andreas Ericsson <ae@xxxxxx> wrote:
> >> Marko Kreen wrote:
> >>> (not lookup3) because lookup3 easily beat all the "simple" hashes
> >>
> >> By how much? FNV beat Linus' hash by 0.01 microseconds / insertion,
> >> and 0.1 microseconds / lookup. We're talking about a case here where
> >> there will never be more lookups than insertions (unless I'm much
> >> mistaken).
> >
> > FNV is around 40% slower than lookup3 on my Intel Core CPU, on
> > 4-byte aligned input. See below for more detailed info.
>
> But the tests surely need to check for unaligned cases, as that's what
> we're likely to hash, no?
>
> >> It would have to be a significant improvement in wall-clock time
> >> on a test-case of hashing 30k strings to warrant going from 6 to 80
> >> lines of code, imo. I still believe the original dumb hash Linus
> >> wrote is "good enough".
> >
> > Well, ad-hoc dumb hashes may have horrible worst-cases that you
> > can't see with light testing. Therefore I'd still suggest some
> > better-researched dumb hash (e.g. FNV or OAT).
>
> True. FNV is used in both MySQL and PostgreSQL. I'd say it's safe to
> assume it's fairly well tested.

PostgreSQL uses lookup2...

> > Here is my raw-speed test of different hashes. Input is 4-byte
> > aligned, which should be the common case for malloc()-ed strings.
>
> Unless arena allocated, like we do in git.
>
> I'm not surprised that this test favours Jenkins' and Hsieh's.
> That's to be expected, as those benefit far more than simpler
> hashing algorithms for long strings. The overhead when trying
> shorter strings (say, between 3 and 15 chars, and not necessarily
> 4-byte aligned) sometimes makes them quite a lot slower though.

Ok, here is 0..15 chars, random alignment:

Testing: seed=34 align=0 minlen=0 maxlen=15 trycnt=2 duration=10
lookup3     : try=0: ... 69.8092 MB/s
lookup3     : try=1: ... 69.8146 MB/s
own_memcpy  : try=0: ... 66.7808 MB/s
own_memcpy  : try=1: ... 66.7814 MB/s
memcpy_hack : try=0: ... 74.0635 MB/s
memcpy_hack : try=1: ... 74.0518 MB/s
lookup2     : try=0: ... 68.6582 MB/s
lookup2     : try=1: ... 68.6634 MB/s
fnv         : try=0: ... 74.5098 MB/s
fnv         : try=1: ... 74.5283 MB/s
hsieh       : try=0: ... 71.6708 MB/s
hsieh       : try=1: ... 71.6814 MB/s
oat         : try=0: ... 74.7828 MB/s
oat         : try=1: ... 74.7716 MB/s
elf         : try=0: ... 65.2077 MB/s
elf         : try=1: ... 65.2128 MB/s

Results compared to reference:
lookup3     : 100.000 %
own_memcpy  :  95.659 %
memcpy_hack : 106.082 %
lookup2     :  98.351 %
fnv         : 106.743 %
hsieh       : 102.670 %
oat         : 107.112 %
elf         :  93.409 %

> > This also is the best case for original lookup3(); on unaligned
> > input the memcpy variants beat it easily. Input string
> > length varies randomly in range 0..64.
>
> Well, memcpy() isn't very interesting to compare against
> hashes, as they stress vastly different parts of the hardware.
> memcpy() should also perform exactly the same no matter what the
> test-data, which isn't always true for hashes.

Sorry, I meant my "simple-memcpy-based-lookup3".

> What *would* be interesting would be something along the lines
> of "duff_cpy()": ie, an unrolled loop that aligns itself and
> copies each byte to the same address each time.

How the hash fetches data from memory is _very_ relevant.

> Interesting output, but not very surprising. Do you have the code
> available somewhere?

I can put it out.

-- 
marko
-
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html