Re: Name hashing function causing a perf regression

On Fri, Sep 12, 2014 at 2:25 PM, Josef Bacik <jbacik@xxxxxx> wrote:
>
> Ok, I have a good direction to take this in; so far the best results have
> been to change fold_hash() to hash_64(hash, 32) and to change d_hash to do this:
>
> static int *d_hash(int *table, unsigned long parent, unsigned int hash)
> {
>         hash += (unsigned long) parent / L1_CACHE_BYTES;
>         hash += hash_32(hash, d_hash_shift);
>         return table + (hash & d_hash_mask);
> }
>
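
For context, the fold_hash() half of the change described above folds the
64-bit running hash down to 32 bits with hash_64() instead of just adding in
the high half. A sketch, assuming the word-at-a-time fold_hash() in
fs/namei.c (illustrative only, not the exact patch being tested):

  /* hash_64() comes from <linux/hash.h>. */
  static inline unsigned int fold_hash(unsigned long hash)
  {
        /* Fold the 64-bit hash down to 32 bits, as described above. */
        return hash_64(hash, 32);
  }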

It really should be sufficient to just do

  static int *d_hash(int *table, unsigned long parent, unsigned int hash)
  {
        hash += (unsigned long) parent / L1_CACHE_BYTES;
        return table + hash_32(hash, d_hash_shift);
  }

because the hash_32() function should already reduce the hash to the number
of bits given by its second argument (d_hash_shift) and mix the bits around
sufficiently.
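
Aside: hash_32() is basically just a multiplicative hash. A minimal
userspace sketch of the idea, assuming the 32-bit golden-ratio constant used
by older kernels (illustrative only, not a copy of include/linux/hash.h):

  #include <stdint.h>

  /* Constant derived from the golden ratio; illustrative value. */
  #define GOLDEN_RATIO_PRIME_32 0x9e370001U

  static inline uint32_t sketch_hash_32(uint32_t val, unsigned int bits)
  {
        /* Multiply to smear the input bits across the whole word... */
        uint32_t hash = val * GOLDEN_RATIO_PRIME_32;

        /* ...and keep the top 'bits' bits, which are the best mixed. */
        return hash >> (32 - bits);
  }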

I'm a *bit* nervous about the cost of this all, especially on CPUs
where integer multiplies are expensive, but obviously we need to
improve on the final hashing.

Just out of interest, how good/bad does the hash look if the *only*
change you make is the above d_hash() thing (i.e. just leave the
fold_hash() thing alone)?
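
If it helps with that comparison, one rough way to eyeball the bucket
distribution (a hypothetical userspace sketch, not the instrumentation
used in this thread) is to push a pile of synthetic (hash, parent) pairs
through the final hashing step and compare the longest chain to the mean:

  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  #define SHIFT    12                     /* stand-in for d_hash_shift */
  #define NBUCKETS (1u << SHIFT)

  /* Hypothetical final hashing step under test. */
  static uint32_t bucket_of(uint32_t hash, unsigned long parent)
  {
        hash += (uint32_t)(parent / 64);  /* 64 standing in for L1_CACHE_BYTES */
        return (hash * 0x9e370001U) >> (32 - SHIFT);
  }

  int main(void)
  {
        static unsigned int counts[NBUCKETS];
        unsigned int i, max = 0;
        const unsigned int n = 1000000;

        /* Synthetic keys; real dcache data would obviously be better. */
        for (i = 0; i < n; i++)
                counts[bucket_of((uint32_t)rand(),
                                 (unsigned long)rand() << 6)]++;

        for (i = 0; i < NBUCKETS; i++)
                if (counts[i] > max)
                        max = counts[i];

        printf("mean chain %.2f, longest chain %u\n",
               (double)n / NBUCKETS, max);
        return 0;
  }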

            Linus