On Tue, Mar 29, 2022 at 12:36 PM Trond Myklebust <trondmy@xxxxxxxxxxxxxxx> wrote:
>
> - Readdir fixes to improve cacheability of large directories when there
>   are multiple readers and writers.

So I only took a look at this part now.

I've obviously already pulled it, but that use of 'xxhash()' just made
me go "Whaaa?"

It's claimed that it's used because of its extreme performance, but
the performance numbers come from using it as a block hash. That's
not what nfs does.

The nfs code just does

        xxhash(&cookie, sizeof(cookie), 0) & NFS_READDIR_COOKIE_MASK;

where that "cookie" is just a 64-bit entity. And then it masks off
everything but 18 bits.

Is that *really* appropriate use of a new hash function? Why is this
not just doing

        #include <hash.h>

        hash_64(cookie, 18);

which is a lot more obvious than xxhash().

If there really are some serious problems with the perfectly standard
hash() functionality, I think you should document them.

Because just randomly picking xxhash() without explaining _why_ you
can't just use the same simple thing we use elsewhere is very odd.

Or rather, when the only documentation is "performance", then I think
the regular "hash_64()" is the obvious and trivial choice.

              Linus
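
[Editorial note: a minimal sketch of the two approaches being compared,
for reference. The helper names and the 18-bit mask definition below are
made up for illustration; the only interfaces assumed are the kernel's
hash_64() from <linux/hash.h> and xxhash() from <linux/xxhash.h>.]

        #include <linux/types.h>    /* u64 */
        #include <linux/hash.h>     /* hash_64() */
        #include <linux/xxhash.h>   /* xxhash() */

        /* Illustrative 18-bit mask, standing in for NFS_READDIR_COOKIE_MASK. */
        #define EXAMPLE_COOKIE_MASK  ((1U << 18) - 1)

        /* What the NFS code does: feed the 8-byte cookie through xxhash(),
         * then throw away everything but the low 18 bits of the result. */
        static unsigned int cookie_to_index_xxhash(u64 cookie)
        {
                return xxhash(&cookie, sizeof(cookie), 0) & EXAMPLE_COOKIE_MASK;
        }

        /* The suggested alternative: hash_64() takes the number of output
         * bits directly, so no separate mask is needed. */
        static unsigned int cookie_to_index_hash64(u64 cookie)
        {
                return hash_64(cookie, 18);
        }

[The point of the comparison is that hash_64() already folds a 64-bit
value down to the requested number of bits, which is exactly the
operation the NFS code performs with xxhash() plus a mask.]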