On Mon, Aug 07 2023 at 14:18, Peter Zijlstra wrote:
>  /**
>   * futex_hash - Return the hash bucket in the global hash
>   * @key:	Pointer to the futex key for which the hash is calculated
> @@ -114,10 +137,29 @@ late_initcall(fail_futex_debugfs);
>   */
>  struct futex_hash_bucket *futex_hash(union futex_key *key)
>  {
> -	u32 hash = jhash2((u32 *)key, offsetof(typeof(*key), both.offset) / 4,
> +	u32 hash = jhash2((u32 *)key,
> +			  offsetof(typeof(*key), both.offset) / sizeof(u32),
>  			  key->both.offset);
> +	int node = key->both.node;
> +
> +	if (node == -1) {

NUMA_NO_NODE please all over the place.

> +		/*
> +		 * In case of !FLAGS_NUMA, use some unused hash bits to pick a
> +		 * node -- this ensures regular futexes are interleaved across
> +		 * the nodes and avoids having to allocate multiple
> +		 * hash-tables.
> +		 *
> +		 * NOTE: this isn't perfectly uniform, but it is fast and
> +		 * handles sparse node masks.
> +		 */
> +		node = (hash >> futex_hashshift) % nr_node_ids;
> +		if (!node_possible(node)) {
> +			node = find_next_bit_wrap(node_possible_map.bits,
> +						  nr_node_ids, node);
> +		}

Smart.

>
> +static inline unsigned int futex_size(unsigned int flags)
> +{
> +	return 1 << (flags & FLAGS_SIZE_MASK);
> +}
> +
>  static inline bool futex_flags_valid(unsigned int flags)

If you reorder these two functions in the patch which introduces them,
this diff gets readable :)

Aside of that this thing is really hard to review :)
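
FWIW, for anyone following along, below is a tiny standalone user space mock
(not kernel code, untested against the series) of the node selection above:
take the unused upper hash bits, reduce modulo nr_node_ids and, if that node
is not possible, wrap around to the next set bit in the possible-node mask.
FUTEX_HASHSHIFT, NR_NODE_IDS and the sparse mask are made-up values purely to
exercise the logic; next_possible_node_wrap() stands in for find_next_bit_wrap().

#include <stdio.h>
#include <stdint.h>

#define FUTEX_HASHSHIFT	8		/* stand-in for futex_hashshift */
#define NR_NODE_IDS	8		/* stand-in for nr_node_ids */

/* Sparse possible-node mask: only nodes 0, 2 and 5 exist */
static const unsigned long node_possible_mask = (1UL << 0) | (1UL << 2) | (1UL << 5);

static int node_possible(int node)
{
	return !!(node_possible_mask & (1UL << node));
}

/* Wrap-around search for the next possible node, mimicking find_next_bit_wrap() */
static int next_possible_node_wrap(int start)
{
	for (int i = 0; i < NR_NODE_IDS; i++) {
		int node = (start + i) % NR_NODE_IDS;

		if (node_possible(node))
			return node;
	}
	return -1;
}

static int pick_node(uint32_t hash)
{
	/* Use hash bits above the hash-table index to interleave across nodes */
	int node = (hash >> FUTEX_HASHSHIFT) % NR_NODE_IDS;

	if (!node_possible(node))
		node = next_possible_node_wrap(node);
	return node;
}

int main(void)
{
	/* A few arbitrary hash values to show the interleaving */
	uint32_t hashes[] = { 0x00000100, 0x00000300, 0x00000700, 0xdeadbeef };

	for (unsigned int i = 0; i < sizeof(hashes) / sizeof(hashes[0]); i++)
		printf("hash %#010x -> node %d\n", hashes[i], pick_node(hashes[i]));

	return 0;
}

As said above, the not-possible fallback should obviously compare against
NUMA_NO_NODE style constants in the real code rather than open coded -1.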