Re: Why is n_buckets at least max_entries?

On 12/15/20 10:44 PM, Cong Wang wrote:
> Hello,
>
> Is there any reason why we allocate at least max_entries buckets for a
> hash map?
>
>         /* hash table size must be power of 2 */
>         htab->n_buckets = roundup_pow_of_two(htab->map.max_entries);
>
> This looks very odd to me, as I have never seen any other hash table
> implementation do this. I _guess_ we are trying to make it a perfect
> hash? But in reality no hash is perfect, so it is impossible to avoid
> collisions entirely; there is almost always a bucket that holds
> multiple elements.
>
> Or is it because other special maps (LRU?) require this? I cannot
> immediately see it from htab_map_alloc().

The hash/LRU maps are optimized for lookup speed, so __select_bucket()
makes use of the fact that n_buckets is a power of two: the bucket index
can then be computed with a single bitwise AND of the hash instead of a
modulo.
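
For reference, here is a minimal sketch of that bucket selection,
modeled on kernel/bpf/hashtab.c (the struct layout is abbreviated; only
the masking trick matters here):

        struct bucket;                  /* per-bucket list head, details omitted */

        struct bpf_htab {
                struct bucket *buckets; /* array of n_buckets entries */
                u32 n_buckets;          /* always a power of two */
                /* ... */
        };

        /* Because n_buckets is a power of two, n_buckets - 1 is an
         * all-ones bit mask, so hash & (n_buckets - 1) equals
         * hash % n_buckets but compiles to a single AND instead of
         * a division.
         */
        static inline struct bucket *__select_bucket(struct bpf_htab *htab,
                                                     u32 hash)
        {
                return &htab->buckets[hash & (htab->n_buckets - 1)];
        }

Rounding max_entries up to the next power of two in htab_map_alloc() is
what keeps this mask valid.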

Thanks,
Daniel


