On Fri, Jun 7, 2024 at 2:46 Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
>
> There are a lot of magic numbers in this patch, and it seems like it's
> all based on theory. I don't object to making the number of zpools
> dynamic in some way, but unless we do it in a data-driven way where we
> understand the implications, I think the added complexity and
> inconsistency is not justified.
>
> For example, 2*CPU zpools is an overkill and will cause a lot of
> fragmentation. We use 32 zpools right now for machines with 100s of
> CPUs. I know that you are keeping 32 as the limit, but why 2*CPUs if
> nr_cpus <= 16?
>
> Also, the limitation based on memory size assumes that zsmalloc is the
> only allocator used by zswap, which is unfortunately not the case.
>
> The current implementation using 32 zpools all the time is not
> perfect, and I did write a patch to make it at least be min(nr_cpus,
> 32), but it is simple and it works. Complexity should be justified.
>

Thanks for your comments. I agree that 2*CPUs is too much; it was a
conservative choice assuming 1/2 contention while all cores are
accessing zswap. A much smaller factor, or the non-linear scaling you
suggested in the main thread, would be better.

I found your patch in the main thread. One concern: this hashing will
fail if nr_zswap_zpools is 1 or is not rounded up to a power of two.
hash_ptr() crashes when the bit count is 0, and ilog2() rounds down,
so a non-power-of-two count leaves some zpools unreachable.

> + return entry->pool->zpools[hash_ptr(entry, ilog2(nr_zswap_zpools))];
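
To make the concern concrete, here is a minimal sketch of one way to
handle both cases: round the count up to a power of two when it is
chosen, and skip the hash entirely when there is a single zpool. The
helper names (zswap_set_nr_zpools, zswap_entry_zpool) are hypothetical
and only meant to illustrate the idea; nr_zswap_zpools and the
entry->pool->zpools array are assumed to be the ones from your patch,
with the array allocated using the rounded-up count.

	#include <linux/hash.h>
	#include <linux/log2.h>
	#include <linux/minmax.h>

	static unsigned int nr_zswap_zpools;	/* assumed set once at pool creation */

	/*
	 * Round the requested count up to a power of two (e.g. 24 -> 32) so
	 * that hash_ptr() with ilog2(nr_zswap_zpools) bits can reach every
	 * allocated zpool.
	 */
	static void zswap_set_nr_zpools(unsigned int requested)
	{
		nr_zswap_zpools = roundup_pow_of_two(max(requested, 1U));
	}

	static struct zpool *zswap_entry_zpool(struct zswap_entry *entry)
	{
		/*
		 * ilog2(1) == 0 and hash_ptr() with 0 bits is not valid, so a
		 * single zpool is returned directly instead of being hashed.
		 */
		if (nr_zswap_zpools == 1)
			return entry->pool->zpools[0];

		return entry->pool->zpools[hash_ptr(entry, ilog2(nr_zswap_zpools))];
	}

Rounding at creation time keeps the lookup fast path to a single
hash_ptr() call, at the cost of possibly allocating a few more zpools
than requested.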