On 2024/6/7 12:58, Yosry Ahmed wrote:
> On Thu, Jun 6, 2024 at 6:01 PM Takero Funaki <flintglass@xxxxxxxxx> wrote:
>>
>> On Fri, Jun 7, 2024 at 2:46 Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
>>
>>>
>>> There are a lot of magic numbers in this patch, and it seems like it's
>>> all based on theory. I don't object to making the number of zpools
>>> dynamic in some way, but unless we do it in a data-driven way where we
>>> understand the implications, I think the added complexity and
>>> inconsistency is not justified.
>>>
>>> For example, 2*CPU zpools is overkill and will cause a lot of
>>> fragmentation. We use 32 zpools right now for machines with 100s of
>>> CPUs. I know that you are keeping 32 as the limit, but why 2*CPUs if
>>> nr_cpus <= 16?
>>>
>>> Also, the limitation based on memory size assumes that zsmalloc is the
>>> only allocator used by zswap, which is unfortunately not the case.
>>>
>>> The current implementation using 32 zpools all the time is not
>>> perfect, and I did write a patch to make it at least be min(nr_cpus,
>>> 32), but it is simple and it works. Complexity should be justified.
>>>
>>
>> Thanks for your comments.
>> I agree that 2*cpu is too much. It was conservatively chosen assuming
>> 1/2 contention while all cores are accessing zswap. A much smaller
>> factor, or a non-linear scale as in your comments in the main thread,
>> would be better.
>
> Chengming is currently experimenting with fixing the lock contention
> problem in zsmalloc by making the lock more granular. Based on the
> data he finds, we may be able to just drop the multiple zpools patch
> from zswap.

I hope so, but I'm not sure yet; I will test and compare one pool with
multiple pools.

>
> I'd wait for his findings before investing more into improving this.

Ok, I will get back with code and some testing data in a few days.

Thanks.

>
>>
>> I found your patch from the main thread.
>> One point I'm afraid of: this hashing will fail if nr_zswap_zpools is
>> 1 or is not rounded to a power of 2. hash_ptr() crashes when bits is 0.
>
> Yeah that patch was just for experimenting, I did not test it well.
> Thanks for taking a look.
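
To make the hash_ptr() concern above concrete, below is a minimal sketch of
the kind of guard a dynamic zpool count would need. The nr_zpools field and
the zswap_nr_zpools()/zswap_select_zpool() helper names are hypothetical,
made up for illustration; this is not code from either patch. It assumes a
zswap_pool with a zpools[] array as in current mm/zswap.c.

/*
 * Hypothetical sketch only: the nr_zpools field and both helpers below
 * are illustrative names, not the actual zswap code.
 */
#include <linux/cpumask.h>
#include <linux/hash.h>
#include <linux/log2.h>
#include <linux/minmax.h>

#define ZSWAP_MAX_ZPOOLS	32

/* Cap the zpool count at 32 and round it down to a power of two. */
static unsigned int zswap_nr_zpools(void)
{
	unsigned int nr = min_t(unsigned int, num_possible_cpus(),
				ZSWAP_MAX_ZPOOLS);

	/* num_possible_cpus() >= 1, so the result stays >= 1. */
	return rounddown_pow_of_two(nr);
}

/* Map an entry to one of the pool's zpools. */
static struct zpool *zswap_select_zpool(struct zswap_pool *pool,
					struct zswap_entry *entry)
{
	/*
	 * ilog2(1) == 0, and hash_ptr() with bits == 0 ends up shifting
	 * by the full word width (undefined behavior), so a single zpool
	 * must bypass the hash entirely.
	 */
	if (pool->nr_zpools == 1)
		return pool->zpools[0];

	return pool->zpools[hash_ptr(entry, ilog2(pool->nr_zpools))];
}

The idea is simply that the count is kept a power of two so ilog2() yields a
valid bits argument, and the single-zpool case never reaches hash_ptr() at
all.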