On (24/06/06 10:49), Chengming Zhou wrote:
> > Thanks for trying this out. This is interesting, so even two zpools is
> > too much fragmentation for your use case.
> >
> > I think there are multiple ways to go forward here:
> >
> > (a) Make the number of zpools a config option, leave the default as
> > 32, but allow special use cases to set it to 1 or similar. This is
> > probably not preferable because it is not clear to users how to set
> > it, but the idea is that no one will have to set it except special use
> > cases such as Erhard's (who will want to set it to 1 in this case).
> >
> > (b) Make the number of zpools scale linearly with the number of CPUs.
> > Maybe something like nr_cpus/4 or nr_cpus/8. The problem with this
> > approach is that with a large number of CPUs, too many zpools will
> > start having diminishing returns. Fragmentation will keep increasing,
> > while the scalability/concurrency gains will diminish.
> >
> > (c) Make the number of zpools scale logarithmically with the number of
> > CPUs. Maybe something like 4log2(nr_cpus). This will keep the number
> > of zpools from increasing too much and close to the status quo. The
> > problem is that at a small number of CPUs (e.g. 2), 4log2(nr_cpus)
> > will actually give a nr_zpools > nr_cpus. So we will need to come up
> > with a more fancy magic equation (e.g. 4log2(nr_cpus/4)).
> >
> > (d) Make the number of zpools scale linearly with memory. This makes
> > more sense than scaling with CPUs because increasing the number of
> > zpools increases fragmentation, so it makes sense to limit it by the
> > available memory. This is also more consistent with other magic
> > numbers we have (e.g. SWAP_ADDRESS_SPACE_SHIFT).
> >
> > The problem is that unlike zswap trees, the zswap pool is not
> > connected to the swapfile size, so we don't have an indication of how
> > much memory will be in the zswap pool. We could scale the number of
> > zpools with the entire memory on the machine during boot, but this
> > seems like it would be difficult to figure out, and will not take into
> > consideration memory hotplugging and the zswap global limit changing.
> >
> > (e) A creative mix of the above.
> >
> > (f) Something else (probably simpler).
> >
> > I am personally leaning toward (c), but I want to hear the opinions of
> > other people here. Yu, Vlastimil, Johannes, Nhat? Anyone else?
> >
> > In the long term, I think we may want to address the lock contention
> > in zsmalloc itself instead of zswap spawning multiple zpools.

Sorry, I'm sure I'm not following this discussion closely enough; has
the lock contention been demonstrated/proved somehow? lock-stats?

> Agree, I think we should try to improve locking scalability of zsmalloc.
> I have some thoughts to share, no code or test data yet:
>
> 1. First, we can change the pool global lock to per-class lock, which
> is more fine-grained.

Commit c0547d0b6a4b6 ("zsmalloc: consolidate zs_pool's migrate_lock and
size_class's locks") [1] claimed no significant difference between
class->lock and pool->lock.

[1] https://lkml.kernel.org/r/20221128191616.1261026-4-nphamcs@xxxxxxxxx
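
To make option (c) above concrete, here is a rough userspace sketch of
the clamped logarithmic scaling. The nr_zpools() helper, the clamp
bounds, and the ilog2_u() fallback are illustrative assumptions, not
actual zswap code:

	#include <stdio.h>

	/* Integer log2 for x > 0, like the kernel's ilog2(). */
	static unsigned int ilog2_u(unsigned int x)
	{
		unsigned int l = 0;

		while (x >>= 1)
			l++;
		return l;
	}

	/*
	 * Hypothetical option (c): scale the number of zpools
	 * logarithmically with CPUs, clamped so nr_zpools never
	 * exceeds nr_cpus or the current default of 32.
	 */
	static unsigned int nr_zpools(unsigned int nr_cpus)
	{
		unsigned int n = 4 * ilog2_u(nr_cpus);

		if (n < 1)
			n = 1;
		if (n > nr_cpus)
			n = nr_cpus;
		if (n > 32)
			n = 32;
		return n;
	}

	int main(void)
	{
		unsigned int cpus[] = { 1, 2, 4, 8, 16, 64, 256 };

		for (unsigned int i = 0; i < sizeof(cpus) / sizeof(cpus[0]); i++)
			printf("nr_cpus=%3u -> nr_zpools=%2u\n",
			       cpus[i], nr_zpools(cpus[i]));
		return 0;
	}

With the clamp in place, small machines get nr_zpools == nr_cpus (e.g.
2 CPUs -> 2 zpools, where raw 4log2(2) would give 4), while large ones
saturate at the current default of 32, which sidesteps the need for a
fancier magic equation like 4log2(nr_cpus/4).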
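
As a reference point for the per-class lock discussion, a minimal
sketch of the two locking shapes being compared. All structure and
function names here are illustrative, pthread mutexes stand in for the
kernel spinlocks, and this is not the real zsmalloc layout:

	#include <pthread.h>

	#define NR_SIZE_CLASSES 255	/* illustrative class count */

	/* Current shape (after commit c0547d0b6a4b6): one pool-wide lock. */
	struct pool_coarse {
		pthread_mutex_t lock;	/* serializes work on every class */
	};

	/* Proposed shape from point 1: one lock per size class. */
	struct size_class {
		pthread_mutex_t lock;	/* protects only this class's lists */
		/* per-class partial/full page lists would live here */
	};

	struct pool_fine {
		struct size_class classes[NR_SIZE_CLASSES];
	};

	/*
	 * With per-class locks, two concurrent allocations contend only
	 * when their sizes round to the same class; the mapping below is
	 * a stand-in for zsmalloc's real size-to-class rounding.
	 */
	static struct size_class *class_for_size(struct pool_fine *pool,
						 unsigned long size)
	{
		unsigned long idx = (size / 16) % NR_SIZE_CLASSES;

		return &pool->classes[idx];
	}

The fine-grained shape only helps if concurrent allocations actually
spread across size classes, which may be why the measurements cited in
[1] saw no significant difference between class->lock and pool->lock.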