Hi,

On (22/11/10 14:44), Minchan Kim wrote:
> On Mon, Oct 31, 2022 at 02:40:59PM +0900, Sergey Senozhatsky wrote:
> > Hello,
> >
> > Some use-cases and/or data patterns may benefit from
> > larger zspages. Currently the limit on the number of physical
> > pages that are linked into a zspage is hardcoded to 4. A higher
> > limit changes key characteristics of a number of the size
> > classes, improving compactness of the pool and reducing the
> > amount of memory the zsmalloc pool uses. More on this in the
> > 0002 commit message.
>
> Hi Sergey,
>
> I think the idea of breaking the fixed number of subpages per
> zspage is a really good starting point for further optimization.
> However, I worry about introducing a per-pool config at this
> stage. How about introducing just one golden value for the
> zspage size? order-3 or 4 in Kconfig, keeping the default of 2?

Sorry, not sure I'm following. So you want a .config value for
the zspage limit?

I really like the sysfs knob, because then one may set values on
a per-device basis (if they have multiple zram devices in a system
with different data patterns):

	zram0, which is used as a swap device, uses, say, 4
	zram1, which is a vfat block device, uses, say, 6
	zram2, which is an ext4 block device, uses, say, 8

The whole point of the series is that one single value does not
fit all purposes. There is no silver bullet.

> And then we can make more efforts to auto-tune it based on the
> wasted memory and the number of size classes on the fly. A good
> thing is that we have an indirect table (handle <-> zpage), so
> we could move the object at any time, so I think we could do
> better in the end.

It still needs to be per zram device (per zspool). The sysfs knob
doesn't stop us from having auto-tuned values in the future.
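
For illustration only, a rough sketch of what the per-device setup
mentioned above could look like; the sysfs attribute name below is
hypothetical, not necessarily the one this series adds:

	# hypothetical attribute name, shown only to illustrate per-device tuning
	echo 4 > /sys/block/zram0/zspage_chain_size	# swap device
	echo 6 > /sys/block/zram1/zspage_chain_size	# vfat block device
	echo 8 > /sys/block/zram2/zspage_chain_size	# ext4 block device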