On (24/11/22 11:25), Barry Song wrote:
> When large folios are compressed at a larger granularity, we observe
> a notable reduction in CPU usage and a significant improvement in
> compression ratios.
>
> This patchset enhances zsmalloc and zram by adding support for dividing
> large folios into multi-page blocks, typically configured with a
> 2-order granularity. Without this patchset, a large folio is always
> divided into `nr_pages` 4KiB blocks.
>
> The granularity can be set using the `ZSMALLOC_MULTI_PAGES_ORDER`
> setting, where the default of 2 allows all anonymous THP to benefit.

I can't say that I'm in love with this part.  Looking at zsmalloc
stats, your new size classes are significantly further apart from each
other than our traditional size classes.  For example, with
ZSMALLOC_CHAIN_SIZE of 10, some size classes are more than 400 bytes
apart (that's almost 10% of PAGE_SIZE):

// stripped
 344   9792
 348  10048
 351  10240
 353  10368
 355  10496
 361  10880
 368  11328
 370  11456
 373  11648
 377  11904
 383  12288
 387  12544
 390  12736
 395  13056
 400  13376
 404  13632
 410  14016
 415  14336

Which means that every object of size, let's say, 10881 will go into
the 11328 size class and have 447 bytes of padding between each object
(see the sketch at the end of this mail).

And with ZSMALLOC_CHAIN_SIZE of 8, it seems, we have even larger
padding gaps:

// stripped
 348  10048
 351  10240
 353  10368
 361  10880
 370  11456
 373  11648
 377  11904
 383  12288
 390  12736
 395  13056
 404  13632
 410  14016
 415  14336
 418  14528
 447  16384

E.g. 13632 and 13056 are more than 500 bytes apart.

> swap-out time(ms)    68711    49908
> swap-in time(ms)     30687    20685
> compression ratio    20.49%   16.9%

These are not the only numbers to focus on; really important metrics
are zsmalloc pages-used and zsmalloc max-pages-used.  From those we can
calculate the pool memory usage ratio: the size of the compressed data
vs. the number of pages the zsmalloc pool allocated to keep it (a rough
sketch of that calculation also follows at the end of this mail).

More importantly, dealing with internal fragmentation in a size class
of, let's say, 14528 will be a little painful, as we'll need to move
around 14K objects.

As for the speed part, well, it's a little unusual to see that you are
focusing on zstd.  zstd is slower than anything from the lzX family;
that's sort of a fact: zstd sports a better compression ratio, but is
slower.  Do you use zstd on your smartphones?  If speed is your main
metric, another option might be to just use a faster algorithm and then
utilize post-processing (re-compression with zstd, or writeback) for
memory savings.

Do you happen to have some data (pool memory usage ratio, etc.) for
lzo or lzo-rle, or lz4?
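
Here is a quick userspace sketch of the padding math above -- not
kernel code, just the class sizes copied from the ZSMALLOC_CHAIN_SIZE
of 10 stats excerpt, rounded up the way zsmalloc would place objects:

#include <stdio.h>

/* class sizes taken from the ZSMALLOC_CHAIN_SIZE=10 dump above */
static const unsigned int class_sizes[] = {
         9792, 10048, 10240, 10368, 10496, 10880, 11328, 11456,
        11648, 11904, 12288, 12544, 12736, 13056, 13376, 13632,
        14016, 14336,
};

/* per-object padding once obj_size is rounded up to its size class */
static unsigned int padding_for(unsigned int obj_size)
{
        size_t i;

        for (i = 0; i < sizeof(class_sizes) / sizeof(class_sizes[0]); i++) {
                if (class_sizes[i] >= obj_size)
                        return class_sizes[i] - obj_size;
        }
        return 0; /* larger than any class in this excerpt */
}

int main(void)
{
        /* 10881 lands in the 11328 class: 447 bytes of padding */
        printf("padding for 10881: %u\n", padding_for(10881));
        return 0;
}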
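
And a minimal sketch of what I mean by pool memory usage ratio,
assuming a zram device at /sys/block/zram0: the first three columns of
mm_stat are orig_data_size, compr_data_size and mem_used_total, all in
bytes, so the ratio is compressed data vs. memory the pool actually
allocated (closer to 100% means less is lost to padding and metadata):

#include <stdio.h>

int main(void)
{
        unsigned long long orig, compr, used;
        FILE *f = fopen("/sys/block/zram0/mm_stat", "r");

        if (!f || fscanf(f, "%llu %llu %llu", &orig, &compr, &used) != 3) {
                perror("mm_stat");
                return 1;
        }
        fclose(f);

        printf("compression ratio:  %.2f%%\n", 100.0 * compr / orig);
        printf("pool usage ratio:   %.2f%%\n", 100.0 * compr / used);
        return 0;
}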