> From: Dan Magenheimer
> Subject: zsmalloc limitations and related topics
>
> WORKLOAD ANALYSIS:
>
> 1) The average page compressed by almost a factor of six
>    (mean zsize == 694, stddev == 474).
> 2) Almost eleven percent of the pages were zero pages.  A
>    zero page compresses to 28 bytes.
> 3) On average, 77% of the bytes (3156) in the pages-to-be-
>    compressed contained a byte-value of zero.
> 4) Despite the above, mean density of zsmalloc was measured at
>    3.2 zpages/pageframe, presumably losing nearly half of
>    available space to fragmentation.

I have no clue if these measurements are representative of a wide
range of workloads over the lifetime of a booted machine, but I am
suspicious that they are not.  For example, the lzo1x compression
algorithm claims to compress data by about a factor of two.

I realized that with a small hack in zswap, I could simulate the
effect on zsmalloc of a workload with a very different zsize
distribution, one with a much higher mean, simply by doubling (and
tripling) the zsize passed to zs_malloc.  The results:

Unchanged:  mean=694   stddev=474   ->  mean density = 3.2
Doubled:    mean=1340  stddev=842   ->  mean density = 1.9
Tripled:    mean=1636  stddev=1031  ->  mean density = 1.6

Note that even tripled, the mean of the simulated distribution is
still much lower than PAGE_SIZE/2, which is roughly the published
expected compression for lzo1x.  So one would still expect a mean
density greater than two, but, apparently, one-third of available
space is lost to fragmentation.

Without a "representative" workload, I still have no clue whether
this simulated distribution is relevant, but it is interesting to
note that, for a workload with a lower mean compressibility,
zsmalloc's reputation as "high density" may be undeserved.
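
For the curious, the "small hack" is conceptually just a one-liner:
inflate the compressed size before it reaches the allocator, so
zsmalloc sees a zsize distribution with a proportionally higher mean.
The sketch below is a userspace stand-in for that idea, not the actual
zswap patch; zs_malloc() here is a stub, and ZSIZE_SCALE and the sample
sizes are invented for illustration.

/*
 * Illustrative stand-in for the one-line zswap hack: every allocation
 * request is inflated by a constant factor before it reaches the
 * allocator.  In the real experiment the multiplication would be done
 * on the size argument passed to the kernel's zs_malloc() in the
 * zswap store path.
 */
#include <stdio.h>
#include <stddef.h>

#define ZSIZE_SCALE 2		/* 2 = "Doubled" run, 3 = "Tripled" run */

/* Stub standing in for the allocator entry point. */
static unsigned long zs_malloc(size_t size)
{
	printf("zs_malloc() asked for %zu bytes\n", size);
	return 1;		/* fake handle */
}

/* The hack: scale zsize before the allocator ever sees it. */
static unsigned long zs_malloc_scaled(size_t zsize)
{
	return zs_malloc(zsize * ZSIZE_SCALE);
}

int main(void)
{
	size_t sample_zsizes[] = { 694, 28, 1340 };	/* arbitrary examples */

	for (size_t i = 0; i < sizeof(sample_zsizes) / sizeof(sample_zsizes[0]); i++)
		zs_malloc_scaled(sample_zsizes[i]);
	return 0;
}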
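
To put rough numbers on the fragmentation claims, here is a quick
userspace arithmetic check (not kernel code) that compares the measured
densities against the ideal, zero-fragmentation density of PAGE_SIZE
divided by the mean zsize; PAGE_SIZE == 4096 is an assumption here.

/*
 * Compare measured zsmalloc density (zpages/pageframe) against the
 * ideal density PAGE_SIZE / mean_zsize, for the three runs quoted
 * above, and report the fraction of space lost to fragmentation.
 */
#include <stdio.h>

#define PAGE_SIZE 4096.0	/* assumed; x86 */

int main(void)
{
	struct {
		const char *name;
		double mean_zsize;
		double measured_density;
	} runs[] = {
		{ "Unchanged", 694.0,  3.2 },
		{ "Doubled",   1340.0, 1.9 },
		{ "Tripled",   1636.0, 1.6 },
	};

	for (int i = 0; i < 3; i++) {
		double ideal = PAGE_SIZE / runs[i].mean_zsize;
		double lost = 1.0 - runs[i].measured_density / ideal;

		printf("%-9s ideal=%.1f zpages/pageframe, measured=%.1f, "
		       "~%.0f%% of space lost to fragmentation\n",
		       runs[i].name, ideal, runs[i].measured_density,
		       lost * 100.0);
	}
	return 0;
}

That works out to roughly 46% of space lost for the unchanged
distribution (the "nearly half" above) and roughly 36% for the tripled
run (the "one-third" figure).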