Re: [PATCH RFC 2/2] zram: support compression at the granularity of multi-pages


 



On Thu, Apr 11, 2024 at 4:14 PM Sergey Senozhatsky
<senozhatsky@xxxxxxxxxxxx> wrote:
>
> On (24/04/11 14:03), Barry Song wrote:
> > > [..]
> > >
> > > > +static int zram_bvec_write_multi_pages_partial(struct zram *zram, struct bio_vec *bvec,
> > > > +                                u32 index, int offset, struct bio *bio)
> > > > +{
> > > > +     struct page *page = alloc_pages(GFP_NOIO | __GFP_COMP, ZCOMP_MULTI_PAGES_ORDER);
> > > > +     int ret;
> > > > +     void *src, *dst;
> > > > +
> > > > +     if (!page)
> > > > +             return -ENOMEM;
> > > > +
> > > > +     ret = zram_read_multi_pages(zram, page, index, bio);
> > > > +     if (!ret) {
> > > > +             src = kmap_local_page(bvec->bv_page);
> > > > +             dst = kmap_local_page(page);
> > > > +             memcpy(dst + offset, src + bvec->bv_offset, bvec->bv_len);
> > > > +             kunmap_local(dst);
> > > > +             kunmap_local(src);
> > > > +
> > > > +             atomic64_inc(&zram->stats.zram_bio_write_multi_pages_partial_count);
> > > > +             ret = zram_write_page(zram, page, index);
> > > > +     }
> > > > +     __free_pages(page, ZCOMP_MULTI_PAGES_ORDER);
> > > > +     return ret;
> > > > +}
> > >
> > > What type of testing did you run on it? How often do you see partial
> > > reads and writes? This looks concerning - zsmalloc memory usage
> > > reduction is one metric, but that can also be achieved via
> > > recompression, writeback, or even a different compression algorithm,
> > > whereas higher CPU/power usage and a higher demand for physically
> > > contiguous pages cannot be offset easily. (Another corner case:
> > > assume we have partial read requests on every CPU simultaneously.)
> >
> > This question brings up an interesting observation. In our actual
> > product, we've noticed a success rate of over 90% when allocating
> > large folios in do_swap_page, but occasionally we encounter failures.
> > In such cases, instead of resorting to partial reads, we opt to
> > allocate 16 small folios and request zram to fill them all. This
> > strategy effectively minimizes partial reads to nearly zero. However,
> > integrating this into the upstream codebase seems like a considerable
> > task, and for now it remains part of our out-of-tree code [1], which
> > is also open-source. We're gradually sending patches for the swap-in
> > process, systematically cleaning up the product's code.
>
> I see, thanks for the explanation.
> Does this sound like this series is ahead of its time?

I feel it is necessary to present the whole picture together with the
large folios swap-in series [1]. On the other hand, this could land
earlier, before everything else is ready, with the feature disabled by
default; platforms that have finely tuned partial read/write handling
could then enable it.

[1] https://lore.kernel.org/linux-mm/20240304081348.197341-1-21cnbao@xxxxxxxxx/

>
> > To enhance the success rate of large folio allocation, we've reserved some
> > page blocks for mTHP. This approach is currently absent from the mainline
> > codebase as well (Yu Zhao is trying to provide TAO [2]). Consequently, we
> > anticipate that partial reads may reach 50% or more until this method is
> > incorporated upstream.
>
> These partial reads/writes are difficult to justify - instead of one
> comp_op(PAGE_SIZE), in the worst case we can now do ZCOMP_MULTI_PAGES_NR
> comp_op(ZCOMP_MULTI_PAGES_ORDER) operations (assuming an access pattern
> that touches each page of a multi-page individually). That is a
> potentially huge increase in CPU/power usage, which cannot easily be
> offset. In fact, I'd probably say that power usage is more important
> here than zspool memory usage (which we have means to deal with).

Once Ryan's mTHP swapout without splitting [2] is integrated into the
mainline, this patchset certainly gains an advantage for SWPOUT. For
SWPIN, however, the situation is more nuanced: there is a risk of
failing to allocate an mTHP, which results in a small folio being
allocated instead. In that case, decompressing a whole large folio but
copying only one subpage is inefficient.
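
For context, the read-side partial path we are trying to avoid would
look roughly like the write-side helper quoted above. This is only a
sketch mirroring that quoted code; the names
(zram_bvec_read_multi_pages_partial, zram_read_multi_pages) are assumed
from it rather than final:

/*
 * Rough sketch only, mirroring the quoted write-side helper: the whole
 * multi-page is decompressed into a bounce buffer even though the
 * caller consumes just bvec->bv_len bytes of it.
 */
static int zram_bvec_read_multi_pages_partial(struct zram *zram, struct bio_vec *bvec,
                                u32 index, int offset, struct bio *bio)
{
        struct page *page = alloc_pages(GFP_NOIO | __GFP_COMP, ZCOMP_MULTI_PAGES_ORDER);
        void *src, *dst;
        int ret;

        if (!page)
                return -ENOMEM;

        /* Decompress all 1 << ZCOMP_MULTI_PAGES_ORDER pages... */
        ret = zram_read_multi_pages(zram, page, index, bio);
        if (!ret) {
                src = kmap_local_page(page);
                dst = kmap_local_page(bvec->bv_page);
                /* ...but copy out only the requested sub-range. */
                memcpy(dst + bvec->bv_offset, src + offset, bvec->bv_len);
                kunmap_local(dst);
                kunmap_local(src);
        }
        __free_pages(page, ZCOMP_MULTI_PAGES_ORDER);
        return ret;
}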

In real-world products, we've addressed this challenge in two ways:
1. We've enhanced the reserved page blocks for mTHP to boost the
   allocation success rate.
2. When we fail to allocate a large folio, we fall back to allocating
   nr_pages small folios instead of just one, so we still decompress
   only once per multi-page (a rough sketch follows below).
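
A minimal sketch of that fallback, with made-up names
(swapin_multi_pages_fallback is illustrative; zram_read_multi_pages is
borrowed from the quoted patch), just to show that a single
decompression can feed nr_pages small folios:

/*
 * Out-of-tree behaviour, sketched with illustrative names: when no
 * large folio is available at swap-in, take nr_pages order-0 folios,
 * decompress the multi-page exactly once into a bounce buffer, and
 * then distribute the subpages.
 */
static int swapin_multi_pages_fallback(struct zram *zram, u32 index,
                                       struct folio **folios, int nr_pages,
                                       struct bio *bio)
{
        struct page *bounce = alloc_pages(GFP_NOIO | __GFP_COMP, ZCOMP_MULTI_PAGES_ORDER);
        int i, ret;

        if (!bounce)
                return -ENOMEM;

        /* A single decompression covers all nr_pages subpages. */
        ret = zram_read_multi_pages(zram, bounce, index, bio);
        if (!ret) {
                for (i = 0; i < nr_pages; i++)
                        copy_highpage(folio_page(folios[i], 0), nth_page(bounce, i));
        }
        __free_pages(bounce, ZCOMP_MULTI_PAGES_ORDER);
        return ret;
}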

With these measures in place, we consistently achieve wins in both
power consumption and memory savings. However, it's important to note
that these optimizations are specific to our product, and there's still
much work needed to upstream them all.

[2] https://lore.kernel.org/linux-mm/20240408183946.2991168-1-ryan.roberts@xxxxxxx/

>
> Have you evaluated power usage?
>
> I also wonder if it brings down the number of ZRAM_SAME pages. Suppose
> several pages of a multi-page are filled with zeroes (or some other
> recognizable pattern); previously each of those would have been stored
> using just an unsigned long. It even makes me wonder whether the
> ZRAM_SAME test makes sense on a multi-page at all, for that matter.

I don't think we need to worry about ZRAM_SAME. ARM64 supports 4KB,
16KB, and 64KB base pages. Even if we configure the base page to 16KB
or 64KB, there is still the possibility of missing SAME PAGES that are
identical at the 4KB level but not at the 16KB/64KB granularity.
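
To make the granularity point concrete, here is an illustrative
same-filled test over a whole multi-page buffer, modelled on the idea
behind zram's existing per-page check (the helper name is made up). It
only succeeds when every subpage repeats the same word, so a buffer
where only some 4KB subpages are zero-filled is no longer treated as
ZRAM_SAME:

/*
 * Illustrative only: scan a multi-page buffer and report whether it is
 * entirely filled with one repeating unsigned long.  A 4KB-granularity
 * scan would catch subpages that are same-filled on their own; this
 * multi-page-granularity scan does not.
 */
static bool multi_pages_same_filled(void *mem, size_t bytes, unsigned long *element)
{
        unsigned long *word = mem;
        unsigned long val = word[0];
        size_t i;

        for (i = 1; i < bytes / sizeof(*word); i++) {
                if (word[i] != val)
                        return false;
        }
        *element = val;
        return true;
}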

In our product, we continue to observe many SAME PAGES with the
multi-page mechanism. Even if we miss some opportunities to identify
same pages at the 4KB level, the compressed data remains relatively
small, though not as compact as a SAME_PAGE entry. Overall, on typical
12GiB/16GiB phones, we still achieve a memory saving of around 800MiB
with this patchset.

mTHP offers a means to emulate a 16KiB/64KiB base page while
maintaining software compatibility with a 4KiB base page. The primary
concern here lies in partial read/write operations. In our product,
we've successfully addressed these issues. However, convincing people
in the mainline community may take considerable time and effort :-)

Thanks
Barry




