Re: [PATCH v9 6/7] mm: zswap: Support large folios in zswap_store().

On Mon, Sep 30, 2024 at 4:20 PM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
>
> On Mon, Sep 30, 2024 at 4:11 PM Nhat Pham <nphamcs@xxxxxxxxx> wrote:
>
> I suggested this in a previous version, and Kanchana faced some
> complexities implementing it:
> https://lore.kernel.org/lkml/SJ0PR11MB56785027ED6FCF673A84CEE6C96A2@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/

Sorry, I missed that conversation.

>
> Basically, if we batch-get the refs after the store, I think it's not
> safe, because once an entry is published for writeback it can be
> written back and freed, and we would then drop a ref that we never
> acquired.

Hmmm. I don't think writeback could touch any individual subpage just yet, no?

Before doing any work, zswap writeback would attempt to add the
subpage to the swap cache (via __read_swap_cache_async()). However,
all the subpages will have already been added to the swap cache, and
they point to the (large) folio. So zswap_writeback_entry() should
short-circuit here (the if (!folio_was_allocated) case).
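
IIRC, the relevant part of zswap_writeback_entry() looks like this
(paraphrasing from my reading of mm/zswap.c, so variable names might
be slightly off):

        /* try to allocate swap cache folio */
        mpol = get_task_policy(current);
        folio = __read_swap_cache_async(swpentry, GFP_KERNEL, mpol,
                                        NO_INTERLEAVE_INDEX,
                                        &folio_was_allocated, true);
        if (!folio)
                return -ENOMEM;

        /*
         * The subpage is already in the swap cache, pointing to the
         * large folio we just stored, so no new folio is allocated
         * and we return before touching the zswap entry at all.
         */
        if (!folio_was_allocated) {
                folio_put(folio);
                return -EEXIST;
        }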

>
> Getting refs before the store would work, but then if the store fails
> at an arbitrary page, we need to only drop refs on the pool for pages
> that were not added to the tree, as the cleanup loop with
> zswap_entry_free() at the end of zswap_store() will drop the ref for
> those that were added to the tree.
>
> We agreed to (potentially) do the batching for refcounts as a followup.

But yeah, no biggie. Not a dealbreaker for me, tbh. I thought it was a
quick change (hence the fixlet suggestion), but if not, then let's do
it as a follow-up.
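
In case it helps for that follow-up: the fixlet I had in mind was just
batching the pool refs with the percpu_ref helpers. A rough, untested
sketch of the zswap_store() loop (zswap_store_page()'s signature and
the label names are approximations of the v9 patch, and this assumes
the pool refcount is still the percpu_ref we have upstream):

        long index;

        /* Grab one pool ref per subpage upfront, instead of per page. */
        if (!percpu_ref_tryget_many(&pool->ref, nr_pages))
                goto put_objcg;

        for (index = 0; index < nr_pages; ++index) {
                if (!zswap_store_page(folio_page(folio, index), objcg, pool))
                        goto put_pool_refs;
        }

        /* ... rest of the success path, which returns before the label ... */

put_pool_refs:
        /*
         * Entries that made it into the tree drop their pool ref in
         * zswap_entry_free() via the existing cleanup loop, so only
         * return the refs taken for the pages that were never stored.
         */
        percpu_ref_put_many(&pool->ref, nr_pages - index);
        /* fall through to the cleanup loop */

If the store fails at page `index', pages 0..index-1 own their refs
via their tree entries, so nr_pages - index refs remain to be dropped
here.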
