On Mon, Sep 30, 2024 at 4:34 PM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
>
> On Mon, Sep 30, 2024 at 4:29 PM Nhat Pham <nphamcs@xxxxxxxxx> wrote:
> >
> > On Mon, Sep 30, 2024 at 4:20 PM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
> > >
> > > On Mon, Sep 30, 2024 at 4:11 PM Nhat Pham <nphamcs@xxxxxxxxx> wrote:
> > >
> > > I suggested this in a previous version, and Kanchana faced some
> > > complexities implementing it:
> > > https://lore.kernel.org/lkml/SJ0PR11MB56785027ED6FCF673A84CEE6C96A2@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/
> >
> > Sorry, I missed that conversation.
> >
> > > Basically, if we batch get the refs after the store I think it's not
> > > safe, because once an entry is published to writeback it can be
> > > written back and freed, and a ref that we never acquired would be
> > > dropped.
> >
> > Hmmm. I don't think writeback could touch any individual subpage just
> > yet, no?
> >
> > Before doing any work, zswap writeback would attempt to add the
> > subpage to the swap cache (via __read_swap_cache_async()). However,
> > all subpages will have already been added to the swap cache, and they
> > point to the (large) folio. So zswap_writeback_entry() should
> > short-circuit here (the if (!page_allocated) case).
>
> If it's safe to take the refs after all calls to zswap_store_page()
> are successful, then yeah that should be possible, for both the pool
> and objcg. I didn't look closely though.
>
> Just to clarify, you mean grab one ref first, then do the
> compressions, then grab the remaining refs, right?

Ah yeah, that's what I meant. We can perform one of the following
sequences: grab 1 -> grab nr -> drop 1, or grab 1 -> grab nr - 1 if
successful, drop 1 if failed (rough sketch below).

Seems straightforward to me, but yeah, it's a bit hair-splitting of me
to die on this hill :) I just thought it was weird seeing the other
parts batchified while this one part wasn't.

The rest LGTM - I'll defer to you and Johannes for further review.

Reviewed-by: Nhat Pham <nphamcs@xxxxxxxxx>
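
For concreteness, the second sequence could look roughly like the sketch
below. This is illustration only: zswap_store_page() stands in for the
per-page store helper from this series with an assumed signature, the
sketch covers the objcg ref only (the pool ref would follow the same
pattern), and unwinding of entries that were already stored before a
failure is elided. obj_cgroup_get(), obj_cgroup_get_many() and
obj_cgroup_put() are the existing memcontrol helpers.

#include <linux/memcontrol.h>
#include <linux/mm.h>

static bool zswap_store_folio_sketch(struct folio *folio,
				     struct obj_cgroup *objcg)
{
	long nr_pages = folio_nr_pages(folio);
	long i;

	/*
	 * Grab a single objcg ref up front; it keeps the objcg alive
	 * across all the per-page compressions/stores.
	 */
	obj_cgroup_get(objcg);

	for (i = 0; i < nr_pages; i++) {
		/* Assumed per-page store helper from this series. */
		if (!zswap_store_page(folio, i, objcg))
			goto put_ref;
	}

	/*
	 * Every per-page store succeeded: batch-grab the remaining
	 * nr - 1 refs so each stored entry owns exactly one, i.e.
	 * "grab 1 -> grab nr - 1 if successful".
	 */
	obj_cgroup_get_many(objcg, nr_pages - 1);
	return true;

put_ref:
	/* "... drop 1 if failed". Unwinding of stored entries elided. */
	obj_cgroup_put(objcg);
	return false;
}

Note this relies on the swap cache argument above: none of the
just-stored entries can be written back (and freed) before the batch
grab, because __read_swap_cache_async() will find all the subpages
already in the swap cache and zswap_writeback_entry() short-circuits.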