Re: [PATCH v7 6/8] mm: zswap: Support mTHP swapout in zswap_store().

On Wed, Sep 25, 2024 at 12:39:02PM -0700, Yosry Ahmed wrote:
> On Wed, Sep 25, 2024 at 12:20 PM Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
> >
> > On Wed, Sep 25, 2024 at 11:30:34AM -0700, Yosry Ahmed wrote:
> > > Johannes wrote:
> > > > If this ever becomes an issue, we can handle it in a fastpath-slowpath
> > > > scheme: check the limit up front for fast-path failure if we're
> > > > already maxed out, just like now; then make obj_cgroup_charge_zswap()
> > > > atomically charge against zswap.max and unwind the store if we raced.
> > > >
> > > > For now, I would just keep the simple version we currently have: check
> > > > once in zswap_store() and then just go ahead for the whole folio.
> > >
> > > I am not totally against this but I feel like this is too optimistic.
> > > I think we can keep it simple-ish by maintaining an ewma for the
> > > compression ratio, we already have primitives for this (see
> > > DECLARE_EWMA).
> > >
> > > Then in zswap_store(), we can use the ewma to estimate the compressed
> > > size and use it to do the memcg and global limit checks once, like we
> > > do today. Instead of just checking if we are below the limits, we
> > > check if we have enough headroom for the estimated compressed size.
> > > Then we call zswap_store_page() to do the per-page stuff, then do
> > > batched charging and stats updates.
> >
> > I'm not sure what you gain from making a non-atomic check precise. You
> > can get a hundred threads each determining precisely that *their*
> > store will fit exactly into the last 800kB before the limit.
> 
> We just get to avoid overshooting in cases where we know we probably
> can't fit it anyway. If we have 4KB left and we are trying to compress
> a 2MB THP, for example. It just makes the upfront check to avoid
> pointless compression a little bit more meaningful.

I think I'm missing something. It's not just an upfront check, it's
the only check. The charge down the line doesn't limit anything, it
just counts. So if this check passes, we WILL store the folio. There
is no pointless compression.
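
To make sure we're talking about the same thing, here is a rough
sketch of the flow I'm describing (helper names other than
obj_cgroup_charge_zswap() are made up; this is not the actual
zswap_store() code):

static bool zswap_store_sketch(struct folio *folio, struct obj_cgroup *objcg)
{
	size_t compressed_bytes = 0;
	long i;

	/* The one and only limit check: fail fast if we're maxed out. */
	if (zswap_limit_reached(objcg))
		return false;

	/* Past this point, every page of the folio gets stored. */
	for (i = 0; i < folio_nr_pages(folio); i++)
		if (zswap_compress_store_page(folio, i, &compressed_bytes))
			return false;	/* alloc/compress error, unwinding elided */

	/* Accounting only: this counts the usage, it never rejects. */
	obj_cgroup_charge_zswap(objcg, compressed_bytes);
	return true;
}

Nothing after that first check can push the store back, so a more
precise estimate up front doesn't buy us a stronger guarantee.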

We might overshoot the limit by about one folio in a single-threaded
scenario. But that is negligible in comparison to the overshoot we can
get due to race conditions.

Again, I see no practical, meaningful difference in outcome by
making that limit check any more precise. Just keep it as-is.
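
For completeness, if concurrent overshoot ever does become a problem,
the fastpath/slowpath escalation from upthread could look roughly like
this (every helper here is hypothetical, in particular the atomic
obj_cgroup_try_charge_zswap()):

/* Sketch only: none of these helpers exist today. */
static bool zswap_store_atomic_sketch(struct folio *folio, struct obj_cgroup *objcg)
{
	size_t bytes;

	/* Fast path: cheap, racy check to avoid pointless compression. */
	if (zswap_limit_reached(objcg))
		return false;

	/* Compress and insert the folio's pages into the pool. */
	bytes = zswap_compress_and_store_folio(folio);
	if (!bytes)
		return false;

	/*
	 * Slow path: charge atomically against zswap.max and unwind
	 * the store if we raced past the limit with other threads.
	 */
	if (!obj_cgroup_try_charge_zswap(objcg, bytes)) {
		zswap_remove_stored_folio(folio);
		return false;
	}

	return true;
}

But until somebody shows that the overshoot matters in practice, that
complexity isn't worth it.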



