Re: [PATCH RFC 4/4] UNFINISHED mm, fs: use kmem_cache_charge() in path_openat()

On Tue, Mar 12, 2024 at 10:22:54AM +0100, Vlastimil Babka wrote:
> On 3/1/24 19:53, Roman Gushchin wrote:
> > On Fri, Mar 01, 2024 at 09:51:18AM -0800, Linus Torvalds wrote:
> >> What I *think* I'd want for this case is
> >> 
> >>  (a) allow the accounting to go over by a bit
> >> 
> >>  (b) make sure there's a cheap way to ask (before) about "did we go
> >> over the limit"
> >> 
> >> IOW, the accounting never needed to be byte-accurate to begin with,
> >> and making it fail (cheaply and early) on the next file allocation is
> >> fine.
> >> 
> >> Just make it really cheap. Can we do that?
> >> 
> >> For example, maybe don't bother with the whole "bytes and pages"
> >> stuff. Just a simple "are we more than one page over?" kind of
> >> question. Without the 'stock_lock' mess for sub-page bytes etc
> >> 
> >> How would that look? Would it result in something that can be done
> >> cheaply without locking and atomics and without excessive pointer
> >> indirection through many levels of memcg data structures?
> > 
> > I think it's possible and I'm currently looking into batching charge,
> > objcg refcnt management and vmstats using per-task caching. It should
> > speed up things for the majority of allocations.
> > For allocations from an irq context and targeted allocations
> > (where the target memcg != memcg of the current task) we'd probably need to
> > keep the old scheme. I hope to post some patches relatively soon.
> 
> Do you think this will work on top of this series, i.e. patches 1+2 could be
> eventually put to slab/for-next after the merge window, or would it
> interfere with your changes?

Please go ahead and merge them; I'll rebase on top of them, which will be
even better for my work. I made a couple of comments there, but overall they
look very good to me. Thank you for doing this work!

> 
> > I tried to optimize the current implementation but failed to get any
> > significant gains. It seems that the overhead is very evenly spread across
> > objcg pointer access, charge management, objcg refcnt management and vmstats.

I started working on this, but it's a bit more complicated than I initially
thought, because:
1) there are allocations made from a !in_task() context, so we need to handle
   that case correctly

2) tasks can be moved between cgroups concurrently with memory allocations.
   Fortunately, my recent changes provide a path forward here, but it adds to
   the complexity. In an alternative world where tasks couldn't move between
   cgroups, life would be so much better (and faster too: we could remove a
   ton of synchronization).

3) we do have per-numa-node per-memcg stats, which are less trivial to cache
   in struct task

I hope to resolve these issues and post patches, but I'll probably need a bit
more time.
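
To make the direction a bit more concrete, below is a very rough userspace
model of the scheme discussed above (allow the accounting to go over by a
bit, but make the "did we go over the limit?" question cheap, combined with
per-task batching). To be clear, none of the names below (task_charge_cache,
memcg_usage, CHARGE_BATCH, ...) are real kernel interfaces, and the sketch
deliberately ignores the !in_task() and task-migration complications from
the list above:

/*
 * Very rough userspace model of the batched-charge scheme discussed in
 * this thread; this is NOT kernel code and none of the names below exist
 * in the kernel.  The scheme:
 *  - each task keeps a small pre-charged reserve ("cache"),
 *  - the shared counter is only touched when the reserve runs dry,
 *  - the limit check is a single relaxed load, so usage may overshoot
 *    the limit by at most one batch per task.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define CHARGE_BATCH	(64 * 1024)		/* refill granularity */
#define MEMCG_LIMIT	(1024 * 1024)		/* a 1 MiB "memcg" limit */

static _Atomic long memcg_usage;		/* shared counter */

/* per-task pre-charged bytes; models a field in struct task_struct */
static _Thread_local long task_charge_cache;

/* The cheap question: "are we (roughly) over the limit?" */
static bool memcg_over_limit(void)
{
	return atomic_load_explicit(&memcg_usage,
				    memory_order_relaxed) > MEMCG_LIMIT;
}

/* Charge @size bytes, batching updates to the shared counter. */
static bool task_charge(long size)
{
	if (task_charge_cache >= size) {
		task_charge_cache -= size;	/* fast path: no atomics */
		return true;
	}

	/* Slow path: fail early and cheaply if we already overshot. */
	if (memcg_over_limit())
		return false;

	/* One atomic add covers @size plus a fresh per-task reserve. */
	atomic_fetch_add_explicit(&memcg_usage, CHARGE_BATCH + size,
				  memory_order_relaxed);
	task_charge_cache = CHARGE_BATCH;
	return true;
}

int main(void)
{
	long allocated = 0;

	/* Keep "allocating" 100-byte objects until charging fails. */
	while (task_charge(100))
		allocated += 100;

	printf("charged %ld bytes before hitting the limit (usage: %ld)\n",
	       allocated, atomic_load(&memcg_usage));
	return 0;
}

The point is that the shared counter is touched only once per CHARGE_BATCH
worth of allocations and the limit check is a single relaxed load, at the
cost of potentially overshooting the limit by up to one batch per task.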

Thanks!



