On Fri, 1 Mar 2024 at 09:07, Vlastimil Babka <vbabka@xxxxxxx> wrote:
>
> This is just an example of using the kmem_cache_charge() API. I think
> it's placed in a place that's applicable for Linus's example [1]
> although he mentions do_dentry_open() - I have followed from strace()
> showing openat(2) to path_openat() doing the alloc_empty_file().

Thanks. This is not the right patch, but yes, patches 1-3 look very
nice to me.

> The idea is that filp_cachep stops being SLAB_ACCOUNT. Allocations
> that want to be accounted immediately can use GFP_KERNEL_ACCOUNT. I
> did that in alloc_empty_file_noaccount() (despite the contradictory
> name - the noaccount refers to something else, right?) as IIUC it's
> about kernel-internal opens.

Yeah, the "noaccount" function is about not accounting it towards
nr_files.

That said, I don't think it necessarily needs to do the memory
accounting either - it's literally for cases where we're never going
to install the file descriptor in any user space.

Your change to use GFP_KERNEL_ACCOUNT isn't exactly wrong, but I
don't think it's really the right thing either, because

> Why is this unfinished:
>
> - there are other callers of alloc_empty_file() which I didn't
>   adjust, so they simply became memcg-unaccounted. I haven't
>   investigated for which ones it would also make sense to separate
>   the allocation and accounting. Maybe alloc_empty_file() would need
>   to get a parameter to control this.

Right. I think the natural and logical way to deal with this is to
just say "we account when we add the file to the fdtable".

IOW, just have fd_install() do it. That's the really natural point,
and also makes it very logical why alloc_empty_file_noaccount()
wouldn't need to do the GFP_KERNEL_ACCOUNT.

> - I don't know how to properly unwind the accounting failure case.
>   It seems like a new case, because when we succeed the open there's
>   no further error path, at least in path_openat().

Yeah, let me think about this part. Because fd_install() is the right
point, but that too does not really allow for error handling.

Yes, we could close things and fail it, but it really is much too
late at this point.

What I *think* I'd want for this case is

 (a) allow the accounting to go over by a bit

 (b) make sure there's a cheap way to ask (before) about "did we go
     over the limit"

IOW, the accounting never needed to be byte-accurate to begin with,
and making it fail (cheaply and early) on the next file allocation is
fine.

Just make it really cheap. Can we do that?

For example, maybe don't bother with the whole "bytes and pages"
stuff. Just a simple "are we more than one page over?" kind of
question. Without the 'stock_lock' mess for sub-page bytes etc.

How would that look? Would it result in something that can be done
cheaply without locking and atomics and without excessive pointer
indirection through many levels of memcg data structures?

             Linus
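
A rough sketch of the fd_install()-time charging idea discussed above,
just to make it concrete. This is not the actual patch from the series:
kmem_cache_charge() is assumed here to take the cache, gfp flags and
the object (the exact signature in the RFC may differ), and
memcg_nearly_over_limit() is a purely hypothetical helper standing in
for the cheap "did we already go over the limit?" check Linus asks
about. Error handling and the real fd_install() body are elided.

	/*
	 * Sketch only: filp_cachep stops being SLAB_ACCOUNT, so
	 * alloc_empty_file() allocates unaccounted memory, and the memcg
	 * charge happens only when the file becomes visible to userspace.
	 */

	/*
	 * Hypothetical cheap pre-check: fail the *next* open early if the
	 * memcg already went over its limit, instead of trying to unwind
	 * an open that has already succeeded.
	 */
	static struct file *alloc_empty_file_checked(int flags,
						     const struct cred *cred)
	{
		if (memcg_nearly_over_limit(current))	/* hypothetical helper */
			return ERR_PTR(-ENOMEM);
		return alloc_empty_file(flags, cred);	/* plain GFP_KERNEL inside */
	}

	void fd_install(unsigned int fd, struct file *file)
	{
		/*
		 * Charge the struct file to the current memcg only now
		 * that it is about to be installed in the fdtable.
		 * fd_install() cannot fail, so a charge that pushes the
		 * memcg over its limit is tolerated and caught by the
		 * pre-check on the next allocation.
		 */
		kmem_cache_charge(filp_cachep, GFP_KERNEL, file);	/* assumed signature */

		/* ... existing fd_install() body unchanged ... */
	}

This also illustrates why alloc_empty_file_noaccount() needs no special
treatment in such a scheme: a file that is never installed in an
fdtable simply never gets charged.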