On Tue, Mar 29, 2022 at 8:21 AM Michal Koutný <mkoutny@xxxxxxxx> wrote:
>
> Hi.
>
> On Mon, Mar 28, 2022 at 03:59:44AM +0000, "T.J. Mercier" <tjmercier@xxxxxxxxxx> wrote:
> > From: Hridya Valsaraju <hridya@xxxxxxxxxx>
> >
> > The dma_buf_charge_transfer function provides a way for processes to
>
> (s/dma_buf_charge_transfer/dma_buf_transfer_charge/)
>
Doh! Thanks.

> > transfer charge of a buffer to a different process. This is essential
> > for the cases where a central allocator process does allocations for
> > various subsystems, hands over the fd to the client who requested the
> > memory and drops all references to the allocated memory.
>
> I understood from [1] some buffers are backed by regular RAM. How are
> these charges going to be transferred (if so)?
>
This link doesn't work for me, but I think you're referring to the
discussion about your "RAM_backed_buffers" comment from March 23rd. I
wanted to do a simple test to confirm my own understanding here, but
that got delayed due to some problems on my end. Anyway, the test I did
goes like this: enable memcg and gpu cgroup tracking, and run a process
that allocates 100 MiB of dmabufs. Observe the memcg and gpu accounting
values before and after the allocation.

Before
# cat memory.current gpu.memory.current
14909440
system 0

<Test program does the allocation of 100 MiB of dmabufs>

After
# cat memory.current gpu.memory.current
48025600
system 104857600

So the memcg value increases by about 30 MiB while the gpu value
increases by 100 MiB. This is with kmem enabled, and the /proc/maps
file for this process indicates that the majority of that 30 MiB is
kernel memory. I think this result shows that neither kernel nor
process memory overlaps with the gpu cgroup tracking of these
allocations. So despite the fact that these buffers are in main
memory, they are allocated in a way that does not result in memcg
attribution. (It looks to me like __GFP_ACCOUNT is not set for these.)

> Thanks,
> Michal
>
> [1]
> https://lore.kernel.org/r/CABdmKX2NSAKMC6rReMYfo2SSVNxEXcS466hk3qF6YFt-j-+_NQ@xxxxxxxxxxxxxx
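
For reference, a rough sketch of what the allocation step above can look
like, using the DMA-BUF system heap UAPI (/dev/dma_heap/system and
DMA_HEAP_IOCTL_ALLOC). This is not the exact test program used above;
the heap path, buffer count, and per-buffer size are assumptions made
for illustration only.

/*
 * Sketch: allocate ~100 MiB of dma-bufs from the system heap, then
 * pause so memory.current and gpu.memory.current can be read from this
 * process's cgroup while the buffers are still alive.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/dma-heap.h>

#define NUM_BUFS 25
#define BUF_SIZE (4 * 1024 * 1024)	/* 25 x 4 MiB = 100 MiB total */

int main(void)
{
	int buf_fds[NUM_BUFS];
	int heap_fd, i;

	heap_fd = open("/dev/dma_heap/system", O_RDONLY | O_CLOEXEC);
	if (heap_fd < 0) {
		perror("open /dev/dma_heap/system");
		return 1;
	}

	for (i = 0; i < NUM_BUFS; i++) {
		struct dma_heap_allocation_data data;

		memset(&data, 0, sizeof(data));
		data.len = BUF_SIZE;
		data.fd_flags = O_RDWR | O_CLOEXEC;

		if (ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data) < 0) {
			perror("DMA_HEAP_IOCTL_ALLOC");
			return 1;
		}
		/* Keep the returned dma-buf fd open so the charge persists. */
		buf_fds[i] = data.fd;
	}

	printf("Allocated %d MiB of dma-bufs, sleeping...\n",
	       NUM_BUFS * BUF_SIZE >> 20);
	pause();
	return 0;
}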