Re: [PATCH v4] dma-buf: Add DmaBufTotal counter in meminfo

On 20.04.21 at 09:46, Michal Hocko wrote:
> On Tue 20-04-21 09:32:14, Christian König wrote:
>> On 20.04.21 at 09:04, Michal Hocko wrote:
>>> On Mon 19-04-21 18:37:13, Christian König wrote:
>>>> On 19.04.21 at 18:11, Michal Hocko wrote:
>>> [...]
>>>>> What I am trying to bring up with the NUMA side is that the same problem
>>>>> can happen on a per-node basis. Let's say that some user consumes an
>>>>> unexpectedly large amount of dma-buf on a certain node. This can lead to
>>>>> an observable performance impact on anybody allocating from that node
>>>>> and, even worse, cause an OOM for node-bound consumers. How do I find out
>>>>> that it was dma-buf that caused the problem?
>>>> Yes, that is the direction my thinking goes as well, but also even further.
>>>>
>>>> See, DMA-buf is also used to share device-local memory between processes,
>>>> in other words VRAM on graphics hardware.
>>>>
>>>> On my test system here I have 32GB of system memory and 16GB of VRAM. I can
>>>> use DMA-buf to allocate that 16GB of VRAM quite easily, which then shows up
>>>> under /proc/meminfo as used memory.
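For reference, the mechanism the v4 patch proposes is, as far as I can see, a
single global counter maintained by the dma-buf core and printed by
/proc/meminfo. A minimal sketch with made-up identifier names, not the literal
patch:

    /* drivers/dma-buf/dma-buf.c -- illustrative sketch only */
    static atomic_long_t dma_buf_allocated_bytes;

    /* on export, account the buffer's size ... */
    atomic_long_add(exp_info->size, &dma_buf_allocated_bytes);

    /* ... and drop it again when the last reference goes away */
    atomic_long_sub(dmabuf->size, &dma_buf_allocated_bytes);

    /* fs/proc/meminfo.c then reports the total (show_val_kb takes pages) */
    show_val_kb(m, "DmaBufTotal:    ",
                atomic_long_read(&dma_buf_allocated_bytes) >> PAGE_SHIFT);

So the counter covers everything handed to dma_buf_export(), regardless of
whether the backing storage is system RAM or device memory.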
>>> This is something that would be really interesting in the changelog. I
>>> mean the expected and extreme memory consumption of this kind of memory.
>>> Ideally with some hints on what to do when the number is really high (e.g.
>>> mount debugfs and have a look here and there to check whether this is just
>>> too many users or an unexpected pattern to be reported).
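Side note: with CONFIG_DEBUG_FS enabled, that per-buffer information is
already available today. The dma-buf core exposes a listing of every exported
buffer with its size and exporter name, which should cover exactly this kind
of check:

    # mount -t debugfs none /sys/kernel/debug
    # cat /sys/kernel/debug/dma_buf/bufinfo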

>>>> But that isn't really system memory at all, it's just allocated device
>>>> memory.
>>> OK, that was not really clear to me. So this is not really accounted in
>>> MemTotal?

>> It depends. In a lot of embedded systems you only have system memory, and in
>> that case the value here is indeed really useful.

>>> If that is really the case then reporting it in the OOM report is
>>> completely pointless, and I am not even sure /proc/meminfo is the right
>>> interface either. It would just add more confusion, I am afraid.

>> I kind of agree. As I said, a DMA-buf could be backed by system memory or
>> device memory.
>>
>> In the case where it is backed by system memory it does make sense to report
>> this in an OOM dump.
>>
>> But only the exporting driver knows what the DMA-buf handle represents; the
>> framework just provides the common ground for inter-driver communication.
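To illustrate that point: the structure the framework itself tracks carries
the buffer's size, the exporter's name and the exporter's callbacks/private
data, but nothing that says where the backing memory actually lives. Abridged
from include/linux/dma-buf.h (comments mine):

    struct dma_buf {
            size_t size;                    /* size of the buffer */
            const struct dma_buf_ops *ops;  /* exporter's callbacks */
            const char *exp_name;           /* exporter's name */
            void *priv;                     /* exporter private data */
            /* ... plus list/locking/fence bookkeeping, but no field that
             * tells the core whether this is system RAM or VRAM ... */
    };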

>> See where I am heading?
> Yeah, totally. Thanks for pointing this out.
>
>> Suggestions on how to handle that?
> As I've pointed out in a previous reply, we do have an API to account
> per-node memory, but now that you have brought up that this is not something
> we account as regular memory, it doesn't really fit into that model. But
> maybe I am just confused.

Well, does that API also have a counter for memory used by device drivers?

If yes, then the device driver that exported the DMA-buf should probably use that API. If not, we might want to create one.
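Purely as an illustration of what I mean, something along these lines; the
names here are completely made up and no such API exists today:

    /* hypothetical sketch only -- nothing like this exists yet */
    enum driver_mem_type {
            DRIVER_MEM_SYSTEM,      /* backed by system RAM, relevant for OOM */
            DRIVER_MEM_DEVICE,      /* device-local memory, e.g. VRAM */
    };

    /* the exporting driver, as the only one that knows what the buffer is
     * backed by, would account it here at export and tear-down time */
    void driver_mem_account(enum driver_mem_type type, long bytes);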

I mean the author of this patch seems to have a use case where this is needed, and I also see that we have a hole in how we account memory.

Christian.
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel