Re: [PATCH v2 2/2] dma-buf: heaps: Map system heap pages as managed by linux vm

On 03.02.21 at 03:02, Suren Baghdasaryan wrote:
On Tue, Feb 2, 2021 at 5:39 PM Minchan Kim <minchan@xxxxxxxxxx> wrote:
On Tue, Feb 02, 2021 at 04:31:34PM -0800, Suren Baghdasaryan wrote:
Currently the system heap maps its buffers with the VM_PFNMAP flag using
remap_pfn_range. This results in such buffers not being accounted
for in PSS calculations, because the vm treats this memory as having no
page structs. Without page structs there are no counters representing
how many processes are mapping a page, and therefore PSS calculation
is impossible.
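
For illustration, a simplified sketch of why such mappings are invisible
to PSS, loosely modeled on the smaps accounting in fs/proc/task_mmu.c
(mem_size_stats is reduced to the one field used here):

#include <linux/mm.h>

#define PSS_SHIFT 12	/* fixed-point shift, as in the real smaps code */

struct mem_size_stats {
	u64 pss;
};

static void pss_account_pte(struct mem_size_stats *mss,
			    struct vm_area_struct *vma,
			    unsigned long addr, pte_t ptent)
{
	/*
	 * For VM_PFNMAP mappings there is no struct page behind the
	 * PTE, so vm_normal_page() returns NULL...
	 */
	struct page *page = vm_normal_page(vma, addr, ptent);

	if (!page)
		return;	/* ...and the page never reaches the PSS math. */

	/* PSS charges each mapper PAGE_SIZE divided by the mapcount. */
	mss->pss += (PAGE_SIZE << PSS_SHIFT) / page_mapcount(page);
}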
Historically, the ION driver mapped its buffers as VM_PFNMAP areas
due to memory carveouts that did not have page structs [1]. That
is no longer the case, and there was a desire to move away
from remap_pfn_range [2].
The dmabuf system heap design inherits this ION behavior and maps its
pages using remap_pfn_range even though the allocated pages are backed
by page structs.
Replace remap_pfn_range with vm_insert_page, following Laura's suggestion
in [1]. This would allow correct PSS calculation for dmabufs.

[1] https://driverdev-devel.linuxdriverproject.narkive.com/v0fJGpaD/using-ion-memory-for-direct-io
[2] http://driverdev.linuxdriverproject.org/pipermail/driverdev-devel/2018-October/127519.html
(sorry, could not find lore links for these discussions)
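
For illustration, a minimal sketch of the mmap path after this change
(buffer layout as in drivers/dma-buf/heaps/system_heap.c, simplified;
not the literal patch):

#include <linux/dma-buf.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>

static int system_heap_mmap(struct dma_buf *dmabuf,
			    struct vm_area_struct *vma)
{
	struct system_heap_buffer *buffer = dmabuf->priv;
	struct sg_table *table = &buffer->sg_table;
	unsigned long addr = vma->vm_start;
	struct sg_page_iter piter;
	int ret;

	for_each_sgtable_page(table, &piter, vma->vm_pgoff) {
		struct page *page = sg_page_iter_page(&piter);

		/*
		 * vm_insert_page() requires pages backed by struct page,
		 * which system heap pages are. Unlike remap_pfn_range()
		 * it does not mark the VMA as VM_PFNMAP, so the mapping
		 * is accounted in PSS.
		 */
		ret = vm_insert_page(vma, addr, page);
		if (ret)
			return ret;
		addr += PAGE_SIZE;
		if (addr >= vma->vm_end)
			return 0;
	}
	return 0;
}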

Suggested-by: Laura Abbott <labbott@xxxxxxxxxx>
Signed-off-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>
Reviewed-by: Minchan Kim <minchan@xxxxxxxxxx>

A note: this patch makes dmabuf system heap memory accounted in PSS,
so anyone who relies on the current numbers will see the bloat.
IIRC, there was some debate about whether PSS accounting for these
buffers is correct or not. If it turns out to be a problem, we need to
discuss how to solve it (maybe by respecting vma->vm_flags and
reintroducing remap_pfn_range for those users).
I did not see debates about not including *mapped* dmabufs in PSS
calculations. I remember people discussing how to account for dmabufs
referenced only by an FD, but that is a different discussion. If the
buffer is mapped into the address space of a process, then IMHO
including it in the PSS of that process is not controversial.

Well, I think it is. And to be honest, this doesn't look like a good idea to me, since it will eventually lead to double accounting of system heap DMA-bufs.

As discussed multiple times, it is illegal to use the struct page of a DMA-buf. This case is a bit special since it is the owner of the pages doing so, but I'm not sure whether this won't cause problems elsewhere as well.

A more appropriate solution would be to hold processes accountable for the resources they have allocated through device drivers.

Regards,
Christian.


