Re: [PATCH net-next v3 3/3] page_pool: fix IOMMU crash when driver has already unbound


 



On 2024-11-06 10:56 am, Yunsheng Lin wrote:
+cc Christoph & Marek

On 2024/11/6 4:11, Jesper Dangaard Brouer wrote:

...


I am not sure I understand the reasoning behind the above suggestion to 'wait
and see if this actually turns out to be a problem', when we already know there
are some cases that need cache kicking/flushing for the waiting to work, that such
kicking/flushing may not be easy and may take an indefinite time, and that there
might be other cases needing kicking/flushing that we don't know about yet.

Is there any reason not to consider recording the inflight pages, so that unmapping
can be done for them before the driver is unbound, supposing a dynamic number of
inflight pages can be supported?

IOW, is there any reason you and Jesper are taking it as axiomatic that recording the
inflight pages is bad, supposing an unlimited number of inflight pages can be supported
and the recording can be done with minimal performance overhead?
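
To make the proposal concrete, here is a rough, purely illustrative sketch of what
'recording the inflight pages' could look like; the tracker structure and the helper
names are hypothetical and not taken from the actual patch:

/* Hypothetical sketch only: remember which pages still hold a DMA mapping so
 * they can be unmapped before the device/driver goes away.  The xarray and
 * the helper names below are made up for illustration.
 */
#include <linux/xarray.h>
#include <linux/dma-mapping.h>
#include <net/page_pool/helpers.h>

struct pp_inflight_tracker {
	struct xarray mapped;		/* id -> page with a live DMA mapping */
};

/* Called from the unbind/destroy path, while pool->p.dev is still usable. */
static void pp_inflight_unmap_all(struct page_pool *pool,
				  struct pp_inflight_tracker *t)
{
	struct page *page;
	unsigned long id;

	xa_for_each(&t->mapped, id, page) {
		dma_unmap_page_attrs(pool->p.dev,
				     page_pool_get_dma_addr(page),
				     PAGE_SIZE << pool->p.order,
				     pool->p.dma_dir,
				     DMA_ATTR_SKIP_CPU_SYNC);
		xa_erase(&t->mapped, id);
	}
}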

Well, page pool is a memory allocator, and it already has a mechanism to
handle returning of memory to it. You're proposing to add a second,
orthogonal, mechanism to do this, one that adds both overhead and

I would call it a replacement/improvement for the old one rather than 'a second,
orthogonal' mechanism, as the old one doesn't really exist after this patch.
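
For reference, the existing alloc/return cycle being referred to here, in rough
driver-side form (the rx helper names are made up for illustration):

/* Rough driver-side sketch of the recycle mechanism page_pool already has:
 * pages are allocated from the pool and handed back to it, and the pool does
 * the dma_unmap when a page finally leaves the pool.
 */
#include <net/page_pool/helpers.h>

static struct page *rx_refill_one(struct page_pool *pool)
{
	/* allocation path: maps the page for DMA when PP_FLAG_DMA_MAP is set */
	return page_pool_dev_alloc_pages(pool);
}

static void rx_recycle_one(struct page_pool *pool, struct page *page)
{
	/* return path: the page is recycled, or unmapped and freed, by the pool */
	page_pool_put_full_page(pool, page, false);
}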


Yes, you are proposing a very radical change to the page_pool design.
And this is being proposed as a fix patch for an IOMMU issue.

It is a very radical change that page_pool needs to keep track of *ALL* in-flight pages.

I agree that it is a radical change; that is why it is targeting the net-next
tree instead of the net tree, even though there is a Fixes tag for it.

If there is a proper and non-radical way to fix that, I would prefer the
non-radical way too.


The DMA issue is a life-time issue of the DMA object associated with the
struct device.  Then, why are you not looking at extending the life-time

It seems it is not really about the life-time of the DMA object being associated with
the life-time of the 'struct device'; rather, it seems to be about the life-time of the
DMA API being associated with the life-time of the driver bound to the 'struct device',
according to the opinion of the experts from the IOMMU/DMA subsystem in [1] & [2].

There is no "DMA object". The DMA API expects to be called with a valid device bound to a driver. There are parts in many different places all built around that expectation to varying degrees. Looking again, it seems dma_debug_device_change() has existed for way longer than the page_pool code, so frankly I'm a little surprised that this case is only coming up now in this context...

Even if one tries to handwave past that with a bogus argument that technically these DMA mappings belong to the subsystem rather than the driver itself, it is clearly unrealistic to imagine that once a device is torn down by device_del() it's still valid for anything. In fact, before even that point, it is explicitly documented that a device which is merely offlined prior to potential removal "cannot be used for any purpose", per Documentation/ABI/testing/sysfs-devices-online.

Holding a refcount so that the memory backing the struct device can still be accessed without a literal use-after-free does not represent the device being conceptually valid in any API-level sense. Even if the device isn't removed, as soon as its driver is unbound its DMA ops can change; the driver could then be re-bound, and the device valid for *new* DMA mappings again, but it's still bogus to attempt to unmap outstanding old mappings through the new ops (which is just as likely to throw an error/crash/corrupt memory/whatever). The page pool DMA mapping design is just fundamentally incorrect with respect to the device/driver model lifecycle.
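
To spell out the pairing rule being described, a schematic sketch (not taken from page_pool or any real driver):

/* A DMA mapping has to be torn down through the same device/ops that created
 * it, while the driver is still bound.  Schematic only.
 */
#include <linux/dma-mapping.h>

static dma_addr_t rx_map_page(struct device *dev, struct page *page)
{
	/* valid only while dev is bound and its current dma_ops are in force */
	return dma_map_page(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
}

static void rx_unmap_page(struct device *dev, dma_addr_t dma)
{
	/*
	 * If device_unbind_cleanup()/arch_teardown_dma_ops() has run in
	 * between (and a re-bind may have installed different dma_ops), this
	 * call is no longer paired with the mapping above and may throw an
	 * error, crash, or corrupt memory, as described above.
	 */
	dma_unmap_page(dev, dma, PAGE_SIZE, DMA_FROM_DEVICE);
}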

I am not sure what the reasoning behind the above is, but the implementation seems
to bear it out, as mentioned in [3]:
__device_release_driver -> device_unbind_cleanup -> arch_teardown_dma_ops

1. https://lkml.org/lkml/2024/8/6/632
2. https://lore.kernel.org/all/20240923175226.GC9634@xxxxxxxx/
3. https://lkml.org/lkml/2024/10/15/686

of the DMA object, or at least detect when the DMA object goes away, such
that we can change a setting in page_pool to stop calling DMA unmap for
the pages in-flight once they get returned (which we have an existing
mechanism for).
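
As a rough illustration of that suggestion, a hypothetical sketch (the pp_dma_state
flag and the place it would be set from are made up; how to get the "DMA object is
going away" notification is exactly the open question):

/* Hypothetical sketch of the "stop calling DMA unmap once the DMA object is
 * gone" idea: flip a flag when the device's DMA ops are about to be torn
 * down, and have the existing return path skip the unmap.
 */
#include <linux/dma-mapping.h>
#include <net/page_pool/helpers.h>

struct pp_dma_state {
	bool defunct;	/* set once the device's DMA ops are going away */
};

static void pp_dma_mark_defunct(struct pp_dma_state *st)
{
	/* would need to be driven by some notification hooked near
	 * device_unbind_cleanup()/arch_teardown_dma_ops() */
	WRITE_ONCE(st->defunct, true);
}

static void pp_dma_maybe_unmap(struct page_pool *pool, struct pp_dma_state *st,
			       struct page *page)
{
	if (READ_ONCE(st->defunct))
		return;		/* DMA object gone: skip the unmap */

	dma_unmap_page_attrs(pool->p.dev, page_pool_get_dma_addr(page),
			     PAGE_SIZE << pool->p.order, pool->p.dma_dir,
			     DMA_ATTR_SKIP_CPU_SYNC);
}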

To be honest, I was mostly relying on the opinion of the experts from the
IOMMU/DMA subsystem for correct DMA API usage, as mentioned above.
So I am not sure whether skipping DMA unmapping for the inflight pages is
correct DMA API usage.
If it is, how do we detect when DMA unmapping can be skipped?

Once page_pool has allowed a driver to unbind from a device without cleaning up all outstanding DMA mappings made via that device, then it has already leaked those mappings and the damage is done, regardless of whether the effects are visible yet. If you'd really rather play whack-a-mole trying to paper over secondary symptoms of that issue than actually fix it, then fine, but don't expect any driver core/DMA API changes to be acceptable for that purpose. However, if people are hitting those symptoms now, then I'd imagine they're eventually going to come back asking about the ones which can't be papered over, like dma-debug reporting the leaks, or just why their system ends up burning gigabytes on IOMMU pagetables and IOVA kmem_caches.

Thanks,
Robin.



