Re: [Linaro-mm-sig] [PATCH 2/3] udmabuf: Sync buffer mappings for attached devices

On 26.01.24 at 18:24, Andrew Davis wrote:
On 1/25/24 2:30 PM, Daniel Vetter wrote:
On Tue, Jan 23, 2024 at 04:12:26PM -0600, Andrew Davis wrote:
Currently this driver creates an SGT using the CPU as the
target device, then performs the dma_sync operations against
that SGT. This is backwards from how DMA-BUFs are supposed to
behave. It may have worked for the case where these buffers were
handed only back to the same CPU that produced them, as in the
QEMU case, and only then because the original author also had the
dma_sync operations backwards, syncing for the "device" on
begin_cpu. This was noticed and "fixed" in this patch[0].

That then meant we were syncing from the CPU to the CPU using
a pseudo-device "miscdevice", which in turn caused another issue
because the miscdevice did not have a proper DMA mask (and why
should it, the CPU is not a DMA device). The fix for that was an
even more egregious hack[1] that declares the CPU coherent with
itself and able to access its own memory space.

Unwind all this and perform the correct action by doing the dma_sync
operations for each device currently attached to the backing buffer.

[0] commit 1ffe09590121 ("udmabuf: fix dma-buf cpu access")
[1] commit 9e9fa6a9198b ("udmabuf: Set the DMA mask for the udmabuf device (v2)")

Signed-off-by: Andrew Davis <afd@xxxxxx>

So yeah, the above hacks are terrible, but I don't think this is
better. What you're doing now is potentially flushing multiple
times, so if you have a lot of importers with live mappings this
is a performance regression.

I'd take lower performing but correct than fast and broken. :)

Syncing for the CPU/device is about making sure the CPU/device can
see the data produced by the other. Some devices might be
dma-coherent and syncing for them would be a NOP, but we can't know
that here in this driver. Say we have two attached devices, one
that is cache coherent and one that isn't. If we only sync for the
first attached device, that sync is turned into a NOP and we never
do the flush the second device needed.

The same is true for devices behind an IOMMU or with an L3 cache
when syncing in the other direction for the CPU. So we have to sync
for all attached devices to make sure even the lowest common
denominator device gets synced. It is up to the DMA-API layer to
decide which syncs actually need to do something. If all attached
devices are coherent then every sync is a NOP and we pay no
performance penalty.
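
(For reference, a simplified sketch of why a coherent device's sync
is free. This is not the actual kernel implementation; the
dev_is_dma_coherent() check is real, but the arch helper named here
is a hypothetical stand-in for the platform cache-maintenance code:)

    /* Simplified sketch, not the real DMA core: each sync is
     * resolved per device, so cache maintenance is skipped
     * entirely for devices marked cache coherent.
     */
    static void sketch_dma_sync_sgtable_for_cpu(struct device *dev,
                                                struct sg_table *sgt,
                                                enum dma_data_direction dir)
    {
        if (dev_is_dma_coherent(dev))
            return;  /* coherent device: the sync is a NOP */

        /* hypothetical helper standing in for the arch-specific
         * flush/invalidate of the scatterlist's cache lines */
        arch_sync_sg_for_cpu(sgt->sgl, sgt->orig_nents, dir);
    }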


It's probably time to bite the bullet and teach the dma-api about
flushing for multiple devices. Or find some way to figure out which
single device we can pick that gives us the right amount of
flushing.


That seems like a constraint-solving micro-optimization. The DMA-API
layer would have to track which buffers have already been flushed
from the CPU cache, and also that nothing has been written into
those caches since that point; only then could it skip the flush.
But that is already the point of the dirty bit in the caches
themselves: cleaning already-clean cache lines is essentially free
in hardware, and so is invalidating lines, which is just flipping a
bit.

Well, to separate the functionality a bit: what the DMA-API should provide is an abstraction of how the platform flushes and invalidates caches, plus the knowledge of which devices use which caches and what needs to be flushed/invalidated to allow access between devices and the CPU.

In other words what's necessary is the following:
1. sync device to cpu
2. sync cpu to device
3. sync device to device

1 and 2 are already present and have been implemented for years, but 3 is missing, together with some of the infrastructure necessary to actually implement it. E.g. we don't know which devices write into which caches etc... A purely hypothetical signature for case 3 is sketched below.
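
(To make case 3 concrete, here is a purely hypothetical signature;
nothing like this exists in the DMA-API today:)

    /* Hypothetical, does not exist: sync a mapping so that data
     * written by @from becomes visible to @to. Platform code would
     * flush the writer's cache domain, invalidate the reader's, and
     * do nothing when the two devices already snoop each other.
     */
    void dma_sync_sgtable_for_device_to_device(struct device *from,
                                               struct device *to,
                                               struct sg_table *sgt,
                                               enum dma_data_direction dir);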

On top of this we need the functionality to track who has accessed which piece of data and which DMA-API functions need to be called to make things work for a specific use case. But that is then the job of DMA-buf, the I/O layer, drivers etc. and should not live in the DMA-API.

I also strongly think that putting the SWIOTLB bounce buffer functionality into the DMA-API was not the right choice.

Regards,
Christian.


Andrew

Cheers, Sima
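
(Editor's note: the per-attachment bookkeeping that the new
begin/end paths iterate over is not shown in this hunk and is
presumably introduced earlier in the series. A minimal sketch, with
field names inferred from their use in the diff below:)

    /* Sketch only: assumed to be added earlier in this series;
     * fields inferred from the sync loops in the patch.
     */
    struct udmabuf_attachment {
        struct device *dev;      /* the importing device */
        struct sg_table *table;  /* that device's mapping of the buffer */
        struct list_head list;   /* linked on udmabuf->attachments */
    };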

---
 drivers/dma-buf/udmabuf.c | 41 +++++++++++++++------------------------
 1 file changed, 16 insertions(+), 25 deletions(-)

diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
index 3a23f0a7d112a..ab6764322523c 100644
--- a/drivers/dma-buf/udmabuf.c
+++ b/drivers/dma-buf/udmabuf.c
@@ -26,8 +26,6 @@ MODULE_PARM_DESC(size_limit_mb, "Max size of a dmabuf, in megabytes. Default is
 struct udmabuf {
     pgoff_t pagecount;
     struct page **pages;
-    struct sg_table *sg;
-    struct miscdevice *device;
     struct list_head attachments;
     struct mutex lock;
 };
@@ -169,12 +167,8 @@ static void unmap_udmabuf(struct dma_buf_attachment *at,
 static void release_udmabuf(struct dma_buf *buf)
 {
     struct udmabuf *ubuf = buf->priv;
-    struct device *dev = ubuf->device->this_device;
     pgoff_t pg;
 
-    if (ubuf->sg)
-        put_sg_table(dev, ubuf->sg, DMA_BIDIRECTIONAL);
-
     for (pg = 0; pg < ubuf->pagecount; pg++)
         put_page(ubuf->pages[pg]);
     kfree(ubuf->pages);
@@ -185,33 +179,31 @@ static int begin_cpu_udmabuf(struct dma_buf *buf,
                  enum dma_data_direction direction)
 {
     struct udmabuf *ubuf = buf->priv;
-    struct device *dev = ubuf->device->this_device;
-    int ret = 0;
-
-    if (!ubuf->sg) {
-        ubuf->sg = get_sg_table(dev, buf, direction);
-        if (IS_ERR(ubuf->sg)) {
-            ret = PTR_ERR(ubuf->sg);
-            ubuf->sg = NULL;
-        }
-    } else {
-        dma_sync_sg_for_cpu(dev, ubuf->sg->sgl, ubuf->sg->nents,
-                    direction);
-    }
+    struct udmabuf_attachment *a;
 
-    return ret;
+    mutex_lock(&ubuf->lock);
+
+    list_for_each_entry(a, &ubuf->attachments, list)
+        dma_sync_sgtable_for_cpu(a->dev, a->table, direction);
+
+    mutex_unlock(&ubuf->lock);
+
+    return 0;
 }
 
 static int end_cpu_udmabuf(struct dma_buf *buf,
                enum dma_data_direction direction)
 {
     struct udmabuf *ubuf = buf->priv;
-    struct device *dev = ubuf->device->this_device;
+    struct udmabuf_attachment *a;
 
-    if (!ubuf->sg)
-        return -EINVAL;
+    mutex_lock(&ubuf->lock);
+
+    list_for_each_entry(a, &ubuf->attachments, list)
+        dma_sync_sgtable_for_device(a->dev, a->table, direction);
+
+    mutex_unlock(&ubuf->lock);
 
-    dma_sync_sg_for_device(dev, ubuf->sg->sgl, ubuf->sg->nents, direction);
     return 0;
 }
 
@@ -307,7 +299,6 @@ static long udmabuf_create(struct miscdevice *device,
     exp_info.priv = ubuf;
     exp_info.flags = O_RDWR;
 
-    ubuf->device = device;
     buf = dma_buf_export(&exp_info);
     if (IS_ERR(buf)) {
         ret = PTR_ERR(buf);
--
2.39.2

_______________________________________________
Linaro-mm-sig mailing list -- linaro-mm-sig@xxxxxxxxxxxxxxxx
To unsubscribe send an email to linaro-mm-sig-leave@xxxxxxxxxxxxxxxx
