On 8/8/22 03:15, Dave Chinner wrote:
> On Mon, Aug 08, 2022 at 02:13:41AM +0100, Matthew Wilcox wrote:
>> On Mon, Aug 08, 2022 at 10:21:24AM +1000, Dave Chinner wrote:
>>>> +#ifdef CONFIG_HAS_DMA
>>>> +	void *(*dma_map)(struct file *, struct bio_vec *, int);
>>>> +	void (*dma_unmap)(struct file *, void *);
>>>> +#endif
>>> This just smells wrong. Using a block layer specific construct as a
>>> primary file operation parameter shouts "layering violation" to me.
>> A bio_vec is also used for networking; it's in disguise as an skb_frag,
>> but it's there.
> Which is just as awful. Just because it's done somewhere else
> doesn't make it right.
>>> What we really need is a callout that returns the bdevs that the
>>> struct file is mapped to (one, or many), so the caller can then map
>>> the memory addresses to the block devices itself. The caller then
>>> needs to do a {file, offset, len} -> {bdev, sector, count}
>>> translation so the io_uring code can then use the correct bdev and
>>> dma mappings for the file offset that the user is doing IO to/from.
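Purely for the sake of discussion, such a callout could take roughly the
shape sketched below; the struct and method names are made up for
illustration and don't exist in the kernel:

/* Illustrative sketch only, not an existing interface. */
struct file_dma_extent {
	struct block_device	*bdev;	/* device backing this extent */
	sector_t		sector;	/* starting sector on that device */
	unsigned int		count;	/* length in sectors */
};

/*
 * Translate {file, offset, len} into the {bdev, sector, count} extents
 * backing that range, so the caller can pick the DMA mapping that
 * belongs to the right device for the file offset being accessed.
 */
int (*map_extents)(struct file *file, loff_t offset, size_t len,
		   struct file_dma_extent *ext, unsigned int nr_ext);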
>> I don't even know if what you're proposing is possible. Consider a
>> network filesystem which might transparently be moved from one network
>> interface to another. I don't even know if the filesystem would know
>> which network device is going to be used for the IO at the time of
>> IO submission.
> Sure, but nobody is suggesting we support direct DMA buffer mapping
> and reuse for network devices right now, whereas we have working
> code for block devices in front of us.
Networking is not so far away: with zerocopy tx landed, the next target
is peer-to-peer, i.e. transfers from device memory. It's nothing new and
was already tried out quite some time ago, but to be fair, it's not yet
as ready as this patchset. In any case, both will have to use common
infrastructure, which means we can't rely on struct block_device.
The first idea was to have a callback returning a struct device
pointer, failing when the file can have multiple devices or change them
on the fly. Networking already has a hook to assign a device to a
socket; we just need to make it immutable after the assignment.
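As a rough illustration of that first idea (the hook below is
hypothetical, not something in this patchset or in mainline):

/*
 * Illustrative only.  Return the one struct device that DMA for this
 * file will target, or an error if the file may span several devices
 * or the device may change later, in which case the caller falls back
 * to a plain registered buffer.
 */
struct device *(*get_dma_device)(struct file *file);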
From the userspace perspective, if the host memory mapping fails, the
buffer can be re-registered as a normal io_uring registered buffer with
no change in the API on the submission side.
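To sketch that from the userspace side: register_dma_buffer() below is
a placeholder for whatever the new registration call ends up being
named, while io_uring_register_buffers() is the existing liburing API
used as the fallback:

#include <sys/uio.h>
#include <liburing.h>

/* Sketch only; register_dma_buffer() is hypothetical. */
static int register_buf(struct io_uring *ring, int fd,
			void *buf, size_t len)
{
	struct iovec iov = { .iov_base = buf, .iov_len = len };

	/* Try to premap the buffer against the file's device. */
	if (register_dma_buffer(ring, fd, &iov, 1) == 0)
		return 0;

	/*
	 * Host memory mapping failed: fall back to a plain registered
	 * buffer.  Submissions (e.g. IORING_OP_READ_FIXED with the same
	 * buf_index) look exactly the same in both cases.
	 */
	return io_uring_register_buffers(ring, &iov, 1);
}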
I like the idea of reserving ranges in the API for future use, but as
I understand it, io_uring would need to do device lookups based on the
I/O offset, which doesn't sound fast, and I'm not convinced we want to
go that way now. It could work if the specified range covers only one
device, but it needs knowledge of how the file is chunked across
devices and doesn't work well when devices alternate every 4KB or so.
Another question is whether we want some kind of notion of device
groups, so that userspace doesn't have to register a buffer multiple
times when the mapping can be shared between files.
> What I want to see is broad-based generic block device based
> filesystem support, not niche functionality that can only work on a
> single type of block device. Network filesystems and devices are a
> *long* way from being able to do anything like this, so I don't see
> a need to cater for them at this point in time.
> When someone has a network device abstraction and network filesystem
> that can do direct data placement based on that device abstraction,
> then we can talk about the high level interface we should use to
> drive it....
>> I think a totally different model is needed where we can find out if
>> the bvec contains pages which are already mapped to the device, and map
>> them if they aren't. That also handles a DM case where extra devices
>> are hot-added to a RAID, for example.
> I cannot form a picture of what you are suggesting from such a brief
> description. Care to explain in more detail?
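If I read the suggestion right, it's a map-on-demand model: before
issuing IO, check whether the bvec's pages already have a DMA mapping
for the target device and create one if not. A minimal sketch, with
lookup_mapping()/store_mapping() invented purely for illustration:

static dma_addr_t map_bvec_page(struct device *dev, struct bio_vec *bv)
{
	dma_addr_t addr;

	/* Reuse an existing mapping of this page for this device. */
	addr = lookup_mapping(dev, bv->bv_page);
	if (addr == DMA_MAPPING_ERROR) {
		/*
		 * Not mapped for this device yet (e.g. a leg hot-added
		 * to a RAID): map it now and remember it for reuse.
		 */
		addr = dma_map_page(dev, bv->bv_page, 0, PAGE_SIZE,
				    DMA_BIDIRECTIONAL);
		if (addr == DMA_MAPPING_ERROR)
			return DMA_MAPPING_ERROR;
		store_mapping(dev, bv->bv_page, addr);
	}
	return addr + bv->bv_offset;
}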
--
Pavel Begunkov