On 6/12/24 18:24, Kent Overstreet wrote:
On Wed, Jun 12, 2024 at 06:15:57PM GMT, Bernd Schubert wrote:
On 6/12/24 17:55, Kent Overstreet wrote:
On Wed, Jun 12, 2024 at 03:40:14PM GMT, Bernd Schubert wrote:
On 6/12/24 16:19, Kent Overstreet wrote:
On Wed, Jun 12, 2024 at 03:53:42PM GMT, Bernd Schubert wrote:
I will definitely look at it this week, although I don't like the idea
of having a new kthread. We already have an application thread and the
fuse server thread; why do we need another one?
Ok, I hadn't found the fuse server thread - that should be fine.
The next thing I was going to look at is how you guys are using splice;
we want to get away from that too.
Well, Ming Lei is working on that for ublk_drv, and I guess that new approach
could be adapted to the current io-uring approach here as well.
It _probably_ wouldn't work with IORING_OP_READV/IORING_OP_WRITEV.
https://lore.gnuweeb.org/io-uring/20240511001214.173711-6-ming.lei@xxxxxxxxxx/T/
Brian was also saying the fuse virtio_fs code may be worth
investigating; maybe that could be adapted?
I need to check, but really, the majority of the new additions
is just setup, shutdown and sanity checks.
Sending/completing requests to/from the ring does not add that many new lines.
What I'm wondering is how read/write requests are handled. Are the data
payloads going in the same ringbuffer as the commands? That could work,
if the ringbuffer is appropriately sized, but alignment is an issue.
That is exactly the big discussion Miklos and I are having. Basically, in my
series another buffer is vmalloced, mmaped and then assigned to the ring entries.
Fuse meta headers and application payload go into that buffer, in both
kernel/userspace directions. io-uring itself only allows 80B, so only a
really small request would fit into it.
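
For illustration, a minimal sketch of that 80B limit - this assumes a recent
<linux/io_uring.h> uAPI header that has the uring_cmd cmd[] area, and the
hardcoded 128 is simply the SQE slot size under IORING_SETUP_SQE128:

#include <assert.h>
#include <stddef.h>
#include <linux/io_uring.h>

/*
 * With IORING_SETUP_SQE128 each submission entry is 128 bytes, and the
 * free-form command area at its tail is only ~80 of those - too small
 * for fuse headers plus payload, hence the separately mmaped buffer.
 */
static_assert(128 - offsetof(struct io_uring_sqe, cmd) == 80,
	      "SQE128 leaves ~80 bytes of inline command data");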
Well, the generic ringbuffer would lift that restriction.
Yeah, kind of. Instead of allocating the buffer in fuse, it would now be
allocated in that code. At least all that setup code would be moved out of fuse.
I will eventually get to your patches today.
Now we only need to convince Miklos that your ring is better ;)
Legacy /dev/fuse has an alignment issue, as the payload directly follows the
fuse header - that is intrinsically fixed in the ring patches.
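
As a rough sketch of that legacy layout (assuming <linux/fuse.h>; this is just
the classic read-from-/dev/fuse framing for a WRITE request, not code from the
patches):

#include <stdio.h>
#include <linux/fuse.h>

int main(void)
{
	/*
	 * On /dev/fuse a WRITE request arrives as fuse_in_header, then
	 * fuse_write_in, then the data - so the payload starts at a
	 * small, non-page-aligned offset (typically 80 bytes). The ring
	 * layout instead pads the headers so the payload begins on its
	 * own, well-aligned boundary.
	 */
	size_t payload_off = sizeof(struct fuse_in_header) +
			     sizeof(struct fuse_write_in);

	printf("WRITE payload offset in the /dev/fuse stream: %zu\n",
	       payload_off);
	return 0;
}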
*nod*
That's the big question: put the data inline (with potential alignment
hassles) or manage (and map) a separate data structure.
Maybe padding could be inserted to solve alignment?
Right now I have this struct:
struct fuse_ring_req {
	union {
		/* The first 4K are command data */
		char ring_header[FUSE_RING_HEADER_BUF_SIZE];

		struct {
			uint64_t flags;

			/* enum fuse_ring_buf_cmd */
			uint32_t in_out_arg_len;
			uint32_t padding;

			/* kernel fills in, reads out */
			union {
				struct fuse_in_header in;
				struct fuse_out_header out;
			};
		};
	};

	char in_out_arg[];
};
Data goes into in_out_arg, i.e. the headers are padded out by the union.
I actually wonder if FUSE_RING_HEADER_BUF_SIZE should be the page size
and not a fixed 4K.
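
For reference, a quick layout check of that padding - a standalone sketch
where the struct is repeated so it compiles on its own, and the 4096 for
FUSE_RING_HEADER_BUF_SIZE is an assumption rather than a value from the
patches:

#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <linux/fuse.h>

#define FUSE_RING_HEADER_BUF_SIZE 4096

struct fuse_ring_req {
	union {
		char ring_header[FUSE_RING_HEADER_BUF_SIZE];
		struct {
			uint64_t flags;
			uint32_t in_out_arg_len;
			uint32_t padding;
			union {
				struct fuse_in_header in;
				struct fuse_out_header out;
			};
		};
	};
	char in_out_arg[];
};

/*
 * The char array in the union pads the command fields, so the payload
 * in in_out_arg[] always starts at the same fixed, aligned offset.
 */
static_assert(offsetof(struct fuse_ring_req, in_out_arg) ==
	      FUSE_RING_HEADER_BUF_SIZE,
	      "payload starts right after the padded header area");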
I would make the commands variable sized, so that commands with no data
buffers don't need padding, and then when you do have a data command you
only pad out that specific command so that the data buffer starts on a
page boundary.
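
A rough sketch of what such a variable-sized command could look like - all
names here (fuse_ring_cmd, data_off, ...) are hypothetical, not from the
actual patches:

#include <stdint.h>

struct fuse_ring_cmd {
	uint32_t cmd_len;	/* total size of this command, incl. padding */
	uint32_t data_off;	/* 0 for commands without a payload; otherwise
				 * the page-aligned offset (from the start of
				 * this command) where the data buffer begins */
	uint32_t data_len;	/* payload length, 0 if none */
	uint32_t flags;
	char hdr[];		/* opcode-specific header, then padding up to
				 * data_off for data-carrying commands */
};

Commands without data would then only consume the fixed fields plus their
opcode-specific header, while a data-carrying command pays at most one page
of padding so its payload lands on a page boundary.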
The same buffer is used from kernel to userspace and the other way around
- it is attached to the ring entry. Either direction will always have
data; where would dynamic sizing be useful then?
Well, some "data", like the node id, don't need to be aligned - we could
save memory for that. I would still like to have some padding so that the
headers could be grown without any kind of compat issues, though almost
4K is probably too much for that.
Thanks for pointing it out, will improve it!
Cheers,
Bernd