Re: [PATCH RFC v2 00/19] fuse: fuse-over-io-uring

On 6/12/24 01:35, Kent Overstreet wrote:
> On Tue, Jun 11, 2024 at 07:37:30PM GMT, Bernd Schubert wrote:
>> On 6/11/24 17:35, Miklos Szeredi wrote:
>>> On Tue, 11 Jun 2024 at 12:26, Bernd Schubert <bernd.schubert@xxxxxxxxxxx> wrote:

>>>> Secondly, with IORING_OP_URING_CMD we already have only a single command
>>>> to submit requests and fetch the next one - half of the system calls.

>>>> Wouldn't IORING_OP_READV/IORING_OP_WRITEV be just this approach?
>>>> https://github.com/uroni/fuseuring?
>>>> I.e. it hooks into the existing fuse code and just changes from
>>>> read()/write() of /dev/fuse to io-uring of /dev/fuse, with the
>>>> disadvantage of zero control over which ring/queue and which ring
>>>> entry handles the request.

>>> Unlike system calls, io_uring ops should have very little overhead.
>>> That's one of the main selling points of io_uring (as described in the
>>> io_uring(7) man page).

>>> So I don't think it matters to performance whether there's a combined
>>> WRITEV + READV (or COMMIT + FETCH) op or separate ops.

>> This has to be proven with performance numbers, and it is by no means
>> what I'm seeing. How should io-uring improve performance if you have
>> the same number of system calls?

>> As I see it (@Jens or @Pavel or anyone else please correct me if I'm
>> wrong), the advantage of io-uring comes when there is no syscall
>> overhead at all - either you have a ring with multiple entries and one
>> side operates on multiple entries at once, or you have polling and no
>> syscall overhead either. We cannot afford cpu-intensive polling - that
>> is out of the question; besides that, I had even tried SQPOLL and it
>> made things worse (that is actually where my idea about application
>> polling comes from).
>> As I see it, for sync blocking calls (like meta operations) with one
>> entry in the queue, you would get no advantage with
>> IORING_OP_READV/IORING_OP_WRITEV - io-uring has to do two system
>> calls: one to get the request from the kernel to userspace and another
>> to send the result from userspace back to the kernel. Why should
>> io-uring be faster there?
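
To spell out the two transitions I mean above, here is a minimal
liburing sketch of serving one synchronous request over /dev/fuse with
IORING_OP_READV/IORING_OP_WRITEV (setup, buffers and error handling
omitted, names made up; an illustration, not code from this series):

	struct io_uring_sqe *sqe;

	/* fetch the next fuse request from the kernel */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_readv(sqe, fuse_fd, &req_iov, 1, 0);
	io_uring_submit_and_wait(&ring, 1);	/* system call #1 */

	/* ... let the fuse server process the request ... */

	/* send the reply back to the kernel */
	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_writev(sqe, fuse_fd, &reply_iov, 1, 0);
	io_uring_submit_and_wait(&ring, 1);	/* system call #2 */

That is the same read()/write() pattern of /dev/fuse, just spelled as
ring ops.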

>> And this is exactly what I saw in my testing - io-uring for meta
>> requests (i.e. without a large request queue and *without* core
>> affinity) makes meta operations even slower than /dev/fuse.

>> For anything that allows a large ring queue and where either side
>> (kernel or userspace) gets to process multiple ring entries, system
>> call overhead is divided by the queue size. But for DIO or meta
>> operations that is hard to reach.
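
(For contrast, the case where the ring does help - one submission
syscall covering a whole batch; again just a liburing sketch with
made-up names:

	/* queue depth QD: one io_uring_enter() covers QD requests,
	 * so per-request syscall overhead drops to roughly 1/QD
	 */
	for (int i = 0; i < QD; i++) {
		sqe = io_uring_get_sqe(&ring);
		io_uring_prep_readv(sqe, fd, &iovs[i], 1, offsets[i]);
		sqe->user_data = i;
	}
	io_uring_submit_and_wait(&ring, QD);

With QD=32 that is 1/32 of a submit syscall per request - but sync meta
operations never have 32 requests in flight.)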

>> Also, if you are using IORING_OP_READV/IORING_OP_WRITEV, nothing would
>> change on the fuse kernel side? I.e. IOs would still go via
>> fuse_dev_read()? I.e. the request would not encode which queue it
>> belongs to?

> Want to try out my new ringbuffer syscall?

> I haven't dug far into the fuse protocol or /dev/fuse code yet, only
> skimmed. But using it to replace the read/write syscall overhead should
> be straightforward; you'll want to spin up a kthread for responding to
> requests.

I will definitely look at it this week, although I don't like the idea
of having a new kthread. We already have the application thread and the
fuse server thread; why do we need another one?
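
With IORING_OP_URING_CMD the existing fuse server thread can drive the
ring itself: in the steady state a single SQE both commits the previous
reply and fetches the next request. Roughly like this - opcode and
payload names here are illustrative only, not necessarily the uAPI of
this series:

	sqe = io_uring_get_sqe(&ring);
	io_uring_prep_rw(IORING_OP_URING_CMD, sqe, fuse_dev_fd,
			 NULL, 0, 0);
	sqe->cmd_op = FUSE_URING_REQ_COMMIT_AND_FETCH; /* illustrative */
	/* request/reply payload lives in the sqe->cmd area (or an
	 * mmap'ed buffer), no extra kthread needed to shuffle it */
	io_uring_submit(&ring);

The server thread then waits for the CQE carrying the next request,
handles it and posts the next commit-and-fetch - one ring transition
per request instead of two.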


> The next thing I was going to look at is how you guys are using splice;
> we want to get away from that too.

Well, Ming Lei is working on that for ublk_drv, and I guess that new
approach could also be adapted onto the current io-uring way of doing
things. It _probably_ wouldn't work with IORING_OP_READV/IORING_OP_WRITEV.

https://lore.gnuweeb.org/io-uring/20240511001214.173711-6-ming.lei@xxxxxxxxxx/T/


> Brian was also saying the fuse virtio_fs code may be worth
> investigating, maybe that could be adapted?

I need to check, but really, the majority of the new additions is just
setup, shutdown and sanity checks. Sending/completing requests to/from
the ring does not account for that many new lines.


Thanks,
Bernd