Re: [RFC 0/3] Add BPF for io_uring

On Mon, Nov 11, 2024 at 01:50:43AM +0000, Pavel Begunkov wrote:
> WARNING: it's an early prototype and could likely be broken and unsafe
> to run. Also, most probably it doesn't do the right thing from the
> modern BPF perspective, but that's fine as I want to get some numbers
> first and only then consult with BPF folks and brush it up.
> 
> A comeback of the io_uring BPF proposal, put on top of the new
> infrastructure. Instead of executing BPF as a new request type, it is
> now run in the io_uring waiting loop. The program is called to react
> every time we get a new event, like a queued task_work or an interrupt.
> Patch 3 adds some helpers the BPF program can use to interact with
> io_uring, like submitting new requests and looking at CQEs. It also
> controls when to return control back to user space by returning one of
> IOU_BPF_RET_{OK,STOP}, and sets the task_work batching size, i.e. how
> many CQEs to wait for before it is run again, via a kfunc helper. We
> need to be able to sleep to submit requests, hence only sleepable BPF
> is allowed.
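
To make sure I understand the model, a rough sketch of what such a
wait-loop program could look like (the section name, context type and
kfunc helper names below are my assumptions from the cover letter, not
the actual API in the patches):

```c
/* HYPOTHETICAL sketch only: SEC() name, struct iou_bpf_ctx and the
 * kfunc prototypes are guesses based on the cover letter text. */
#include <linux/types.h>
#include <bpf/bpf_helpers.h>

/* return codes described in the cover letter */
#define IOU_BPF_RET_OK   0  /* keep waiting, call me on the next event */
#define IOU_BPF_RET_STOP 1  /* return control back to user space */

struct iou_bpf_ctx;  /* opaque context passed in by io_uring (assumed) */

/* helpers sketched from the description: submit requests, look at
 * CQEs, set the task_work batching size (names assumed) */
extern int bpf_io_uring_submit_nop(struct iou_bpf_ctx *ctx) __ksym;
extern int bpf_io_uring_peek_cqe(struct iou_bpf_ctx *ctx) __ksym;
extern void bpf_io_uring_set_wait_nr(struct iou_bpf_ctx *ctx,
				     unsigned int nr) __ksym;

SEC("iou_loop.s")  /* ".s": sleepable, since submission may sleep */
int wait_loop_prog(struct iou_bpf_ctx *ctx)
{
	/* react to the event: reap a CQE, queue the next request */
	if (bpf_io_uring_peek_cqe(ctx) < 0)
		return IOU_BPF_RET_OK;	/* nothing yet, keep waiting */

	if (bpf_io_uring_submit_nop(ctx))
		return IOU_BPF_RET_STOP;/* submit failed, go to user */

	/* run me again after one more completion */
	bpf_io_uring_set_wait_nr(ctx, 1);
	return IOU_BPF_RET_OK;
}

char LICENSE[] SEC("license") = "GPL";
```

Is that roughly the intended shape of a program?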

I guess this may break the existing interface of io_uring_enter(),
or at least one flag should be added to tell the kernel that the wait
behavior will be overridden by the BPF prog.

Also, can you share how these ideal wait parameters would be calculated
by the BPF prog? And why isn't the io_uring kernel code capable of doing
that itself?

> 
> BPF can help to create arbitrary relations between requests from
> within the kernel

Can you explain in detail what `arbitrary relations` between requests
means here?

> and later help with tuning the wait loop batching.
> E.g. with minor extensions we can implement batch wait timeouts.
> We can also use it to let the user safely access internal resources
> and maybe even do a more elaborate request setup than the SQE allows.
> 
> The benchmark is primitive: the non-BPF baseline issues one link of
> 2 nop requests at a time and waits for it to complete. The BPF version
> runs them (2 * N requests) one by one. Numbers with mitigations on:
> 
> # nice -n -20 taskset -c 0 ./minimal 0 50000000
> type 2-LINK, requests to run 50000000
> sec 10, total (ms) 10314
> # nice -n -20 taskset -c 0 ./minimal 1 50000000
> type BPF, requests to run 50000000
> sec 6, total (ms) 6808
> 
> It needs to be better tested, especially with asynchronous requests
> like reads, and on other hardware. It can also be further optimised,
> e.g. we can avoid extra locking by taking it once for BPF/task_work_run.
> 
> The test (see examples-bpf/minimal[.bpf].c)
> https://github.com/isilence/liburing.git io_uring-bpf
> https://github.com/isilence/liburing/tree/io_uring-bpf

Looks like you pull the bpftool & libbpf code into the example; just
wondering, why not link the example against libbpf directly?


Thanks, 
Ming




