On 10/31/24 15:42, Jens Axboe wrote:
On 10/31/24 9:25 AM, Pavel Begunkov wrote:
On 10/31/24 14:29, Jens Axboe wrote:
On 10/31/24 7:25 AM, Pavel Begunkov wrote:
On 10/30/24 02:43, Jens Axboe wrote:
On 10/29/24 8:03 PM, Ming Lei wrote:
On Tue, Oct 29, 2024 at 03:26:37PM -0600, Jens Axboe wrote:
On 10/29/24 2:06 PM, Jens Axboe wrote:
On 10/29/24 1:18 PM, Jens Axboe wrote:
...
+ node->buf = imu;
+ node->kbuf_fn = kbuf_fn;
+ return node;
Also this function needs to register the buffer to table with one
pre-defined buf index, then the following request can use it by
the way of io_prep_rw_fixed().
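Ming's suggestion amounts to something like the following userspace
sketch (hypothetical, simplified types; struct kbuf_node, register_kbuf()
and friends are made-up stand-ins for the kernel's real io_rsrc_node and
buffer table): the leasing side installs the node at a pre-agreed slot
index, and a later request resolves the buffer purely by that index,
io_prep_rw_fixed()-style, with the driver callback run at unregister.

```c
#include <stddef.h>
#include <stdlib.h>

#define TABLE_SLOTS 16

/* toy stand-in for a kernel io_rsrc_node leasing a driver buffer */
struct kbuf_node {
	void *buf;                  /* the leased buffer payload */
	void (*kbuf_fn)(void *buf); /* driver hook, run when the node dies */
};

/* toy stand-in for the registered buffer table */
static struct kbuf_node *buf_table[TABLE_SLOTS];

/* demo release hook: records that the driver got its buffer back */
static int kbuf_released;
static void demo_release(void *buf) { (void)buf; kbuf_released++; }
static char demo_payload[64];

/* lease a buffer into a pre-defined slot index */
static int register_kbuf(unsigned int index, void *buf,
			 void (*kbuf_fn)(void *))
{
	struct kbuf_node *node;

	if (index >= TABLE_SLOTS || buf_table[index])
		return -1;
	node = malloc(sizeof(*node));
	if (!node)
		return -1;
	node->buf = buf;
	node->kbuf_fn = kbuf_fn;
	buf_table[index] = node;
	return 0;
}

/* a later request resolves the buffer purely by index */
static void *lookup_kbuf(unsigned int index)
{
	if (index >= TABLE_SLOTS || !buf_table[index])
		return NULL;
	return buf_table[index]->buf;
}

/* unregister: hand the buffer back to the driver and free the node */
static void unregister_kbuf(unsigned int index)
{
	struct kbuf_node *node;

	if (index >= TABLE_SLOTS || !(node = buf_table[index]))
		return;
	if (node->kbuf_fn)
		node->kbuf_fn(node->buf);
	free(node);
	buf_table[index] = NULL;
}
```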
It should not register it with the table, the whole point is to keep
this node only per-submission discoverable. If you're grabbing random
request pages, then it very much is a bit finicky
Registering it into the table has enough design and flexibility
merits: error handling, allowing any type of dependency between requests
by handling it in user space, etc.
Right, but it has to be a special table. See my lengthier reply to Ming.
Mind pointing the specific part? I read through the thread and didn't
see why it _has_ to be a special table.
And by "special" I assume you mean the property of it being cleaned up
/ flushed by the end of submission / syscall, right?
Right, that's all I mean, special in the sense that it isn't persistent.
Nothing special about it otherwise. Maybe "separate table from
buf_table" is a more accurate way to describe it.
The initial POC did install it into a table, it's just a one-slot table,
By "table" I actually mean anything that survives beyond the current
syscall / submission and potentially can be used by requests submitted
with another syscall.
Obviously there's nothing special about the above mentioned table in the
sense that it's a normal table, it just doesn't survive beyond the
current submission. The reason why I like that approach is that it
doesn't leave potentially iffy data in a table beyond that submission.
If we don't own this data and it's merely borrowed from someone else,
then special care must be taken around it.
So you're not trying to prevent some potential malicious use
but rather making it a bit nicer for buggy users. I don't think I
care much about that aspect and would sacrifice the property if it
gives us anything good anywhere else.
io_submit_state. I think the right approach is to have an actual struct
io_rsrc_data local_table in the ctx, with refs put at the end of submit.
Same kind of concept, just allows for more entries (potentially), with
the same requirement that nodes get put when submit ends. IOW, requests
need to find it within the same submit.
Obviously you would not NEED to do that, but if the use case is grabbing
bvecs out of a request, then it very much should not be discoverable
past the initial assignments within that submit scope.
and needs to be of
limited scope.
And I don't think we can force it, neither by limiting exposure to
submission only nor with Ming's group-based approach. The user can
always queue a request that will never complete and/or by using
DEFER_TASKRUN and just not letting it run. In this sense it might be
dangerous to block requests of an average system shared block device,
but if it's fine with ublk it sounds like it should be fine for any of
the aforementioned approaches.
As long as the resource remains valid until the last put of the node,
then it should be OK. Yes the application can mess things up in terms of
It should be fine in terms of buffers staying alive. The "dangerous"
part I mentioned is about abuse of a shared resource, e.g. one
container locking up all requests of a bdev so that another container
can't do any IO, maybe even with an fs on top. Nevertheless, it's ublk;
I don't think we need to be too concerned about that, since io_uring is
on the other side from normal user space.
If you leave it in the table, then you can no longer rely on the final
put calling back into the driver. Maybe this is fine, but then it needs
some other mechanism for this.
Not sure I follow. The ->kbuf_fn is set by the driver, right?
It'll always be called once the node is destroyed, in this sense
the final destination is always the driver that leased the buffer.
Or do you mean the final rsrc_node put? Not sure how that works
considering requests can complete inside the submission as well
as outlive it with the node reference.
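The lifetime question above can be made concrete with a refcount sketch
(made-up names; the kernel's io_rsrc_node refcounting differs in
detail): the submission path holds one reference, each request takes
its own, and ->kbuf_fn runs exactly once on the final put, whether that
put happens inside the submission or after the request outlives it.

```c
#include <stdlib.h>

/* toy refcounted node; the real io_rsrc_node differs in detail */
struct kbuf_node {
	int refs;
	void *buf;
	void (*kbuf_fn)(void *buf); /* driver's release hook */
};

/* demo hook: counts how many times the buffer went back to the driver */
static int returned_to_driver;
static void demo_release(void *buf) { (void)buf; returned_to_driver++; }
static char demo_buf[32];

static struct kbuf_node *node_alloc(void *buf, void (*kbuf_fn)(void *))
{
	struct kbuf_node *node = malloc(sizeof(*node));

	if (!node)
		return NULL;
	node->refs = 1;            /* the submission path's reference */
	node->buf = buf;
	node->kbuf_fn = kbuf_fn;
	return node;
}

/* a request grabbing its own reference when it's issued */
static void node_get(struct kbuf_node *node)
{
	node->refs++;
}

/*
 * The final put hands the buffer back to the driver exactly once, no
 * matter where it happens. Returns 1 when the node was destroyed.
 */
static int node_put(struct kbuf_node *node)
{
	if (--node->refs)
		return 0;
	if (node->kbuf_fn)
		node->kbuf_fn(node->buf);
	free(node);
	return 1;
}
```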
latency if it uses one of these bufs for eg a read on a pipe that never
gets any data, but the data will remain valid regardless. And that's
very much a "doctor it hurts when I..." case, it should not cause any
Right, I care about malicious abuse when it can affect other users,
break isolation / fairness, etc. I'm saying that there is no
difference between all the approaches in this aspect, and if so
it should also be perfectly OK from the kernel's perspective to allow
leaving a buffer in the table long term. If the user wants to screw
things up and doesn't remove the buffer, that's the user's choice to
shoot itself in the foot.
From this angle, I look at the auto removal you add not as a
security / etc. concern, but just as a QoL / performance feature so
that the user doesn't need to remove the buffer by hand.
FWIW, instead of having another table, we can just mark a sub range
of the main buffer table to be cleared every time after submission,
just like we separate auto slot allocation with ranges.
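That sub-range idea could look roughly like the following sketch
(made-up names such as set_volatile_range() and flush_volatile_range(),
mirroring how auto slot allocation already uses ranges): the whole
table is one normal buffer table, but slots inside
[volatile_start, volatile_end) are cleared after every submission while
the rest persist.

```c
#include <stddef.h>

#define TABLE_SIZE 16

/* one normal buffer table; no separate table type needed */
static void *buf_table[TABLE_SIZE];

/* slots in [volatile_start, volatile_end) don't survive a submission */
static unsigned int volatile_start, volatile_end;

static void set_volatile_range(unsigned int start, unsigned int end)
{
	volatile_start = start;
	volatile_end = end;
}

static int table_install(unsigned int index, void *buf)
{
	if (index >= TABLE_SIZE)
		return -1;
	buf_table[index] = buf;
	return 0;
}

static void *table_lookup(unsigned int index)
{
	return index < TABLE_SIZE ? buf_table[index] : NULL;
}

/* run after every submission: only the marked range is flushed */
static void flush_volatile_range(void)
{
	for (unsigned int i = volatile_start; i < volatile_end; i++)
		buf_table[i] = NULL;
}
```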
I did consider that idea too, mainly from the perspective of then not
needing any kind of special OP or OP support to grab one of these
buffers, it'd just use the normal table but in a separate range. Doesn't
feel super clean, and does require some odd setup. Realistically,
I feel like we don't even need to differentiate it from normal
reg buffers in how it's used by other opcodes; cleaning the table
is just a feature, I'd even argue an optional one.
applications probably use one or the other and not combined, so
perhaps it's fine and the range is just the normal range. If they do mix
the two, then yeah they would want to use separate ranges for them.
Honestly don't care too deeply about that implementation detail, I care
more about having these buffers be io_rsrc_node and using the general
infrastructure for them. If we have to add IORING_RSRC_KBUFFER for them
and a callback + data field to io_rsrc_node, that's still a much better
Right, and I don't think it's a problem at all; for most
users, destroying a resource is a cold path anyway, and apart from this
zc proposal nobody registers a file/buffer just for one request.
approach than having some other intermediate type which basically does
the same thing, except it needs new fields to store it and new helpers
to alloc/put it.
--
Pavel Begunkov