On 9/17/22 11:44, Stefan Metzmacher wrote:
On 17.09.22 at 11:16, Pavel Begunkov wrote:
On 9/16/22 22:36, Stefan Metzmacher wrote:
Hi Pavel, hi Jens,
I did some initial testing with IORING_OP_SEND_ZC.
While reading the code I think I found a race that
can lead to IORING_CQE_F_MORE being missing even if
the net layer got references.
Hey Stefan,
Did you see some kind of buggy behaviour in userspace?
Apologies for the delay,
No, I was just reading the code and found it a bit confusing,
and couldn't prove that we don't have a problem with losing
a notif cqe.
If the network stack sends anything, it should return how many bytes
it queued for sending, otherwise there would be duplicated
packets / data on the other endpoint in userspace, and I
don't think any driver / lower layer would keep memory
after returning an error.
As I'm also working on a socket driver for smbdirect,
I already thought about how I could hook into
IORING_OP_SEND[MSG]_ZC, and for sendmsg I'd have
a loop sending individual fragments, each of which holds
a reference. But if I find a connection drop after the first one,
I'd return ECONNRESET or EPIPE in order to get faster recovery
instead of announcing a short write to the caller.
It doesn't sound right to me, but I don't know Samba well
enough to really have an opinion. In any case, I see how it may be
more robust if we always try to push a notification cqe.
Will you send a patch?
If we took my 5/5, we could also have a different
strategy to decide whether MORE/NOTIF is needed.
If notif->cqe.res is still 0 and io_notif_flush() drops
the last reference, we could go without MORE/NOTIF at all.
In all other cases we'd set MORE/NOTIF either at the end
of io_sendzc() or in the fail hook.
I had a similar optimisation, i.e. skipping the notif CQE when
io_notif_flush() in the submission path drops the last ref, but
killed it as it was completely useless: I didn't hit this path
even once, even with UDP, not to mention TCP.
In any case, I was looking at a slightly different problem, but
it should look much cleaner using the same approach, see
branch [1], and patch [3] for sendzc in particular.
[1] https://github.com/isilence/linux.git partial-fail
[2] https://github.com/isilence/linux/tree/io_uring/partial-fail
[3] https://github.com/isilence/linux/commit/acb4f9bf869e1c2542849e11d992a63d95f2b894
const struct io_op_def *def = &io_op_defs[req->opcode];

req_set_fail(req);
io_req_set_res(req, res, io_put_kbuf(req, IO_URING_F_UNLOCKED));
if (def->fail)
	def->fail(req);
io_req_complete_post(req);
This will lose req->cqe.flags, but the fail hook in general looks like a good idea.
I just don't like those sporadic changes all across core io_uring
code also adding some overhead.
And don't we care about the other failure cases where req->cqe.flags gets overwritten?
We don't usually carry them across ->issue handler boundaries,
e.g. directly do io_post_aux_cqe(res, IORING_CQE_F_MORE);
IORING_CQE_F_BUFFER is a bit trickier, but there is
special handling for this one and it wouldn't fit "set cflags
in advance" logic anyway.
iow, ->fail callback sounds good enough for now, we'll change
it later if needed.
--
Pavel Begunkov