Well, usb-storage obviously seems to do it, and the block layer
does not prohibit it.
Also loop and nvme-tcp, and then I stopped looking.
Any objections to adding local_bh_disable() around it?
To me it seems like the whole IPI plus potentially softirq dance is
a little pointless when completing from process context.
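
For concreteness, here is a minimal sketch of what that could look like for a
driver completing from process context, assuming the recently added
blk_mq_complete_request_remote() helper; this only illustrates the idea, it is
not the actual patch:

#include <linux/blk-mq.h>
#include <linux/bottom_half.h>

/*
 * Sketch only: when the request is completed in the caller's (process)
 * context rather than via IPI/softirq, disable bottom halves around
 * ->complete() so the handler runs with BHs off, just as it would when
 * invoked from softirq.
 */
static void example_complete_request(struct request *rq)
{
	if (blk_mq_complete_request_remote(rq))
		return;	/* completed via IPI or softirq on another CPU */

	local_bh_disable();
	rq->q->mq_ops->complete(rq);
	local_bh_enable();
}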
I agree.
Sagi, any opinion on that from the nvme-tcp POV?
nvme-tcp should (almost) always complete from a context that matches
rq->mq_ctx->cpu, as the thread that processes incoming completions
(per hctx) should be affinitized to match it (unless CPUs come and go).
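
As a hypothetical illustration of that affinity (not the nvme-tcp source, all
example_* names are made up): the per-queue rx/completion work is pinned to one
CPU chosen from the CPUs mapped to the hw queue, so by the time a request is
completed the running CPU normally equals rq->mq_ctx->cpu.

#include <linux/blk-mq.h>
#include <linux/cpumask.h>
#include <linux/workqueue.h>

/* Hypothetical per-queue context, loosely modeled on the description above. */
struct example_queue {
	int io_cpu;			/* CPU the queue's work is pinned to */
	struct work_struct io_work;	/* handles rx and completes requests */
};

static struct workqueue_struct *example_wq;	/* assumed allocated elsewhere */

static void example_pin_queue(struct example_queue *queue,
			      struct blk_mq_hw_ctx *hctx)
{
	/* pick a CPU out of the set that maps to this hw queue */
	queue->io_cpu = cpumask_first(hctx->cpumask);
}

static void example_kick_queue(struct example_queue *queue)
{
	/* rx/completion processing always runs on the pinned CPU */
	queue_work_on(queue->io_cpu, example_wq, &queue->io_work);
}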
In which context?
Not sure what the question is.
But this is probably nr_hw_queues > 1?
Yes.
So for nvme-tcp I don't expect blk_mq_complete_need_ipi to return true
in normal operation. That leaves the teardowns+aborts, which aren't very
interesting here.
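
For reference, the check in question is roughly the following (paraphrased
from memory of the block-layer internals around this time, so the exact
conditions may differ): the IPI path is only taken when the completing CPU
neither matches nor shares a cache with rq->mq_ctx->cpu.

/* Rough paraphrase of blk_mq_complete_need_ipi(); block-layer internal. */
static bool example_complete_need_ipi(struct request *rq)
{
	int cpu = raw_smp_processor_id();

	if (!IS_ENABLED(CONFIG_SMP) ||
	    !test_bit(QUEUE_FLAG_SAME_COMP, &rq->q->queue_flags))
		return false;

	/* same CPU, or same cache domain unless forced: complete locally */
	if (cpu == rq->mq_ctx->cpu ||
	    (!test_bit(QUEUE_FLAG_SAME_FORCE, &rq->q->queue_flags) &&
	     cpus_share_cache(cpu, rq->mq_ctx->cpu)))
		return false;

	/* don't try to IPI an offline CPU */
	return cpu_online(rq->mq_ctx->cpu);
}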
The process context invocation is nvme_tcp_complete_timed_out().
Yes.
I would note that nvme-tcp does not go to sleep after completing every
I/O, like Sebastian indicated usb-storage does.
Having said that, today the network stack calls nvme_tcp_data_ready in
napi context (softirq), which in turn triggers the queue thread to
handle network rx (and complete the I/O). It has been measured recently
that running the rx processing directly in softirq saves some latency
(possible because the nvme-tcp rx context is non-blocking).
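
For context, the data_ready path being described looks roughly like this
(paraphrased, reusing the hypothetical example_queue/example_wq from the
earlier sketch; details of the real nvme_tcp_data_ready may differ): the socket
callback runs in napi/softirq context and only schedules the queue's io_work on
its pinned CPU, which then does the actual rx and completes the I/O.

#include <net/sock.h>
#include <linux/workqueue.h>

static void example_tcp_data_ready(struct sock *sk)
{
	struct example_queue *queue;

	read_lock_bh(&sk->sk_callback_lock);
	queue = sk->sk_user_data;
	if (likely(queue))
		/* napi/softirq context: just kick the per-queue io work */
		queue_work_on(queue->io_cpu, example_wq, &queue->io_work);
	read_unlock_bh(&sk->sk_callback_lock);
}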
So I'd think that patch #2 is unnecessary and would just add overhead for
nvme-tcp. Do note that the napi softirq CPU mapping depends on the RSS
steering, which is unlikely to match rq->mq_ctx->cpu, hence if completed
from napi context, nvme-tcp will probably always go to the IPI path.
But running it in softirq on the remote CPU would still allow other
packets to come in on the remote CPU (which would block the BLOCK softirq
if NET_RX is already running).
Not sure I understand your comment. If napi triggers on core X and we
complete from there, it will trigger an IPI to core Y, and there, with
patch #2, it will raise a softirq instead of calling ->complete directly, no?
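
To restate that contrast in code (conceptual only; this goes by how patch #2
is described in this thread, not by the patch itself, and the example_* names
are made up): the difference is what the IPI callback does once it lands on
core Y.

#include <linux/blk-mq.h>
#include <linux/interrupt.h>

/* Today: the IPI callback on core Y calls the driver's ->complete()
 * directly, so the completion runs in hard-irq context. */
static void example_remote_complete_direct(struct request *rq)
{
	rq->q->mq_ops->complete(rq);
}

/* As patch #2 is described above: the IPI callback on core Y only defers
 * the completion and raises BLOCK_SOFTIRQ; ->complete() then runs from
 * softirq context, where it can wait behind an already-running NET_RX. */
static void example_remote_complete_deferred(struct request *rq)
{
	/*
	 * Simplified: the real code would first put rq on a per-CPU done
	 * list that the BLOCK softirq handler walks before completing it.
	 * Called from the IPI handler (hard-irq, interrupts off).
	 */
	raise_softirq_irqoff(BLOCK_SOFTIRQ);
}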