Re: [PATCH 1/1] iomap: propagate nowait to block layer

On 2/26/25 20:49, Dave Chinner wrote:
> On Wed, Feb 26, 2025 at 12:33:21PM +0000, Pavel Begunkov wrote:
>> On 2/26/25 04:52, Dave Chinner wrote:
>>> On Wed, Feb 26, 2025 at 01:33:58AM +0000, Pavel Begunkov wrote:
>>>> ...
>>> Put simply: any code that submits multiple bios (either individually
>>> or as a bio chain) for a single high level IO can not use REQ_NOWAIT
>>> reliably for async IO submission.

>> I know the issue, but admittedly forgot about it here, thanks for
>> reminding! Considering that attempts to change the situation failed
>> some years ago and I haven't heard about it since, I don't think
>> it's going to change any time soon.
>>
>> So how about following what the block layer does and disabling
>> multi-bio nowait submissions for async IO?
>>
>> if (!iocb_is_sync(iocb)) {
>> 	if (multi_bio)
>> 		return -EAGAIN;
>> 	bio_opf |= REQ_NOWAIT;
>> }

> How do we know it's going to be multi-bio before we actually start
> packing the data into the bios? More below, because I kinda pointed

The same way the block layer gauges it:

nr = bio_iov_vecs_to_alloc(iter, BIO_MAX_VECS + 1);
if (nr > BIO_MAX_VECS)
	// multiple bios

Let me try to prepare that, and we can discuss it there.
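
For illustration, here is a rough sketch of how such a gate might look,
modelled on blkdev_direct_IO(). The helper name is made up and this is
not the actual patch, just the shape of the check:

static bool iomap_dio_can_use_nowait(struct kiocb *iocb,
				     struct iov_iter *iter)
{
	/* sync submitters can simply block, no need for REQ_NOWAIT */
	if (is_sync_kiocb(iocb))
		return false;

	/*
	 * If the data doesn't fit into a single bio, don't attempt a
	 * nowait submission at all: with REQ_NOWAIT set, a later bio
	 * could fail with -EAGAIN after earlier bios were already
	 * submitted; without it, later bios could block.
	 */
	return bio_iov_vecs_to_alloc(iter, BIO_MAX_VECS + 1) <= BIO_MAX_VECS;
}

An async caller would then set REQ_NOWAIT only when this returns true,
and otherwise return -EAGAIN (or fall back to blocking submission).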

> out how this might be solved...

>> Is there anything else but io_uring and AIO that can issue async
>> IO through this path?

> We can't assume anything about the callers in the lower layers.
> Anything that can call the VFS read/write paths could be using async
> IO.
>
> We have similar limitations on IO polling (IOCB_HIPRI) in iomap, but
> I'm not sure if REQ_NOWAIT can be handled the same way. i.e. only
> setting REQ_NOWAIT on the first bio means that the second+ bio can
> still block and cause latency issues.

> Please have a look at how IOCB_HIPRI is handled by iomap for
> multi-bio IOs. I -think- the same can be done with IOMAP_NOWAIT
> bios, because the bio IO completion for the EAGAIN error will be
> present on the iomap_dio by the time submit_bio returns. i.e.
> REQ_NOWAIT can be set on the first bio in the submission chain,
> but only on the first bio.
>
> i.e. if REQ_NOWAIT causes the first bio submission to fail with
> -EAGAIN being reported to completion, we abort the submission of
> more bios because dio->error is now set. As there are no actual bios
> in flight at this point in time, the only reference to the iomap_dio
> is held by the iomap submission code.  Hence as we finalise the
> aborted DIO submission, __iomap_dio_rw() drops the last reference
> and iomap_dio_rw() calls iomap_dio_complete() on the iomap_dio. This
> then gathers the -EAGAIN error that was stashed in the iomap_dio
> and returns it to the caller.
>
> i.e. I *think* this "REQ_NOWAIT only for the first bio" method will
> solve most of the issues that cause submission latency (especially
> for apps doing small IOs), but still behave correctly when large,
> multi-bio DIOs are submitted.
>
> Confirming that the logic is sound and writing fstests that exercise
> the functionality to demonstrate your eventual kernel change works
> correctly (and that we don't break it in future) is your problem,
> though.
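
For reference, the idea would look roughly like this in the bio setup
path (illustrative sketch only; the helper and the first_bio
bookkeeping are made up, not existing iomap code):

static void iomap_dio_maybe_set_nowait(struct iomap_dio *dio,
				       struct bio *bio, bool first_bio)
{
	if (!(dio->iocb->ki_flags & IOCB_NOWAIT))
		return;
	/*
	 * Only the first bio of the DIO may fail fast. If it completes
	 * with -EAGAIN, dio->error is set by the time submit_bio()
	 * returns, nothing else is in flight, and submission of the
	 * rest of the DIO is aborted.
	 */
	if (first_bio)
		bio->bi_opf |= REQ_NOWAIT;
	/* subsequent bios are allowed to block */
}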

IIUC, it'll try to probe whether the block layer can accommodate one
bio. Let's say it can; however, if there are more bios in the request,
they might still sleep. And that's far from improbable, especially with
the first bio taking one tag. Unless I missed something, it doesn't
really look like a solution.

>>> So, yeah, fixing this source of latency is not as simple as just
>>> setting REQ_NOWAIT. I don't know if there is a better solution than
>>> what we currently have, but causing large AIO DIOs to
>>> randomly fail with EAGAIN reported at IO completion (with the likely
>>> result of unexpected data corruption) is far worse behaviour than
>>> occasionally having to deal with a long IO submission latency.

>> At the end of the day, it's waiting for IO, the very thing users
>> don't want to see for async IO, and that's pretty much what makes
>> AIO borderline unusable. We just can't have it for an asynchronous
>> interface.

> Tough cookies. Random load related IO errors that can result in
> unexpected user data corruption is a far worse outcome than an
> application suffering from a bit of unexpected latency. You are not
> going to win that argument, so don't bother wasting time on it.

I'm not arguing with that; the goal is to have neither. The third
dimension is efficiency, and that's likely where the compromise will
need to be. Executing all fs IO in a worker is too punitive for
performance, but doing that only for multi-bio IO and attempting async
submission when there is a single bio should be reasonable enough.

>> If we can't fix it up here, the only other option I see
>> is to push all such io_uring requests to a slow path where we can
>> block, and that'd be quite a large regression.

> Don't be so melodramatic. Async IO has always been, and will always

It's not melodrama, just pointing out that the alternative is ugly, and
I don't see any good way to work around it in io_uring, so I would
really love to find something that works.

> be, -best effort- within the constraints of filesystem
> implementation, data integrity and behavioural correctness.

The thing is, it blocks all requests in the submission queue as well as
the handling of inflight requests, which can get pretty ugly pretty
fast. The choice the user has to make is usually not whether the
latency is tolerable, but rather whether the async interface is
reliable or whether they should use worker threads instead.
Unfortunately, the alternative approach has already failed.

Yes, things can happen, c'est la vie, but when a problem is reported,
io_uring should get it sorted somehow.

--
Pavel Begunkov




