Re: [PATCH for-next 4/4] nvme-multipath: add multipathing for uring-passthrough commands

Amongst all the other issues we've found, the prime problem with SG_IO is that it needs to be directed to the 'active' path. For this, device-mapper has a distinct callout (dm_prepare_ioctl), which essentially returns the current active path device. The device-mapper core then issues the command on that active path.

All nice and good, _unless_ that command triggers an error.
Normally it'd be intercepted by the dm-multipath end_io handler, which would set the path offline. But as ioctls do not use the normal I/O path, the end_io handler is never called, and further SG_IO calls are happily routed down the failed path.

And the customer had to use SG_IO (or, in qemu-speak, LUN passthrough) as his application/filesystem makes heavy use of persistent reservations.
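For illustration, here is a minimal sketch of what such a consumer does: a PERSISTENT RESERVE IN (READ KEYS) issued via SG_IO against a hypothetical dm-mpath node /dev/dm-0. The point is that this travels the block device ioctl path (and dm_prepare_ioctl's active-path lookup), not the regular bio submission/end_io path, so a path failure seen here is invisible to the multipath end_io handling.

/* Sketch: PERSISTENT RESERVE IN (READ KEYS) via SG_IO on a dm-mpath node.
 * The device path and buffer sizes are illustrative only.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <scsi/sg.h>

int main(void)
{
	unsigned char cdb[10] = { 0x5e, 0x00 };    /* PERSISTENT RESERVE IN, READ KEYS */
	unsigned char buf[8192], sense[32];
	struct sg_io_hdr hdr;
	int fd = open("/dev/dm-0", O_RDWR);        /* hypothetical mpath device */

	if (fd < 0)
		return 1;

	cdb[7] = sizeof(buf) >> 8;                 /* ALLOCATION LENGTH (MSB) */
	cdb[8] = sizeof(buf) & 0xff;               /* ALLOCATION LENGTH (LSB) */

	memset(&hdr, 0, sizeof(hdr));
	hdr.interface_id = 'S';
	hdr.dxfer_direction = SG_DXFER_FROM_DEV;
	hdr.cmdp = cdb;
	hdr.cmd_len = sizeof(cdb);
	hdr.dxferp = buf;
	hdr.dxfer_len = sizeof(buf);
	hdr.sbp = sense;
	hdr.mx_sb_len = sizeof(sense);
	hdr.timeout = 10000;                       /* milliseconds */

	if (ioctl(fd, SG_IO, &hdr) < 0 || hdr.status != 0)
		fprintf(stderr, "PRIN failed, but which path failed?\n");

	return 0;
}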

How did this conclude, Hannes?

It didn't. The proposed interface got rejected, and now we need to come up with an alternative solution.
Which we haven't found yet.

Let's assume, for the sake of discussion, that dm-mpath had set a path
offline on ioctl errors; what would qemu do upon this error? Blindly
retry? Until when? Or would qemu need to learn about the path tables in
order to know whether there is at least one online path left before retrying?

IIRC that was one of the points why it got rejected.
Ideally we would return an errno indicating that the path had failed, but further paths are available, so a retry is in order. Once no paths are available, qemu would get a different error indicating that all paths have failed.

There is no such no-paths-available error.


But then we would be overloading existing error numbers with a new meaning, or even inventing our own error numbers. Which makes it rather awkward to use.

I agree that this sounds awkward.

Ideally we would be able to return this as the SG_IO status, as that is well capable of expressing these situations. But then we would need to parse and/or build the error ourselves, essentially moving sg_io functionality into dm-mpath. Also not what one wants.

uring actually should send back the cqe for passthru, but there is no
concept like "Path error, but no paths are available".


What is the model that a passthru consumer needs to follow when
operating against a mpath device?

The model really is that a passthru consumer needs to deal with these classes of errors:
- No error (obviously)
- I/O error (error status will not change with a retry)
- Temporary/path related error (error status might change with a retry)

Then the consumer can decide whether to invoke a retry (for the last class), or whether it should pass the error up, as there may be applications which need a quick response time and can handle temporary failures (or, in fact, want to be informed about them).

I.e. the 'DNR' bit should serve nicely here, keeping in mind that we might need to 'fake' an NVMe error status if the connection is severed.
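As a rough, illustrative sketch of that classification (assuming the consumer has the 15-bit NVMe status field from the passthru completion in hand; the masks follow the spec layout of DNR at bit 14 and SCT in bits 10:8, and the enum names below are made up for this example):

/* Sketch: classify an NVMe completion status for retry purposes.
 * 'status' is assumed to be the 15-bit status field (CQE DW3 bits 31:17,
 * shifted down), as a passthru consumer would see it.
 */
#include <stdint.h>

#define NVME_STATUS_DNR  0x4000  /* Do Not Retry */
#define NVME_SCT_MASK    0x0700  /* Status Code Type */
#define NVME_SCT_PATH    0x0300  /* Path Related Status (SCT 3h) */

enum retry_class {
	IO_OK,          /* no error */
	IO_FATAL,       /* retrying will not change the outcome */
	IO_RETRYABLE,   /* path/transient error, a retry may succeed */
};

static enum retry_class classify_status(uint16_t status)
{
	if (!status)
		return IO_OK;
	if (status & NVME_STATUS_DNR)
		return IO_FATAL;
	/* Path-related status codes are the prime retry candidates */
	if ((status & NVME_SCT_MASK) == NVME_SCT_PATH)
		return IO_RETRYABLE;
	/* Everything else: retryable or fatal per local policy */
	return IO_RETRYABLE;
}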

uring passthru sends the cqe status to userspace IIRC. But nothing in
there indicates anything about path availability. That would be something that
userspace would need to reconcile on its own by traversing sysfs or
the like...
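Right. A rough sketch of what that reconciliation might look like, assuming the usual sysfs layout where each controller exposes a 'state' attribute and 'live' is the interesting value; this only counts live controllers and says nothing about which subsystem or namespace they actually provide paths for:

/* Sketch: count 'live' NVMe controllers via sysfs.  Only an approximation
 * of "are there usable paths" -- the sysfs layout is assumed, not guaranteed.
 */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

static int count_live_controllers(void)
{
	DIR *dir = opendir("/sys/class/nvme");
	struct dirent *de;
	char path[512], state[32];
	int live = 0;

	if (!dir)
		return -1;

	while ((de = readdir(dir)) != NULL) {
		if (strncmp(de->d_name, "nvme", 4))
			continue;

		snprintf(path, sizeof(path), "/sys/class/nvme/%s/state", de->d_name);
		FILE *f = fopen(path, "r");
		if (!f)
			continue;
		if (fgets(state, sizeof(state), f) && !strncmp(state, "live", 4))
			live++;
		fclose(f);
	}
	closedir(dir);
	return live;
}

int main(void)
{
	printf("live controllers: %d\n", count_live_controllers());
	return 0;
}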


