On Fri, Jul 15, 2022 at 09:35:38AM +0800, Ming Lei wrote:
> On Fri, Jul 15, 2022 at 04:35:23AM +0530, Kanchan Joshi wrote:
> > On Thu, Jul 14, 2022 at 11:14:32PM +0800, Ming Lei wrote:
> > > On Wed, Jul 13, 2022 at 11:07:57AM +0530, Kanchan Joshi wrote:
> > > > > > > > > The way I would do this is that in nvme_ioucmd_failover_req
> > > > > > > > > (or in the retry driven from a retriable command failure) I
> > > > > > > > > would do the above: requeue it and kick the requeue work, to
> > > > > > > > > go over the requeue_list and just execute them again. Not
> > > > > > > > > sure why you even need explicit retry code.
> > > > > > > >
> > > > > > > > During retry we need the passthrough command. But the
> > > > > > > > passthrough command is not stable (i.e. valid only during the
> > > > > > > > first submission). We can make it stable either by:
> > > > > > > > (a) allocating in nvme, or (b) returning -EAGAIN to io_uring,
> > > > > > > > which will then allocate and defer.
> > > > > > > > Both add a cost. And since any command can potentially fail,
> > > > > > > > that means taking that cost for every IO that we issue on the
> > > > > > > > mpath node, even if no failure (initial or subsequent to the
> > > > > > > > IO) occurred.
> > > > > > >
> > > > > > > As mentioned, I think that if a driver consumes a command as
> > > > > > > queued, it needs a stable copy for a later reformation of the
> > > > > > > request for failover purposes.
> > > > > >
> > > > > > So what do you propose to make that stable?
> > > > > > As I mentioned earlier, a stable copy requires allocating/copying
> > > > > > in the fast path, and for a condition (failover) that may not
> > > > > > even occur. I really think the current solution is much better as
> > > > > > it does not try to make it stable. Rather, it assembles the
> > > > > > pieces of the passthrough command if a retry (which is rare)
> > > > > > happens.
> > > > >
> > > > > Well, I can understand that io_uring_cmd is space constrained,
> > > > > otherwise we wouldn't be having this discussion.
> > > >
> > > > Indeed. If we had space for keeping the passthrough command stable
> > > > for retry, that would really have simplified the plumbing. The retry
> > > > logic would be the same as the first submission.
> > > >
> > > > > However io_kiocb is less constrained, and could be used as a
> > > > > context to hold such a space.
> > > > >
> > > > > Even if it is undesired to have io_kiocb be passed to uring_cmd(),
> > > > > it can still hold a driver-specific space paired with a helper to
> > > > > obtain it (i.e. something like io_uring_cmd_to_driver_ctx(ioucmd)).
> > > > > Then, if the space is pre-allocated, it is only a small memory copy
> > > > > for a stable copy that would allow a saner failover design.
> > > >
> > > > I am thinking along the same lines, but it's not about a few bytes
> > > > of space; rather we need 80 (72 to be precise). Will think more, but
> > > > these 72 bytes really stand tall in front of my optimism.
> > > >
> > > > Do you see anything possible on the nvme side?
> > > > Right now the passthrough command (although in a modified form) also
> > > > gets copied into this preallocated space, i.e. nvme_req(req)->cmd.
> > > > This part -
> > >
> > > I understand it can't be allocated in the nvme request, which is
> > > freed during retry,
> >
> > Why not? Yes, it gets freed, but we have control over when it gets
> > freed, and we can do whatever needs to be done before freeing it.
> > Please see below as well.
>
> This way requires you to hold the old request until a new path is
> found, and it is fragile.
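
Just to make sure we mean the same thing, I read that proposal as
roughly the following - an untested sketch; the function signature and
the pdu->stable_cmd space are only illustrative:

	static void nvme_ioucmd_failover_req(struct request *req,
					     struct io_uring_cmd *ioucmd)
	{
		struct nvme_uring_cmd_pdu *pdu = nvme_uring_cmd_pdu(ioucmd);

		/* nvme_req(req)->cmd is the command as it was issued */
		memcpy(pdu->stable_cmd, nvme_req(req)->cmd,
		       sizeof(struct nvme_command));

		/*
		 * Only after this copy can the old request be completed
		 * and freed; the requeue work then re-submits from
		 * pdu->stable_cmd on another path.
		 */
	}

That still means the old request has to stay alive until the copy is
made.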
> What if there isn't any path available and the controller then tries
> to reset the path? If the requeue or the io_uring_cmd holds the old
> request, it might cause the error recovery to hang, or make the error
> handler code more complicated.
>
> > > and it looks like the extra space has to be bound with
> > > io_uring_cmd.
> >
> > If extra space is bound with io_uring_cmd, it helps to reduce the
> > code (and just that; I don't see that efficiency will improve -
> > rather it
>
> Does retry have to be efficient?
>
> > will be a tad less because of one more 72-byte copy operation in the
> > fast path).
>
> Allocating one buffer and binding it with io_uring_cmd in case of
> retry is actually similar to the current model: the retry is triggered
> by the FS bio, and the allocated buffer can play a similar role to the
> FS bio.

oops, I just realized that the SQE data is only valid during
submission, so the buffer has to be allocated during the 1st
submission. But the allocation for each io_uring_cmd shouldn't cost
much if it comes from a dedicated slab cache, especially since the
inline data size of io_uring_cmd is already 56 (24 + 32) bytes.

thanks,
Ming
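
P.S. A rough sketch of the slab cache idea, just to show the shape of
it - untested, and everything beyond the slab APIs and the driver's
nvme_uring_cmd_pdu() helper (the cache name, pdu->stable_cmd,
nvme_uring_cmd_save) is made up for illustration:

	/* one global cache for stable copies of passthrough commands */
	static struct kmem_cache *ioucmd_cmd_cache;

	static int __init nvme_ioucmd_cache_init(void)
	{
		/* sizeof(struct nvme_uring_cmd) is the 72 bytes above */
		ioucmd_cmd_cache = kmem_cache_create("ioucmd_stable_cmd",
				sizeof(struct nvme_uring_cmd), 0,
				SLAB_HWCACHE_ALIGN, NULL);
		return ioucmd_cmd_cache ? 0 : -ENOMEM;
	}

	/* at 1st submission, while the SQE payload is still valid */
	static int nvme_uring_cmd_save(struct io_uring_cmd *ioucmd)
	{
		const struct nvme_uring_cmd *cmd = ioucmd->cmd;
		struct nvme_uring_cmd_pdu *pdu = nvme_uring_cmd_pdu(ioucmd);

		pdu->stable_cmd = kmem_cache_alloc(ioucmd_cmd_cache,
						   GFP_KERNEL);
		if (!pdu->stable_cmd)
			return -ENOMEM;

		/* retry/failover reads this copy, not the SQE */
		memcpy(pdu->stable_cmd, cmd, sizeof(*cmd));
		return 0;
	}

The copy would be freed with kmem_cache_free() on final completion, so
the fast-path cost is one allocation from a hot cache plus the 72-byte
memcpy.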