From: Julian Wiedmann <jwi@xxxxxxxxxxxxx>

zfcp_qdio_send() and zfcp_qdio_int_req() run concurrently, adding and
completing SBALs on the Request Queue.

There's a theoretical race where zfcp_qdio_int_req() completes a number
of SBALs and increments the queue's free-level _before_ zfcp_qdio_send()
has had a chance to decrement it. This can cause ->req_q_free to
momentarily hold a value larger than QDIO_MAX_BUFFERS_PER_Q.

Luckily zfcp_qdio_send() is always called under ->req_q_lock, and all
readers of the free-level also take this lock. So we can trust that
zfcp_qdio_send() will clean up such a temporary overflow before anyone
can actually observe it.

But it's still confusing and annoying to worry about. So adjust the
code to avoid this race.

Signed-off-by: Julian Wiedmann <jwi@xxxxxxxxxxxxx>
Reviewed-by: Steffen Maier <maier@xxxxxxxxxxxxx>
Signed-off-by: Benjamin Block <bblock@xxxxxxxxxxxxx>
---
 drivers/s390/scsi/zfcp_qdio.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/s390/scsi/zfcp_qdio.c b/drivers/s390/scsi/zfcp_qdio.c
index d3d110a04884..e78d65bd46b1 100644
--- a/drivers/s390/scsi/zfcp_qdio.c
+++ b/drivers/s390/scsi/zfcp_qdio.c
@@ -260,17 +260,20 @@ int zfcp_qdio_send(struct zfcp_qdio *qdio, struct zfcp_qdio_req *q_req)
 	zfcp_qdio_account(qdio);
 	spin_unlock(&qdio->stat_lock);
 
+	atomic_sub(sbal_number, &qdio->req_q_free);
+
 	retval = do_QDIO(qdio->adapter->ccw_device, QDIO_FLAG_SYNC_OUTPUT, 0,
 			 q_req->sbal_first, sbal_number);
 
 	if (unlikely(retval)) {
+		/* Failed to submit the IO, roll back our modifications. */
+		atomic_add(sbal_number, &qdio->req_q_free);
 		zfcp_qdio_zero_sbals(qdio->req_q, q_req->sbal_first,
 				     sbal_number);
 		return retval;
 	}
 
 	/* account for transferred buffers */
-	atomic_sub(sbal_number, &qdio->req_q_free);
 	qdio->req_q_idx += sbal_number;
 	qdio->req_q_idx %= QDIO_MAX_BUFFERS_PER_Q;
 
-- 
2.26.2