On 1/24/24 3:24 AM, Christoph Hellwig wrote:
> We can't change the host-wide limit here (it wouldn't apply to all
> LUs anyway). If your limit is per-LU, you can call
> blk_queue_max_hw_sectors from ->slave_configure.

Unfortunately, it doesn't look like slave_configure gets called in the
scenario in question. In this case we already have a scsi_device
created, but it's in devloss state and the FC transport layer is
bringing it back online. There is also the point Mike brought up: if
the fast fail tmo has not yet fired, there could be I/O still in the
queue that is now too large.

To answer your earlier question, Mike: if the VIOS receives a request
that is too large, it closes the CRQ, forcing an entire reinit /
discovery, so it's definitely not something we want to encounter. I'm
trying to get this behavior improved so that only the one command
fails, but that's not what happens today.

I suppose I could iterate through all the LUNs and call
blk_queue_max_hw_sectors on them, but I'm not sure that solves the
problem. It would close the window Mike highlighted, but if there are
commands outstanding when this occurs that are larger than the new
max_hw_sectors and they get requeued, will they get split in the block
layer when they get resent to the LLD, or will they just get resent
as-is? If it's the latter, I'd get a request larger than what I can
support.

-Brian

-- 
Brian King
Power Linux I/O
IBM Linux Technology Center
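P.S. For concreteness, the LUN iteration I'm describing would be
something along these lines (untested sketch; the function name and
where new_max_sectors comes from are placeholders, but
shost_for_each_device() and blk_queue_max_hw_sectors() are the
existing SCSI/block helpers):

#include <scsi/scsi_host.h>
#include <scsi/scsi_device.h>
#include <linux/blkdev.h>

/*
 * Untested sketch: clamp max_hw_sectors on every LU on the host once
 * the transport comes back and the new limit is known.
 * shost_for_each_device() takes and drops its own sdev references, so
 * this should be safe to call from the adapter's work-queue context.
 */
static void ibmvfc_clamp_max_sectors(struct Scsi_Host *shost,
				     unsigned int new_max_sectors)
{
	struct scsi_device *sdev;

	shost_for_each_device(sdev, shost)
		blk_queue_max_hw_sectors(sdev->request_queue,
					 new_max_sectors);
}

Note this only closes the window going forward; it does nothing for
requests already queued or in flight, which is the requeue/split
question above.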