Tim,

> SAS currently supports QD256, but the general consensus is that most
> customers don't run anywhere near that deep. Does it help the system
> for the HD to report a limited (256) max queue depth, or is it really
> up to the system to decide how many commands to queue?

People often artificially lower the queue depth to avoid timeouts. The
default timeout is 30 seconds from when an I/O is queued. However, many
enterprise applications set the timeout to 3-5 seconds. This means that
with deep queues you'll quickly start seeing timeouts if a drive is
temporarily having trouble keeping up (media errors, excessive spare
track seeks, etc.).

Well-behaved devices will return QF/TSF (QUEUE FULL / TASK SET FULL) if
they hit transient resource starvation or exceed internal QoS limits.
QF will cause the SCSI stack to reduce the number of I/Os in flight.
This allows the drive to recover from its congested state and reduces
the potential for application and filesystem timeouts.

> Regarding number of SQ pairs, I think HDD would function well with
> only one. Some thoughts on why we would want >1:
>
> - A priority-based SQ servicing algorithm that would permit
>   low-priority commands to be queued in a dedicated SQ.
> - The host may want an SQ per actuator for multi-actuator devices.

That's fine. I think we're just saying that the common practice of
allocating very deep queues for each CPU core in the system will lead
to problems, since the host will inevitably be able to queue much more
I/O than the drive can realistically complete.

> Since NVMe doesn't guarantee command execution order, it seems the
> zoned block version of an NVMe HDD would need to support zone append.
> Do you agree?

Absolutely!

--
Martin K. Petersen	Oracle Linux Engineering
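
[Editor's note: a minimal user-space sketch of the QUEUE FULL reaction
described above -- halve the allowed queue depth on QF/TSF, ramp back up
slowly on clean completions. This is not the Linux SCSI mid-layer code;
the structure, constants, and function names are made up for
illustration.]

/*
 * Toy model (not kernel code) of how a host might react to QUEUE FULL /
 * TASK SET FULL: cut the number of commands kept in flight, then ramp
 * back up slowly once the device stops complaining.
 */
#include <stdio.h>

#define MAX_QUEUE_DEPTH 256
#define MIN_QUEUE_DEPTH 1

struct host_queue {
	int depth;	/* commands the host allows in flight */
	int good;	/* completions since the last QUEUE FULL */
};

/* Device returned QUEUE FULL: halve the allowed depth. */
static void on_queue_full(struct host_queue *q)
{
	q->depth /= 2;
	if (q->depth < MIN_QUEUE_DEPTH)
		q->depth = MIN_QUEUE_DEPTH;
	q->good = 0;
}

/* Clean completion: ramp back up slowly (one step per 64 completions). */
static void on_completion(struct host_queue *q)
{
	if (++q->good >= 64 && q->depth < MAX_QUEUE_DEPTH) {
		q->depth++;
		q->good = 0;
	}
}

int main(void)
{
	struct host_queue q = { .depth = MAX_QUEUE_DEPTH };
	int i;

	on_queue_full(&q);		/* drive hit a rough patch */
	on_queue_full(&q);
	printf("depth after two QUEUE FULLs: %d\n", q.depth);

	for (i = 0; i < 1000; i++)	/* drive recovered */
		on_completion(&q);
	printf("depth after recovery: %d\n", q.depth);
	return 0;
}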
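
[Editor's note: an illustrative sketch of the ">1 SQ" idea from the
quoted list -- routing commands to separate submission queues by
priority and by actuator. The queue layout and field names are
hypothetical, not taken from the NVMe specification.]

#include <stdio.h>

enum prio { PRIO_HIGH = 0, PRIO_LOW = 1 };

struct cmd {
	unsigned long long lba;
	enum prio prio;
	int actuator;	/* 0 or 1 on a dual-actuator drive */
};

/*
 * One possible layout: SQ 0/1 carry high/low priority commands for
 * actuator 0, SQ 2/3 the same split for actuator 1.
 */
static int pick_sq(const struct cmd *c)
{
	return c->actuator * 2 + c->prio;
}

int main(void)
{
	struct cmd background = { .lba = 1000, .prio = PRIO_LOW,  .actuator = 1 };
	struct cmd latency    = { .lba = 2000, .prio = PRIO_HIGH, .actuator = 0 };

	printf("background scrub -> SQ %d\n", pick_sq(&background));
	printf("latency-sensitive read -> SQ %d\n", pick_sq(&latency));
	return 0;
}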
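
[Editor's note: a toy model of why zone append matters when command
execution order is not guaranteed. A regular write to a
sequential-write-required zone must land exactly at the write pointer,
so reordered writes fail; a zone append lets the device pick the LBA
and report it back. Structures and names below are illustrative, not
the real zoned block layer or NVMe ZNS interfaces.]

#include <stdio.h>

struct zone {
	unsigned long long start;
	unsigned long long wp;	/* write pointer */
};

/* Regular write: fails unless it arrives exactly at the write pointer. */
static int zone_write(struct zone *z, unsigned long long lba, unsigned int len)
{
	if (lba != z->wp)
		return -1;	/* "unaligned write" style error */
	z->wp += len;
	return 0;
}

/* Zone append: device chooses the LBA and returns it to the host. */
static unsigned long long zone_append(struct zone *z, unsigned int len)
{
	unsigned long long lba = z->wp;

	z->wp += len;
	return lba;
}

int main(void)
{
	struct zone z = { .start = 0, .wp = 0 };

	/* Writes were issued for LBA 0 and LBA 8 but executed in reverse order. */
	if (zone_write(&z, 8, 8))
		printf("write to LBA 8 rejected (write pointer is at %llu)\n", z.wp);

	/* With zone append, submission order no longer matters. */
	printf("append A landed at LBA %llu\n", zone_append(&z, 8));
	printf("append B landed at LBA %llu\n", zone_append(&z, 8));
	return 0;
}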