[...]
> Device    r/s         w/s   rkB/s       wkB/s  rrqm/s  wrqm/s  %rrqm  %wrqm  r_await  w_await  aqu-sz  rareq-sz  wareq-sz  svctm  %util
> nvme0n1   1317510.00  0.00  5270044.00  0.00   0.00    0.00    0.00   0.00   0.31     0.00     411.95  4.00      0.00      0.00   100.40
[...]
> Device    r/s        w/s   rkB/s      wkB/s  rrqm/s  wrqm/s  %rrqm  %wrqm  r_await  w_await  aqu-sz  rareq-sz  wareq-sz  svctm  %util
> nvme0n1   114589.00  0.00  458356.00  0.00   0.00    0.00    0.00   0.00   0.29     0.00     33.54   4.00      0.00      0.01   100.00

The obvious difference is the factor of 10 in "aqu-sz", which corresponds to the factor of 10 in "r/s" and "rkB/s". I have noticed that MD RAID does some weird things to queueing; it is not a "normal" block device, and this often creates oddities (the same happens with DM/LVM2). Try creating a filesystem on top of 'md0' and 'md1' and testing that; things may be quite different.
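As a side note, the aqu-sz values in both samples are internally consistent with the other columns: by Little's law, the average queue size should be roughly the arrival rate times the average wait, i.e. r/s × r_await (with r_await converted from milliseconds to seconds). A quick sanity check with the numbers quoted above:

```python
# Little's law sanity check: aqu-sz ≈ r/s * r_await (r_await is in ms).
samples = [
    # (r/s, r_await in ms, aqu-sz as reported by iostat)
    (1_317_510.0, 0.31, 411.95),
    (114_589.0,   0.29, 33.54),
]

for rps, r_await_ms, reported in samples:
    estimated = rps * r_await_ms / 1000.0
    print(f"estimated aqu-sz = {estimated:.1f}, iostat reported {reported}")
    # The estimate lands within a few percent of the reported value.
    assert abs(estimated - reported) / reported < 0.05
```

So the tenfold drop in aqu-sz is not an artifact: the queue really is ten times shallower because the device is completing ten times fewer requests per second at roughly the same per-request latency.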