On Wed, Dec 12, 2018 at 10:45:52AM -0500, Josef Bacik wrote:
> The original test just did 4g of IO and figured out how long it took to
> determine if io.latency was working properly. However this can run
> really long on slow disks, so instead run for a constant time and check
> the bandwidth of the two cgroups to determine if io.latency is doing the
> right thing.
>
> Signed-off-by: Josef Bacik <josef@xxxxxxxxxxxxxx>

Thanks, this is better.

This still fails on my emulated NVMe device on QEMU with "Too much of a
performance drop for the protected workload". The total I/O drops from
about 2.4 GB to about 1.8 GB, i.e. the protected workload keeps only
about 75% of its bandwidth. It works reliably on virtio-blk and
virtio-scsi. Is the device hopeless, or should the
configuration/thresholds be tweaked?
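For reference, the check I'm hitting can be sketched roughly like this. The numbers, the fixed runtime, and the 90% threshold are illustrative assumptions taken from my results above, not the test's actual values:

```shell
# Hedged sketch of a bandwidth-based pass/fail check: run the protected
# cgroup for a constant time, alone and then alongside the noisy cgroup,
# and compare the achieved bandwidth. All values below are illustrative.
runtime=30                               # seconds of IO per run (assumed)
baseline_bytes=$((2400 * 1024 * 1024))   # protected cgroup running alone
protected_bytes=$((1800 * 1024 * 1024))  # protected cgroup with a noisy neighbor

baseline_bw=$((baseline_bytes / runtime))
protected_bw=$((protected_bytes / runtime))

# Fail if the protected workload kept less than 90% (assumed threshold)
# of its solo bandwidth.
if [ $((protected_bw * 100)) -lt $((baseline_bw * 90)) ]; then
    echo "Too much of a performance drop for the protected workload"
fi
```

With my measured numbers, 1.8 GB / 2.4 GB is 75%, which is below any threshold in the 90% range, hence the failure.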