> On 03/31/2016 09:29 PM, Jens Axboe wrote:
>> I can't seem to reproduce this at all. On an nvme device, I get a
>> fairly steady 60K/sec file creation rate, and we're nowhere near
>> being IO bound. So the throttling has no effect at all.
>
> That's too slow to show the stalls - you're likely concurrency bound
> in allocation by the default AG count (4) from mkfs. Use mkfs.xfs -d
> agcount=32 so that every thread works in its own AG.
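For reference, the suggested invocation would look something like this (the device and mount paths are placeholders - adjust to your setup):

```shell
# Create an XFS filesystem with 32 allocation groups instead of the
# mkfs default of 4, so up to 32 creator threads can each allocate
# in their own AG without serializing on AG locks.
# /dev/nvme0n1 is a placeholder device path.
mkfs.xfs -f -d agcount=32 /dev/nvme0n1

# After mounting, the AG count can be confirmed with:
#   xfs_info /mnt/test     (look for agcount=32 in the output)
```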
That's the key - with that I get 300-400K ops/sec instead. I'll run some
testing with this tomorrow and see what I can find. I did one full run
now and didn't see any issues, but I need to run it at various
settings to see if I can find the problem.
No stalls seen - I get the same performance with it disabled and with it
enabled, both at the default settings and at lower ones (wb_percent=20).
Looking at iostat, we don't drive a lot of queue depth, so that makes
sense: even with the throttling, we're doing essentially the same amount
of IO.
What does 'nr_requests' say for your virtio_blk device? Looks like
virtio_blk has a queue_depth setting, but it's not set by default, and
it then uses the free entries in the ring. But I don't know what that is...
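Something like the following should show it (a sketch; virtio_blk devices usually appear as /dev/vdX, but the exact names depend on the setup):

```shell
# Print the block-layer request queue depth for every block device.
# For a virtio_blk disk this is typically /sys/block/vdX/queue/nr_requests.
shopt -s nullglob
for f in /sys/block/*/queue/nr_requests; do
    printf '%s: %s\n' "$f" "$(cat "$f")"
done
```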
--
Jens Axboe
--
To unsubscribe from this list: send the line "unsubscribe linux-block" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html