On Wed, Jan 27 2016 at 12:56pm -0500,
Sagi Grimberg <sagig@xxxxxxxxxxxxxxxxxx> wrote:

> On 27/01/2016 19:48, Mike Snitzer wrote:
> >
> > BTW, I _cannot_ get null_blk to come even close to your reported 1500K+
> > IOPs on 2 "fast" systems I have access to.  Which arguments are you
> > loading the null_blk module with?
> >
> > I've been using:
> > modprobe null_blk gb=4 bs=4096 nr_devices=1 queue_mode=2 submit_queues=12
>
> $ for f in /sys/module/null_blk/parameters/*; do echo $f; cat $f; done
> /sys/module/null_blk/parameters/bs
> 512
> /sys/module/null_blk/parameters/completion_nsec
> 10000
> /sys/module/null_blk/parameters/gb
> 250
> /sys/module/null_blk/parameters/home_node
> -1
> /sys/module/null_blk/parameters/hw_queue_depth
> 64
> /sys/module/null_blk/parameters/irqmode
> 1
> /sys/module/null_blk/parameters/nr_devices
> 2
> /sys/module/null_blk/parameters/queue_mode
> 2
> /sys/module/null_blk/parameters/submit_queues
> 24
> /sys/module/null_blk/parameters/use_lightnvm
> N
> /sys/module/null_blk/parameters/use_per_node_hctx
> N
>
> $ fio --group_reporting --rw=randread --bs=4k --numjobs=24
>   --iodepth=32 --runtime=99999999 --time_based --loops=1
>   --ioengine=libaio --direct=1 --invalidate=1 --randrepeat=1
>   --norandommap --exitall --name task_nullb0 --filename=/dev/nullb0
> task_nullb0: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K,
>   ioengine=libaio, iodepth=32
> ...
> fio-2.1.10
> Starting 24 processes
> Jobs: 24 (f=24): [rrrrrrrrrrrrrrrrrrrrrrrr] [0.0% done]
>   [7234MB/0KB/0KB /s] [1852K/0/0 iops] [eta 1157d:09h:46m:22s]

Thanks, the number of fio threads was pretty important.

I'm still seeing better IOPs with queue_mode=0 (bio-based):

Jobs: 24 (f=24): [r(24)] [11.7% done] [11073MB/0KB/0KB /s]
  [2835K/0/0 iops] [eta 14m:42s]

(With queue_mode=2 I get ~1930K IOPs, which is the mode I need in order
to stack request-based DM multipath on top.)

Now I can focus on why dm-multipath is slow...

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel