> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Ilya Dryomov
> Sent: 04 June 2015 09:21
> To: Nick Fisk
> Cc: ceph-users
> Subject: Re: krbd and blk-mq max queue depth=128?
>
> On Wed, Jun 3, 2015 at 8:03 PM, Nick Fisk <nick@xxxxxxxxxx> wrote:
> >
> > Hi All,
> >
> > Am I correct in thinking that in the latest kernels, now that krbd is
> > supported via blk-mq, the maximum queue depth is now 128 and cannot
> > be adjusted?
> >
> > https://github.com/torvalds/linux/blob/master/drivers/block/rbd.c
> >
> > 3753: rbd_dev->tag_set.queue_depth = BLKDEV_MAX_RQ;
> >
> > blkdev.h
> >
> > 42: #define BLKDEV_MAX_RQ 128
> >
> > This potentially seems a bit low for some use cases if it can't be adjusted.
>
> Yeah, the default of 128 is the same as before, but it can't be adjusted now
> (or rather it can be, through /sys/block/rbd0/queue/nr_requests, but not
> upwards).  Given that we only have one queue and that the conversion to
> blk-mq was done mostly for maintenance reasons, we should probably look
> into allowing users to bump it, but it would be great if we could see some
> numbers first.

64k random writes via the fio rbd engine:

QD=128 = ~3000 IOPS
QD=256 = ~4500 IOPS (but my 40 disks are well past saturated at this point)

> Thanks,
>
>                 Ilya
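
For anyone following along, the queue_depth line quoted above sits in the
blk-mq setup in rbd_init_disk(). The snippet below is a rough sketch of that
setup as I recall it from kernels of that vintage (circa 4.0), not a verbatim
copy, so treat the exact flags and error-handling labels as approximate:

    /* Sketch of the blk-mq tag set setup in drivers/block/rbd.c,
     * rbd_init_disk(); the queue depth is hard-coded to BLKDEV_MAX_RQ (128)
     * and there is a single hardware queue.
     */
    rbd_dev->tag_set.ops = &rbd_mq_ops;
    rbd_dev->tag_set.queue_depth = BLKDEV_MAX_RQ;        /* 128, not tunable */
    rbd_dev->tag_set.numa_node = NUMA_NO_NODE;
    rbd_dev->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
    rbd_dev->tag_set.nr_hw_queues = 1;
    rbd_dev->tag_set.cmd_size = sizeof(struct work_struct);

    err = blk_mq_alloc_tag_set(&rbd_dev->tag_set);
    if (err)
        goto out_disk;

    q = blk_mq_init_queue(&rbd_dev->tag_set);
    if (IS_ERR(q)) {
        err = PTR_ERR(q);
        goto out_tag_set;
    }

As far as I can tell, this is also why writing to
/sys/block/rbd0/queue/nr_requests only works downwards: for blk-mq devices
the sysfs path is bounded by the tag set's queue_depth, so requests above 128
are rejected rather than grown.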