ceph osd pool create scbench 100 100
rados bench -p scbench 10 write --no-cleanup
rados bench -p scbench 10 seq
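For what it's worth, the two 100s on the pool create line are pg_num and pgp_num, and the write pass is run with --no-cleanup so its objects are still in the pool for the seq pass to read back. If you also want random reads and a tidy pool afterwards, something like this should work (the rand and cleanup steps are additions of mine, not part of the run above):

rados bench -p scbench 10 rand
rados -p scbench cleanup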
On Mon, Nov 13, 2017 at 1:28 AM, Rudi Ahlers <rudiahlers@xxxxxxxxx> wrote:
Would you mind telling me what rados command set you use, and share the output? I would like to compare it to our server as well.

On Fri, Nov 10, 2017 at 6:29 AM, Robert Stanford <rstanford8896@xxxxxxxxx> wrote:

In my cluster, rados bench shows about 1GB/s bandwidth. I've done some tuning:

[osd]
osd op threads = 8
osd disk threads = 4
osd recovery max active = 7

I was hoping to get much better bandwidth. My network can handle it, and my disks are pretty fast as well. Are there any major tunables I can play with to increase what will be reported by "rados bench"? Am I pretty much stuck around the bandwidth it reported?

Thank you
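One thing worth checking before tuning the OSDs further: rados bench defaults to 16 concurrent in-flight operations on 4 MB objects from a single client, so the reported bandwidth can be a client-side limit rather than the cluster's ceiling. A rough sketch of sweeping the client concurrency (the -t and -b values below are only example points, not recommendations):

# try a few -t values; if bandwidth scales up, the client was the bottleneck
rados bench -p scbench 10 write -t 32
rados bench -p scbench 10 write -t 64 -b 8388608   # 8 MB objects instead of the 4 MB default

Without --no-cleanup the write bench removes its own objects when it finishes, so nothing is left behind between runs.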
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com