Re: Performance, and how much wiggle room there is with tunables

# create a test pool with 100 placement groups (pg_num and pgp_num)
ceph osd pool create scbench 100 100
# 10-second write benchmark; keep the objects so the read test has data
rados bench -p scbench 10 write --no-cleanup
# 10-second sequential-read benchmark against the objects written above
rados bench -p scbench 10 seq
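Reported throughput from rados bench depends heavily on client-side concurrency and object size; the defaults are 16 concurrent operations and 4 MB objects. A hedged sketch of varying those (these commands assume a live cluster and the `scbench` pool created above, so run them against your own setup):

```shell
# write for 30 s with 32 concurrent 4 MB ops, keeping objects for read tests
rados bench -p scbench 30 write -t 32 -b 4194304 --no-cleanup

# random-read benchmark with the same concurrency
rados bench -p scbench 30 rand -t 32

# remove the benchmark objects when finished
rados -p scbench cleanup
```

Comparing runs at different `-t` values shows quickly whether the client, rather than the OSDs, is the bottleneck.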


On Mon, Nov 13, 2017 at 1:28 AM, Rudi Ahlers <rudiahlers@xxxxxxxxx> wrote:
Would you mind telling me which rados commands you use, and sharing the output? I would like to compare it to our server as well.

On Fri, Nov 10, 2017 at 6:29 AM, Robert Stanford <rstanford8896@xxxxxxxxx> wrote:

 In my cluster, rados bench shows about 1 GB/s bandwidth.  I've done some tuning:

[osd]
osd op threads = 8
osd disk threads = 4
osd recovery max active = 7


I was hoping for much better bandwidth.  My network can handle it, and my disks are fairly fast as well.  Are there any major tunables I can adjust to increase what "rados bench" reports, or am I pretty much stuck with the bandwidth it's showing?
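For what it's worth, with a fast network and disks, rados bench is often capped by client-side throttles before the OSD settings above ever matter. A hedged sketch of the client-side objecter limits (option names from ceph.conf; the values shown are illustrative, not recommendations for your cluster):

```
[client]
# max number of in-flight ops the client objecter allows (default 1024)
objecter inflight ops = 2048
# max bytes of in-flight data (default ~100 MB)
objecter inflight op bytes = 209715200
```

Raising the benchmark's own concurrency (`rados bench ... -t`, default 16) is usually the cheapest knob to try before touching either side's configuration.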

 Thank you

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




--
Kind Regards
Rudi Ahlers
Website: http://www.rudiahlers.co.za

