Performance, and how much wiggle room there is with tunables

In my cluster, rados bench reports about 1 GB/s of bandwidth. I've already done some tuning:

[osd]
osd op threads = 8
osd disk threads = 4
osd recovery max active = 7

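For reference, that number comes from an invocation along these lines (the pool name, run length, and thread count below are placeholders, not necessarily the exact values I used):

# 60-second write test against a test pool, 16 concurrent ops, keep the objects
rados bench -p testpool 60 write -t 16 --no-cleanup

# read the same objects back, then remove the benchmark objects
rados bench -p testpool 60 seq -t 16
rados -p testpool cleanup

I mention this because the -t (concurrent operations, default 16) and -b (object size, default 4 MB) settings have a big effect on the reported figure, in case the answer depends on them.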

I was hoping for much better bandwidth. My network can handle it, and my disks are reasonably fast as well. Are there any major tunables I can adjust to increase what "rados bench" reports, or am I pretty much stuck around the bandwidth it showed?
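(To clarify what I mean by the network and disks keeping up: the sort of sanity checks I have in mind are along these lines; the host name and OSD id below are placeholders.)

# raw TCP throughput between two cluster nodes, four parallel streams
iperf -c <other-node> -P 4

# per-OSD backend write throughput, as measured by the OSD itself
ceph tell osd.0 bench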

 Thank you
