ceph osd crush tunables optimal AND add new OSD at the same time

Hi Sage & List

I understand this is probably a hard question to answer.

I mentioned previously that our cluster has the MONs co-located on the OSD servers, which are R515s with 1 x AMD 6-core processor, 11 x 3TB OSDs, and dual 10GbE.

When the cluster is doing these busy operations and IO has stopped, as in my case after setting the tunables to optimal, or during heavy recovery
operations, is there a way to ensure the IO in our VMs doesn't get completely blocked/stopped/frozen?
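
I have seen suggestions to throttle recovery/backfill so that client IO keeps some priority, along these lines (values below are illustrative, not something I've verified on our hardware, and defaults vary by release):

    # throttle concurrent backfill and recovery per OSD
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
    # deprioritise recovery ops relative to client ops
    ceph tell osd.* injectargs '--osd-recovery-op-priority 1'

But I'm not sure whether throttling alone prevents the complete stall we're seeing, or just slows the recovery down.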

Could it be as simple as putting all 3 of our mon servers on bare metal with SSDs? (I recall reading somewhere that a mon disk was doing several thousand IOPS during a recovery operation.)
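
If it helps anyone reproduce, this is roughly how I'd check whether the mon store disk is the bottleneck during recovery (device name and mon path below are examples from our setup):

    # watch per-device IOPS/utilisation while recovery is running
    iostat -x sda 1
    # size of the mon's leveldb store
    du -sh /var/lib/ceph/mon/ceph-*/store.db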

I assume putting just one on bare metal won't help, because our mons will only ever be as fast as our slowest mon server?

Thanks,
Quenten

