Optimal OSD Configuration for 45 drives?

Hello Christian.

Our current setup has 4 OSDs per node. When a drive fails, the cluster is
almost unusable for data entry. I want to change our setup so that this
never happens under any circumstances. We used DRBD for 8 years, and our
main concern is high availability. A CLI that responds at 1200 bps modem
speed does not count as available.
Network: we use two IB switches with bonding in failover mode.
The systems are two: a Dell PowerEdge R720 and a Supermicro X8DT3.
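For reference, my understanding is that recovery/backfill can be throttled
so a failed drive hurts client I/O less; a rough ceph.conf sketch of what I
mean (values are guesses, not tested on our cluster):

    [osd]
        # keep recovery/backfill slow so client I/O stays responsive
        # when an OSD drops out (conservative starting values)
        osd max backfills = 1
        osd recovery max active = 1
        osd recovery op priority = 1
        osd client op priority = 63

The same settings can apparently be injected at runtime, e.g.:

    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'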

So, looking at how to do things better, we will try #4, the
"anti-cephalopod" approach.

We'll switch to RAID-10 or RAID-6 with one OSD per node, using high-end
RAID controllers, hot spares, etc.

We'll also use one 200 GB Intel S3700 per node for the journal.
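Roughly what I am picturing per node, as a sketch only (host and device
names below are placeholders, not our real ones): the RAID volume becomes
the single OSD data disk and a partition on the S3700 becomes its journal,
e.g. via ceph-deploy:

    # single OSD on the RAID volume (sdb here), journal on a
    # partition of the S3700 (/dev/sdc1 here) -- placeholder names
    ceph-deploy osd create node1:sdb:/dev/sdc1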

My questions:

Is there a minimum number of OSDs that should be used?

Should the number of OSDs per node be the same across nodes?

best regards, Rob