anti-cephalopod question

I have a question about the advice given in these threads:
https://mail.google.com/mail/u/0/#label/ceph/1476b93097673ad7?compose=1476ec7fef10fd01

https://www.mail-archive.com/ceph-users@lists.ceph.com/msg11011.html



Our current setup has 4 OSDs per node. When a drive fails, the cluster is
almost unusable for data entry. I want to change our setup so that this never
happens under any circumstances.

Network: we use two IB switches with bonding in failover mode.
Systems are two Dell PowerEdge R720s and a Supermicro X8DT3.

So, looking at how to do things better, we will try '#4 - anti-cephalopod'.
That is a seriously funny phrase!

We'll switch to RAID-10 or RAID-6 with one OSD per node, using high-end RAID
controllers, hot spares, etc., and one Intel 200 GB S3700 per node for the
journal. A rough sketch of the intended ceph.conf follows below.
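This is only a sketch of what I have in mind - the hostnames, device paths,
and journal size below are placeholders, not our real values:

    [global]
    # keep three copies so losing one whole node (= one OSD) is survivable
    osd pool default size = 3
    osd pool default min size = 2
    # place replicas on different hosts rather than just different OSDs
    osd crush chooseleaf type = 1

    [osd]
    # journal size in MB (a 10 GB partition on the S3700)
    osd journal size = 10240

    [osd.0]
    host = node1
    # RAID volume mounted for data, S3700 partition for the journal
    osd data = /var/lib/ceph/osd/ceph-0
    osd journal = /dev/sdb1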

My questions:

Is there a minimum number of OSDs that should be used?

Should the number of OSDs per node be the same on every node?

best regards, Rob


PS: I had asked the above in the middle of another thread... please ignore it there.

