Re: Ceph and RAID

One good thing about a RAID controller is the hot-swap capability that you don't have with a Ceph node that just has a plain disk HBA.

Don't you have to take down a Ceph node to replace a defective drive? If I have a Ceph node with 12 disks and one goes bad, wouldn't I have to take the entire node down to replace it and then reformat?
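
From what I've read, a single failed disk only means removing and re-creating
that one OSD; the other OSDs on the node keep running. The sequence I've seen
documented looks roughly like this (just a sketch: osd.7, the hostname and the
device are placeholders, and the last step assumes ceph-deploy was used):

    # stop the failed OSD daemon (on Ubuntu/Upstart: stop ceph-osd id=7)
    service ceph stop osd.7

    # remove it from the cluster so Ceph re-replicates its data elsewhere
    ceph osd out 7
    ceph osd crush remove osd.7
    ceph auth del osd.7
    ceph osd rm 7

    # once the replacement disk is in, wipe it and create a fresh OSD on it
    ceph-deploy disk zap mynode:/dev/sdX
    ceph-deploy osd create mynode:/dev/sdX

Is that right, or does the whole node really have to come down?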

If I have a hot-swap chassis but am using just an HBA to connect my drives, will the OS (say the latest Ubuntu) support hot-swapping the drive, or do I have to shut the node down to replace the drive, then bring it up, format it, and so on?
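
From what I can tell the kernel itself can handle the swap through sysfs on
most SAS/SATA HBAs, something along these lines (a sketch only; sdc and host0
are placeholders for the failed drive and its controller):

    # tell the kernel to release the failing drive before pulling it
    echo 1 > /sys/block/sdc/device/delete

    # after inserting the replacement, rescan the controller so it shows up
    echo "- - -" > /sys/class/scsi_host/host0/scan

But I don't know how well that works with a plain HBA compared to a RAID
controller with proper hot-swap bays.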

I'm not a Linux guy, so if I'm mistaken let me know.

Thanks!


On 10/3/2013 12:13 PM, Scott Devoid wrote:
An additional side to the RAID question: when you have a box with more drives than you can front with OSDs due to memory or CPU constraints, is some form of RAID advisable? At the moment "one OSD per drive" is the recommendation, but from my perspective this does not scale at high drive densities (e.g. 10+ drives per U).



On Thu, Oct 3, 2013 at 11:08 AM, John-Paul Robinson <jpr@xxxxxxx> wrote:
What is the take on such a configuration?

Is it worth the effort of tracking "rebalancing" at two layers, the RAID
mirror and possibly Ceph if the pool has a redundancy policy?  Or is it
better to just let Ceph rebalance itself when you lose a non-mirrored disk?
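
(For a planned swap there is at least the noout flag, which as I understand it
keeps a down OSD from being marked out and kicking off a rebalance:

    ceph osd set noout
    # ... swap the disk or reboot the node ...
    ceph osd unset noout

though that only covers short, planned outages, not a disk that has actually
died.)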

If following the "RAID mirror" approach, would you then skip redundancy
at the Ceph layer to keep your total overhead the same?  That seems
risky in the event you lose the storage server with the RAID-1'd drives:
no Ceph-level redundancy would then be fatal.  But if you do RAID-1 plus
Ceph redundancy, doesn't that mean it takes 4 TB of raw disk for each
real 1 TB?
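
Spelling out that arithmetic (assuming a replicated pool of size 2 on top of
the RAID-1 mirrors):

    2 (RAID-1 mirror) x 2 (Ceph replicas) = 4 TB raw per 1 TB usable
    2 (RAID-1 mirror) x 3 (Ceph replicas) = 6 TB raw per 1 TB usable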

~jpr

On 10/02/2013 10:03 AM, Dimitri Maziuk wrote:
> I would consider (mdadm) raid-1, dep. on the hardware & budget,
> because this way a single disk failure will not trigger a cluster-wide
> rebalance.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
