Re: Disk failures

2016-06-09 9:16 GMT+02:00 Christian Balzer <chibi@xxxxxxx>:
> Neither, a journal failure is lethal for the OSD involved and unless you
> have LOTS of money RAID1 SSDs are a waste.

OK, so if a journal failure is lethal for the OSD, does Ceph automatically
remove the affected OSD and start rebalancing?
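(From what I've read so far, Ceph will mark the dead OSD "down" on its own
and then "out" after mon_osd_down_out_interval (600 seconds by default),
which is what triggers the rebalance; the stale OSD entry still has to be
removed by hand. A sketch of the cleanup I'd expect, assuming the failed
OSD is osd.7 (a made-up ID):

    # mark it out immediately instead of waiting for the timeout
    ceph osd out osd.7
    # remove it from the CRUSH map, delete its auth key, drop the entry
    ceph osd crush remove osd.7
    ceph auth del osd.7
    ceph osd rm osd.7

Please correct me if that sequence is wrong.)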

> Additionally your cluster should (NEEDS to) be designed to handle the
> loss of a journal SSD and its associated OSDs, since that is less than a
> whole node, or a whole rack (whatever your failure domain may be).

What do you suggest for this? In the (small) cluster I'm trying to plan,
I would like to be protected against the failure of every component, up
to a whole rack. I have two racks for storage, so data should be spread
across both, while still treating a single OSD/journal failure as the
smallest failure domain.
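For the rack part, the CRUSH rule I have in mind is roughly the following
(just a sketch; the rule name and ruleset number are placeholders):

    rule replicated_across_racks {
            ruleset 1
            type replicated
            min_size 2
            max_size 4
            step take default
            # place each replica under a different rack
            step chooseleaf firstn 0 type rack
            step emit
    }

With size = 2 this should put one replica in each rack; with only two
racks, anything above size = 2 would need a different rule (e.g. choosing
racks first and then multiple hosts per rack), if I understand CRUSH
correctly.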

Yes, reading the docs answers many questions (and I am reading them), but
having a mailing list where experienced people reply is much better.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


