Re: Case where a separate Bluestore WAL/DB device crashes...

This aspect of OSDs has not changed from FileStore with SSD journals to BlueStore with DB and WAL on SSDs. If the SSD fails, all OSDs using it are lost and need to be removed from the cluster and recreated on a new drive.

You can never guarantee data integrity on BlueStore or FileStore if any of the media backing an OSD fails completely.
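As a rough sketch of that remove-and-recreate procedure (the OSD ID and device paths below are examples, not taken from the original thread; adapt them to your cluster):

```shell
# For each OSD that was using the failed SSD (osd.4 is a hypothetical ID):
ceph osd out osd.4
systemctl stop ceph-osd@4                  # run on the affected OSD node
ceph osd purge 4 --yes-i-really-mean-it    # removes it from the CRUSH map, osdmap, and auth

# After physically replacing the SSD, recreate the OSD with fresh
# WAL/DB partitions (device paths are placeholders):
ceph-volume lvm create --data /dev/sdb \
    --block.db /dev/nvme0n1p1 \
    --block.wal /dev/nvme1n1p1
```

Once the new OSDs come up, Ceph backfills them from the replicas on the other nodes; no manual data copying is needed.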

On Thu, Mar 1, 2018, 10:24 AM Hervé Ballans <herve.ballans@xxxxxxxxxxxxx> wrote:
Hello,

With Bluestore, I have a couple of questions regarding the case of
separate partitions for block.wal and block.db.

Let's take the case of an OSD node that contains several OSDs (HDDs) and
also contains one SSD drive for storing the WAL partitions and another
one for storing the DB partitions. In this configuration, from my
understanding (but I may be wrong), each SSD drive appears as a SPOF for
the entire node.

For example, what happens if one of the 2 SSD drives crashes? (I know
it's very rare, but...)

In this case, is the BlueStore data on all the OSDs of the same node
also lost?

I guess so, but in that case, what is the recovery scenario? Will it be
necessary to entirely recreate the node (OSDs + block.wal + block.db)
and rebuild all the replicas from the other nodes onto it?

Thanks in advance,
Hervé

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
