Re: Help with Bluestore WAL


 



Hi,
    We were recently testing Luminous with BlueStore. We have a 6-node cluster with 12 HDDs and 1 SSD per node. We used ceph-volume with LVM to create all the OSDs and attached an SSD-backed WAL (LVM) to each. We carved the single SSD into twelve 10 GB LVs, one per OSD WAL, so all of a node's OSD WALs live on that one SSD. The problem is that if we pull the SSD out, it brings down all 12 OSDs on that node. Is that expected behavior, or are we missing some configuration?
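For reference, a layout like the one described would typically be created with commands along these lines (the device paths and VG/LV names below are made up for illustration, not taken from the post):

    # one HDD data device plus a 10 GB WAL LV on the shared SSD, per OSD
    # (repeated for each of the 12 HDDs; vg/lv names here are hypothetical)
    ceph-volume lvm create --bluestore --data /dev/sdb --block.wal ceph-wal-vg/wal-osd0
    ceph-volume lvm create --bluestore --data /dev/sdc --block.wal ceph-wal-vg/wal-osd1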


Yes, that is expected: the WAL is a required component of each BlueStore OSD, so losing the shared SSD takes down every OSD whose WAL lives on it. You should plan your failure domain accordingly, i.e. think through what happens to your cluster if one backing SSD suddenly dies.

You should also plan for mass failures of your SSD/NVMe devices, so as a rule of thumb, don't overload a single flash device with too many OSDs. The usual recommendation is roughly 4 OSDs per SSD/NVMe.
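To see which OSDs would be taken out by the loss of a given SSD, you can check the WAL device each OSD reports (the device path below is just an example):

    # lists each OSD's block and wal devices, including the underlying LVs/PVs
    ceph-volume lvm list
    # or a quick view of what sits on the SSD itself (example device path)
    lsblk /dev/sdm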



k

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
