Re: SSD considerations for block.db and WAL

Christian;

What is your failure domain?  If your failure domain is set to OSD / drive, and 2 OSDs share a DB / WAL device, then when that DB / WAL device dies both OSDs go down with it, and portions of the data could drop to read-only (or be lost...).
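As a rough sketch (the rule and pool names below are just examples, not from your cluster), you can check which bucket type your CRUSH rules use as the failure domain, and point a pool at a host-level rule so that replicas never land on OSDs that share one DB / WAL device:

    # Show existing CRUSH rules; the chooseleaf "type" is the failure domain
    ceph osd crush rule dump

    # Create a replicated rule that spreads copies across hosts instead of single OSDs / drives
    ceph osd crush rule create-replicated replicated_host default host

    # Point a pool at the new rule (pool name is an example)
    ceph osd pool set mypool crush_rule replicated_host

With a host-level failure domain, losing one NVMe only takes out the OSDs on that host, and the other replicas stay available.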

Ceph is really set up to own the storage hardware directly.  It doesn't (usually) make sense to put any kind of RAID / JBOD between Ceph and the hardware.
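For illustration only (device names, VG / LV names, and the 300G DB size are placeholders, not a recommendation for your hardware), setting the OSDs up so that each HDD gets its own block.db logical volume on a bare NVMe, with no RAID underneath, would look roughly like this:

    # Carve one NVMe into one DB LV per HDD-backed OSD (names and sizes are examples)
    vgcreate ceph-db-0 /dev/nvme0n1
    lvcreate -L 300G -n db-0 ceph-db-0
    lvcreate -L 300G -n db-1 ceph-db-0

    # One OSD per HDD; the WAL lives on the block.db device when no separate --block.wal is given
    ceph-volume lvm create --data /dev/sda --block.db ceph-db-0/db-0
    ceph-volume lvm create --data /dev/sdb --block.db ceph-db-0/db-1

Note that if that NVMe dies, both OSDs on the host go with it, which is exactly why the failure domain question above matters.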

Thank you,

Dominic L. Hilsbos, MBA 
Director - Information Technology 
Perform Air International Inc.
DHilsbos@xxxxxxxxxxxxxx 
www.PerformAir.com



-----Original Message-----
From: Christian Wahl [mailto:wahl@xxxxxxxx] 
Sent: Thursday, February 27, 2020 12:09 PM
To: ceph-users@xxxxxxx
Subject:  SSD considerations for block.db and WAL


Hi everyone,

We currently have 6 OSDs with 8 TB HDDs, split across 3 hosts.
The main usage is KVM images.

To improve speed we planned on putting the block.db and WAL onto NVMe SSDs.
The plan was to put 2x 1 TB NVMe SSDs in each host.

One option I thought of was to RAID 1 them for better redundancy; I don't know how high the risk of corrupting the block.db is if one SSD fails.
Or should I just use one for WAL+block.db and use the other one as fast storage?

Thank you all very much!

Christian
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx