Re: Anyone using LVM or ZFS RAID1 for boot drives?

Hello,

On Sun, 12 Feb 2017 22:22:30 -0500 Alex Gorbachev wrote:

> Hello, with the preference for IT mode HBAs for OSDs and journals,
> what redundancy method do you guys use for the boot drives?  Some
> options beyond RAID1 at hardware level we can think of:
>
Not really that Ceph specific, but...

Firstly, I wouldn't make journals redundant; the cost overhead is just
too significant.
That would be a slightly different story if future Bluestore developments
include read caches, which could benefit performance-wise from being on a
RAID1.
 
> - LVM
> 
Not really, but MD RAID1 works like a charm and is WELL tested. It also
supports TRIM if your OS drives are SSDs.
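
A minimal sketch of what that can look like (the /dev/sdX names, md0, and
the Debian-style initramfs step are just examples, adjust for your layout
and distro; a real bootable setup also needs the bootloader taken care of):

  # Mirror the two OS partitions and put a filesystem on top:
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
  mkfs.ext4 /dev/md0

  # Make sure the array assembles at boot:
  mdadm --detail --scan >> /etc/mdadm/mdadm.conf
  update-initramfs -u

  # On SSDs, reasonably recent kernels pass TRIM through the RAID1 layer,
  # so a periodic fstrim on the mounted filesystem does the job:
  fstrim -v /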

> - ZFS RAID1 mode
> 
If you're comfortable with that, have kernels with ZFS support at hand, etc.
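
If you want a feel for it, a mirrored pool is a one-liner (sketch only;
the pool and device names are placeholders, and a real root-on-ZFS install
additionally needs a ZFS-aware bootloader/initramfs):

  zpool create -o ashift=12 rpool mirror /dev/sda /dev/sdb
  zpool status rpool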

> - SATADOM with dual drives
>
See MD, but you want to be ABSOLUTELY sure those can handle the load in
terms of speed and endurance. Worn-out SATADOMs tend to mean shutting down
the whole node...
Some of the Supermicro SATADOMs should fit the bill, at least on paper.
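
Whatever you pick, keep an eye on the wear via SMART (a sketch; /dev/sda
is just an example and the attribute name differs per vendor, e.g.
Media_Wearout_Indicator or Wear_Leveling_Count):

  smartctl -A /dev/sda | grep -Ei 'wear|lifetime|percent'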

> - Single SSD like the journal drives, since they'd fail about the time
> when journals fail
> 
I prefer my OS (and the MON data) to be HA, but if you have working
deployment automation (Chef, Puppet, etc.; I dislike them all for different
reasons), a failed OS and thus a failed node may be only a minor
inconvenience.

Christian

> Any other solutions?
> 
> Thank you for sharing.
> 
> --
> Alex Gorbachev
> Storcium
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


