Re: Anyone using LVM or ZFS RAID1 for boot drives?

On 13-2-2017 04:22, Alex Gorbachev wrote:
> Hello, with the preference for IT mode HBAs for OSDs and journals,
> what redundancy method do you guys use for the boot drives.  Some
> options beyond RAID1 at hardware level we can think of:
> 
> - LVM
> 
> - ZFS RAID1 mode

Since the question is not quite Ceph-specific, I'll take the liberty of
answering with something that is not quite Linux. :)

On FreeBSD I always use RAID1 (mirrored) boot disks; it is natively
supported by both the kernel and the installer. It also fits really
nicely with the upgrade tools, allowing a roll-back if an upgrade did
not work, or booting from one of the previously snapshotted boot
environments. This is what it looks like on one of my boxes (zpool list
and zfs list output); a rough sketch of the commands follows the listing.

NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH
zfsroot      228G  2.57G   225G         -     0%     1%  1.00x  ONLINE
  mirror     228G  2.57G   225G         -     0%     1%
    ada0p3      -      -      -         -      -      -
    ada1p3      -      -      -         -      -      -

NAME                   USED  AVAIL  REFER  MOUNTPOINT
zfsroot               2.57G   218G    19K  /zfsroot
zfsroot/ROOT          1.97G   218G    19K  none
zfsroot/ROOT/default  1.97G   218G  1.97G  /
zfsroot/tmp           22.5K   218G  22.5K  /tmp
zfsroot/usr            613M   218G    19K  /usr
zfsroot/usr/compat      19K   218G    19K  /usr/compat
zfsroot/usr/home        34K   218G    34K  /usr/home
zfsroot/usr/local      613M   218G   613M  /usr/local
zfsroot/usr/ports       19K   218G    19K  /usr/ports
zfsroot/usr/src         19K   218G    19K  /usr/src
zfsroot/var            230K   218G    19K  /var
zfsroot/var/audit       19K   218G    19K  /var/audit
zfsroot/var/crash       19K   218G    19K  /var/crash
zfsroot/var/log        135K   218G   135K  /var/log
zfsroot/var/mail        19K   218G    19K  /var/mail
zfsroot/var/tmp         19K   218G    19K  /var/tmp
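
Roughly, the setup and the roll-back work like this (device names and
the boot environment name are just examples taken from the listing
above; the installer creates the pool for you, and the boot environment
tool used here is the sysutils/beadm port):

# create the mirrored root pool on two GPT partitions (the installer does this)
zpool create -o altroot=/mnt zfsroot mirror ada0p3 ada1p3

# before an upgrade: snapshot the current root as a boot environment
beadm create pre-upgrade

# if the upgrade went wrong: activate the old environment and reboot into it
beadm activate pre-upgrade
reboot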

Live maintenance is also a piece of cake with this.
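
For example, when one of the two disks dies you can swap it and
resilver without taking the box down. This is just the outline I
follow, assuming the partition layout and device names from the
listing above (freebsd-boot on partition index 1, ZFS on p3):

# copy the partition table from the surviving disk to the new one
gpart backup ada0 | gpart restore -F ada1

# put the boot code back on the new disk (GPT + gptzfsboot layout)
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1

# let ZFS resilver the mirror onto the new partition
zpool replace zfsroot ada1p3
zpool status zfsroot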

If a server has SSDs in it, I also add a bit of cache. But as you can
see, the root filesystem, including /usr and the like, is only about
2.5 GB, and the most-used part will sit in the ZFS ARC anyway,
certainly if you did not skimp on RAM.
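
Adding that cache is a one-liner; with a spare SSD partition (ada2p1
here is just a placeholder):

# attach an SSD partition as L2ARC read cache for the root pool
zpool add zfsroot cache ada2p1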

--WjW






