Re: C7: How to configure raid at install time




On Wed, 25 Nov 2015, Gordon Messmer wrote:

>I really recommend using the fewest partitions possible, replacing
>a disk will require you to handle each partition individually.

This is not a large burden, but I do agree it's best to keep things simple.

>It's probably best to do a small /boot on RAID1 and use the rest of the 
>disk for a second RAID1 volume, with LVM on that.

The default configuration is a partition for /boot plus one partition
per drive for LVM PVs, all of which join a single VG. In that VG the
installer creates LVs for / (which has a fixed upper size), for swap
(whose size is computed), and, if there's space available, for /home
(which takes all remaining space).
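I believe this default is also what you would get from kickstart's
automatic partitioning, i.e. something like:

    # the installer's automatic layout, roughly
    autopart --type=lvm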

And there's the problem: the installer uses all available space,
leaving none for a RAID of /boot.

I believe this is due to an assumption Red Hat makes: their customers
are mostly enterprise shops, so the drives the installer sees would be
logical ones backed by hardware RAID.

I could wish the installer would RAID /boot by default; it is small
enough that it is unlikely to be a burden for those with RAID-backed
"drives" and a boon to those without.

What's needed is enough space for the installer to carve out additional
partitions on the additional drives, which means manual partitioning.
In particular you must delete or shrink /home -- remember, this is the
initial configuration so there's nothing to lose, and you can recreate
or regrow it a little later anyway.  If you have very small disks you
probably need to shrink / and/or swap as well.

Reduce the space to be used by enough for a second /boot and you can
change /boot's device type to RAID.

Reduce it to no more than what the first drive provides and you can
also set up RAID under the PV -- modify the volume group's redundancy.

After which you can make mount points for anything else, e.g., /var/www, 
and grow what you shrank.
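
If you would rather script all of that, here is a rough kickstart
sketch of such a layout for a two-drive box (disk names, device names,
and the VG/LV names and sizes are only examples):

    # small /boot on MD RAID1 across both drives
    part raid.01 --size=1024 --ondisk=sda
    part raid.02 --size=1024 --ondisk=sdb
    raid /boot --level=1 --device=boot raid.01 raid.02
    # the rest of each drive becomes a second RAID1 holding the PV
    part raid.11 --size=1 --grow --ondisk=sda
    part raid.12 --size=1 --grow --ondisk=sdb
    raid pv.01 --level=1 --device=pv00 raid.11 raid.12
    volgroup vg0 pv.01
    logvol /     --vgname=vg0 --name=root --size=20480
    logvol swap  --vgname=vg0 --name=swap --size=4096
    logvol /home --vgname=vg0 --name=home --size=1 --grow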

If you like pain you can back into such a layout after installation is 
complete, but I do not recommend it.

When more than 2 drives are possible I prefer MD RAID for /boot and LVM
RAID for everything else -- the part I don't like is that even though
LVM RAID leverages the MD RAID code, its status is not presented via
/proc/mdstat, which changes how one must monitor for failures.  When
only 2 drives are possible, odds are you want RAID1 and it might as
well be under the PV.
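
The practical difference for monitoring looks something like this
(vg0 is just an example VG name):

    # MD RAID status, e.g. the /boot array
    cat /proc/mdstat
    # LVM RAID status has to come from the LVM tools instead
    lvs -o lv_name,lv_health_status,sync_percent vg0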

Using LVM RAID is interesting in that it lets you decide later, without
much effort, the redundancy you want on a per mount point basis.  E.g.,
on a 4 drive system it is just two commands (sketched below) to create
mount points (LVs) using RAID10 for /var/lib/*sql but RAID5 for
/var/www -- that's not impossible to do with MD RAID under PVs, but it
probably means many more commands and some juggling.
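
Roughly, assuming a VG named vg0 with four PVs (names and sizes are
only examples):

    # RAID10 LV for the databases: 2 stripes, each mirrored once
    lvcreate --type raid10 -i 2 -m 1 -L 100G -n sql vg0
    # RAID5 LV for the web content: 3 data stripes plus parity
    lvcreate --type raid5 -i 3 -L 50G -n www vg0

Then mkfs, mount, and add fstab entries as usual.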


/mark


