On Tue, 2017-01-31 at 13:13 +0100, Jan Kurik wrote:
> = Proposed Self Contained Change: Anaconda LVM RAID =
> https://fedoraproject.org/wiki/Changes/AnacondaLVMRAID
>
> Change owner(s):
> * Vratislav Podzimek (Anaconda/Blivet) <vpodzime AT redhat DOT com>
> * Heinz Mauelshagen (LVM) <heinzm AT redhat DOT com>
>
> Use LVM RAID instead of LVM on top of MD RAID in the Anaconda
> installer.
>
> == Detailed Description ==
> In the current situation, when a user chooses LVM (or Thin LVM)
> partitioning in the Custom Spoke and then sets a RAID level for the
> VG, Anaconda (and Blivet) create an MD RAID device which is used as
> a PV for the VG. With this change we are going to use LVM RAID
> directly instead. That means that all the LVs in that VG will be
> RAID LVs with the specified RAID level. LVM RAID provides the same
> functionality as MD RAID (it shares the same kernel code) with
> better flexibility and additional features expected in the future.
>
> == Scope ==
> * Proposal owners:
> -- Blivet developers: Support creation of LVM RAID in a similar way
>    as LVM on top of MD RAID. (Creation of RAID LVs is already
>    supported.)
> -- Anaconda developers: Use the new way to create LVM RAID instead
>    of creating LVM on top of MD RAID.
> -- LVM developers: LVM RAID already has all features required by
>    this change.
>
> * Other developers:
>   N/A (not a System Wide Change)
>
> * Release engineering:

Please ensure upgrades of systems using MD RAID are properly tested. My
server at home broke on upgrading to Fedora 22 (#1201962), and also on
upgrading to Fedora 20 before that (IIRC). This implies that even when
MD RAID was still being used by default, upgrades weren't very well
tested. With a move away from MD to LVM RAID, I'm concerned that things
will only get worse. So let's please ensure that we have proper test
coverage for existing systems.
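
For anyone who hasn't compared the two stacks directly, the difference
looks roughly like this on the command line (a sketch only; the device,
VG and LV names are made-up examples, not the exact calls Anaconda or
Blivet make):

  # Current layout: LVM on top of MD RAID
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  pvcreate /dev/md0
  vgcreate fedora /dev/md0
  lvcreate -L 20G -n root fedora      # plain LV; the RAID sits below the PV

  # Proposed layout: LVM RAID, with RAID handled per-LV by LVM itself
  pvcreate /dev/sda1 /dev/sdb1
  vgcreate fedora /dev/sda1 /dev/sdb1
  lvcreate --type raid1 -m 1 -L 20G -n root fedora   # RAID1 LV (same MD kernel code)

Either way the kernel's MD code does the actual mirroring; what changes
is which layer manages it, and what the resulting stack looks like to
the admin, to upgrade tooling, and to Anaconda.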