Re: What is better a 2nd drive for Raid 1 or a backup one?

On Sunday, December 29, 2019 12:54:48 PM MST Chris Murphy wrote:
> On Sat, Dec 28, 2019 at 2:01 PM Roberto Ragusa <mail@xxxxxxxxxxxxxxxx>
> wrote:
> >
> > On 12/26/19 10:54 PM, Chris Murphy wrote:
> > 
> > > On Tue, Dec 24, 2019 at 2:56 PM Cameron Simpson <cs@xxxxxxxxxx> wrote:
> > >>
> > >> Oh yes, one more thing. If you do the RAID1 thing: either make a shiny
> > >> new RAID1 and copy to it, or practice the transition with test drives.
> > >> Do not risk your high value data by trying to "in place RAID1" it.
> > >
> > > I'm not sure if this is even possible with mdadm or lvm. For sure they
> > > have no way of knowing which mirror is correct. But even if it's
> > > possible, it's a bit complicated because it implies repartitioning in
> > > order to make room for the necessary metadata area.
> >
> > The "which copy is correct" problem is solvable (see the sketch below):
> > 1) you can create a 1-disk RAID1 on the partition with the correct data
> >    (it will tell you your config is stupid, but you can force it)
> > 2) you then tell mdadm to change the number of drives to 2 (mdadm --grow);
> >    this will be a 2-disk RAID1 working in 1-disk degraded mode
> > 3) you then tell mdadm you have a new drive for that RAID1 (mdadm --add);
> >    the sync from the 1st disk to the 2nd will begin
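> >
> > A minimal sketch of those three steps (the device names /dev/sda1 for
> > the partition with the data and /dev/sdb1 for the new disk are just
> > placeholders):
> >
> >   # 1) build a 1-disk RAID1 on the existing partition; --metadata=1.0
> >   #    keeps the superblock at the end, --force accepts the odd config
> >   mdadm --create /dev/md0 --level=1 --raid-devices=1 \
> >         --metadata=1.0 --force /dev/sda1
> >   # 2) grow the array to 2 devices; it now runs degraded
> >   mdadm --grow /dev/md0 --raid-devices=2
> >   # 3) add the new disk; the sync from sda1 to sdb1 starts
> >   mdadm --add /dev/md0 /dev/sdb1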
> >
> > The metadata area problem is a bit tricky, but it is not necessary
> > to repartition; just make your filesystem a bit smaller than it is now.
> > So (an ext4 example follows below):
> > a) unmount the filesystem
> > b) resize the filesystem to be 100MB smaller (you actually need just a
> >    few kB, but let's play it very safe; we will get the space back later)
> > c) create the RAID etc., following steps 1) 2) 3) described above
> > d) resize the filesystem without any size parameter (i.e. let it expand
> >    to occupy the 99.9MB of extra space you have on the RAID device)
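> >
> > For ext4 those resizes would look roughly like this (sizes are only
> > illustrative, and resize2fs wants a clean fsck before shrinking):
> >
> >   umount /dev/sda1             # a) unmount
> >   e2fsck -f /dev/sda1          # required before an offline resize
> >   resize2fs /dev/sda1 499900M  # b) e.g. shrink a ~500GB fs by ~100MB
> >   # c) create the RAID as in steps 1) 2) 3) above
> >   resize2fs /dev/md0           # d) grow again to fill the md device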
> 
> This requires explicitly choosing mdadm metadata format 1.0 when
> creating the array. The default format, 1.2, puts the superblock at a
> 4K offset from the start of the device; the superblock itself is 256
> bytes, and then there's a ~65MiB gap before the start of the array data.
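> 
> A quick way to see which format an existing member uses and where its
> data starts (the device name is just an example; output fields abridged):
> 
>   mdadm --examine /dev/sda1 | grep -E 'Version|Offset'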
> 
> > In any phase after 1) you can also mount the filesystem again (from
> > /dev/md*), since all the rest can be done on a mounted filesystem (no
> > problem with d) too).
> >
> > There are some things to notice:
> > - step b) requires a filesystem that supports shrinking; this can be
> >   done with ext4 but is not supported on xfs (BTW, this is why I refuse
> >   to consider xfs a serious filesystem)
> > - step d) can be done on either a mounted or an unmounted filesystem
> >   for ext4, but can only be done on a mounted filesystem on xfs
> >   (another reason why I do not like xfs)
> > - the RAID creation in step 1) must be done with a --metadata option
> >   that forces the metadata to the END of the space (so 0.90 or 1.0),
> >   since you are not going to shift all your data forward to make space
> >   at the beginning
> >
> > At the end of the day, it can be done, but you really have to know what
> > you are doing; a small error can lead to a disaster. I would do this only
> > on data I already have a backup of, or at least I would try the whole
> > procedure on a small test filesystem before doing it on the real stuff.
> 
> It could be easier and safer to do this with LVM: pvcreate ->
> vgextend -> lvconvert (sketched below).
> 
> However, by default, the installer uses all the VG space for root,
> home, and swap. And LVM itself doesn't hold any space in reserve for
> future use by lvconvert, neither for converting from thick to thin
> provisioning nor from linear to a raid type. The lvconvert to type
> raid1 needs to create metadata subvolumes on each physical device, or
> the lvconvert command fails. So yeah, you probably end up needing to
> do an fs shrink here too, and it rapidly gets esoteric. You could
> swapoff, blow away the swap LV, do the lvconvert, and create a new
> (very slightly smaller) swap LV and format it - that way you don't
> have to unmount any ext4 volumes.
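> 
> A minimal sketch of that LVM path (the VG name "fedora" and LV name
> "root" are placeholders for whatever the installer created):
> 
>   pvcreate /dev/sdb1                       # prepare the new disk
>   vgextend fedora /dev/sdb1                # add it to the existing VG
>   lvconvert --type raid1 -m 1 fedora/root  # mirror the root LV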
> 
> In comparison, this is a lot more straightforward on Btrfs.
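> 
> For comparison, on Btrfs the conversion is roughly two commands on the
> mounted filesystem (the mount point /mnt is a placeholder):
> 
>   btrfs device add /dev/sdb1 /mnt
>   btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt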
> 
> -- 
> Chris Murphy

If that's the case for LVM, then it seems that mdadm would be the easier and 
safer option. You simply create an array with a missing disk, copy the data 
over, then add the existing disk to the array.
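
A minimal sketch of that approach (assuming /dev/sda1 holds the existing 
data and /dev/sdb1 is the new disk; names are placeholders):

  # create a 2-disk RAID1 with one slot deliberately left empty
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
  mkfs.ext4 /dev/md0   # fresh filesystem on the degraded array
  # ... mount /dev/md0, copy the data over, update fstab ...
  # then add the old disk; it resyncs from the new one
  mdadm --add /dev/md0 /dev/sda1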

-- 
John M. Harris, Jr.



_______________________________________________
users mailing list -- users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to users-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/users@xxxxxxxxxxxxxxxxxxxxxxx


