Re: Migrating RAID1 to new system

On 30Jun2019 21:01, Alex <mysqlstudent@xxxxxxxxx> wrote:
>On Fri, Jun 28, 2019 at 8:51 PM Cameron Simpson <cs@xxxxxxxxxx> wrote:
>>On 28Jun2019 11:03, Alex <mysqlstudent@xxxxxxxxx> wrote:
>>>I have an older Fedora install that I need to upgrade. It has 8 Intel
>>>SSD 520 Series 240GB disks in there now, mounted on root using an LSI
>>>SAS 9260-8i controller. There is about 1.3TB usable space.
>>>
>>>I need to upgrade it to add more space. If I bought eight 512GB SSDs,
>>>how do I calculate how much usable space I would have after
>>>partitioning/formatting using XFS?

>>Unsure what the overheads of the partitions and XFS are, but they are
>>small; the RAID setup has a MUCH larger impact on available space. I
>>would just figure out how much space you lose to the RAID config (eg
>>50% for RAID-1 with 2 drives; for RAID5 with one parity drive and one
>>hot spare, N data drives out of N+2 total leaves N/(N+2) usable) and
>>round it down a bit.

>I should have been more clear - I'm trying to make do with two 2TB
>disks to hold the 1.3TB of data for the interim while I rebuild the
>server itself with new SSDs.

I would be very surprised if 1.3TB did not fit on a 2TB volume. Even a great many small files (eg your email service, if using Maildir folders) would be unlikely to add that much overhead.

>>Can you describe your use case? I'm surprised you've got that much space
>>"as root".  I normally make the OS drive (or RAID set) pretty small,
>><10GB, and maintain the larger areas totally separate.

>It's a POP/IMAP/SMTP mail server for about two thousand accounts.

Ah, ok.

>>A smaller scale example than yours, our home server has:

>Yes, thank you. This should have been better partitioned when it was
>installed many years ago.

This might be an opportunity to improve things. I highly recommend separating the OS drives from the non-OS data drives if that is feasible.

[...]
>I've since learned it takes entirely too long to copy 1.3TB to two 2TB
>disks. I can't keep the system down that long.

You don't need to.

1: Set up the 2TB filesystem; my strong preference is XFS.

2: "cp -a" the trees into it. Don't worry that they will change during the copy.

3: "rsync -ia --delete" the live volume into the copy. Time this.

4: Schedule downtime.

5: At downtime: shut down mail services etc. Or even reboot into single user mode. Even remount the mail volume readonly ("mount -o remount,ro /the/mail/volume"); that should be feasible once the services are off.

6: Run the rsync again. Repeat it until it runs clean (empty output).
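Putting those steps together, a rough sketch - the device name and the /mnt/interim mount point are placeholders of mine, not your actual layout:

  # 1: make and mount the new XFS filesystem (assuming /dev/sdX1 is the 2TB volume)
  mkfs.xfs /dev/sdX1
  mkdir -p /mnt/interim
  mount /dev/sdX1 /mnt/interim

  # 2: bulk copy while the system is still live
  cp -a /the/mail/volume/. /mnt/interim/

  # 3: first catchup pass - time this one
  rsync -ia --delete /the/mail/volume/ /mnt/interim/

  # 5 and 6: at downtime, with the services stopped
  mount -o remount,ro /the/mail/volume
  rsync -ia --delete /the/mail/volume/ /mnt/interim/   # repeat until it prints nothing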

Your backup is complete. Do the upgrade. If the old raidset FS isn't XFS, I encourage you to seize the opportunity to make it XFS this time.

The downside is that pulling the data back onto the new volumes will still be time consuming.

Caveats:

I'm hoping you do not have hardlinks in the tree to move. If you do, this gets more expensive: you need tar|tar or rsync with the -H option to preserve hardlinks, and if there are many of them that is memory intensive (he says, from bitter experience moving an EXTREMELY hardlinked backup tree to a new volume - I intend to dig into xfsdump for the future). Happy to recount my experiences here and to discuss alternatives if this becomes necessary.

You can check this with:

  find /the/volume/to/copy -type f -links +1 -ls
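If that does turn up hardlinks, the two copy approaches look roughly like this (reusing /mnt/interim from the sketch above; rsync's hardlink tracking under -H is the memory-hungry part):

  # rsync, preserving hardlinks
  rsync -iaH --delete /the/volume/to/copy/ /mnt/interim/

  # or the tar|tar pipeline; GNU tar preserves hardlinks within one stream
  (cd /the/volume/to/copy && tar cf - .) | (cd /mnt/interim && tar xpf -)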

The copy-back downtime is large.

>I'm going to have to transfer all the accounts, configuration data,
>and user data on these two disks to another interim system with the
>production system IPs while I entirely rebuild the new one, copy the
>bulk of the data across the network to it, stop all services on both
>systems, sync the differences that occurred during the main data
>transfer, change the IPs back, then start all the production services.

Do you have the hardware to assemble the new raidset with the new drives and have both online at once (with two machines I suppose)?

If so, you can do the cp-then-rsync directly to the new drives without the intermediate 2TB volume, which means there's no time-consuming copy back.
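Over the network the same dance works with rsync alone - "newhost" here is a placeholder for the rebuilt machine:

  # bulk copy while live (expect this to take a while)
  rsync -ia /the/mail/volume/ newhost:/the/mail/volume/

  # at downtime, with services stopped on both ends; repeat until clean
  rsync -ia --delete /the/mail/volume/ newhost:/the/mail/volume/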

Cheers,
Cameron Simpson <cs@xxxxxxxxxx>
_______________________________________________
users mailing list -- users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to users-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/users@xxxxxxxxxxxxxxxxxxxxxxx


