On 28Jun2019 11:03, Alex <mysqlstudent@xxxxxxxxx> wrote:
I have an older fedora install that I need to upgrade. It has 8 Intel
SSD 520 Series 240GB disks in there now, mounted on root using an LSI
SAS 9260-8i controller. There is about 1.3TB usable space.
I need to upgrade it to add more space. If I bought eight 512GB SSDs,
how do I calculate how much usable space I would have after
partitioning/formatting using XFS?
Unsure what the overheads of the partitions and XFS are, but they are
small; the RAID setup has a MUCH larger impact on available space. I
would just figure out how much space you lose to the RAID config (eg
50% for RAID-1 with 2 drives, or N usable out of N+2 drives for RAID-5
with one parity and one hot spare, etc) and round it down a bit.
Can you describe your use case? I'm surprised you've got that much space
"as root". I normally make the OS drive (or RAID set) pretty small,
<10GB, and maintain the larger areas totally separate.
A smaller scale example than yours, our home server has:
- / on an onboard SD card, currently 4GB, but that is way too small
  now; I need to do a fresh install with a bigger card. My attempts at
  copying it to a larger card and resizing the partitions have been
  catastrophic unbootable failures, largely due to grub being a hostile,
  counterintuitive POS - unsure how much of that I should really blame
  on the historic IBM PC architecture, but grub's documentation doesn't
  help
- /home and some swap on a 500GB SSD
- /app8tb which is a RAID1 of 2 8TB SATA drives, which is largely the
media server storage and scratch space
This means that I've got some physical separation of the OS from the
other areas. When I rebuild this machine I'll just pull out the "/" SD
card, put in a better and bigger one, and install a more modern release.
No mucking with the other drives at all.
Can I safely use XFS on the root
partition?
Yes.
My strategy to upgrade the system using the eight new 512GB SSDs would be:
- Add two regular 2TB disks as RAID1 to the existing system
As hardware RAID or mdadm software RAID? Guessing the latter?
- Copy the 1.3TB of user data onto it
Definitely. You can see my setup sidesteps this requirement. But you've
clearly got a different arrangement.
- Remove the eight existing 240GB SSDs
- Install the eight new 512GB SSDs
- Install fedora30 onto the new system
- Migrate the old configs from backup onto the new system
- Mount the two RAID1 drives onto the new system
- Copy data from RAID1 array to new system
Sounds sound to me.
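Assuming mdadm software RAID, the staging-mirror step above might look
something like this (the device names, the "staging" label and the
paths are all placeholders - check lsblk and substitute your own):

```shell
# Build the temporary 2TB RAID1 mirror from the two new disks.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdi /dev/sdj
mkfs.xfs -L staging /dev/md0
mkdir -p /mnt/staging
mount LABEL=staging /mnt/staging
# Copy the user data, preserving hardlinks, ACLs and xattrs.
rsync -aHAX /home/ /mnt/staging/home/
```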
Do I need to migrate the RAID1 config, or will mdadm figure it out on its own?
mdadm figures this stuff out; it should reassemble the software RAID
sets automatically. You will have to hand-mount the /dev/mdX devices
yourself of course.
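On the new install that reassembly and hand-mount might look like this
(mount point and md device name are illustrative):

```shell
mdadm --assemble --scan        # scan for and reassemble existing arrays
cat /proc/mdstat               # confirm the array came up clean
mkdir -p /mnt/staging
mount /dev/md0 /mnt/staging    # hand-mount; add to fstab if it should persist
```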
Anything else I should watch for?
It is very useful to use the "-L label" option with any filesystems you
make by hand - then you can use LABEL= in the fstab for mounting. Modern
installs also give filesystems UUIDs, which are genuinely unique, but I
personally find them hard to use because they are not memorable.
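For example (the "appdata" label and mount point here are made up for
illustration):

```shell
# Label an XFS filesystem at creation time...
mkfs.xfs -L appdata /dev/md0
# ...or relabel an existing, unmounted one:
xfs_admin -L appdata /dev/md0
# Then mount it by label in /etc/fstab:
#   LABEL=appdata  /app  xfs  defaults  0 0
```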
I recently discovered the "lsblk -f" command, VERY useful for seeing
devices, and filesystems and their mountedness.
I think my main argument here is that you should try to have separate
media for the OS (a small SSD or something) so that you can do a
complete reinstall with minimal interference and risk to your non-OS
data.
Cheers,
Cameron Simpson <cs@xxxxxxxxxx>
_______________________________________________
users mailing list -- users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to users-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/users@xxxxxxxxxxxxxxxxxxxxxxx