Re: mirroring existing boot drive sanity check

On 09/08/2022 at 16:50, David T-G wrote:

I have an existing 128G SSD.

   Disk /dev/sda: 122104MiB
   Sector size (logical/physical): 512B/512B
   Partition Table: gpt
   Disk Flags: pmbr_boot

   Number  Start     End        Size      File system     Name Flags
           0.02MiB   1.00MiB    0.98MiB   Free Space
    1      1.00MiB   33793MiB   33792MiB  linux-swap(v1)  diskfarm-swap swap
    2      33793MiB  66561MiB   32768MiB  xfs             diskfarmsuse
    3      66561MiB  99329MiB   32768MiB                  diskfarmknop  legacy_boot
    4      99329MiB  122104MiB  22775MiB  xfs             diskfarm-ssd

I have obtained a shiny new 256G SSD to use as a mirror.
My final-view plan is, in
fact, to replace the 128 with another 256 and grow the -ssd data partition.

For a typical mirror-an-existing, I think that I need to create all of my
slices and the [degraded] mirror on the new, copy over the old, boot from new,
and then treat old as just another disk to shove in.  There's the question of
making partitions larger for the RAID superblock info.
If you choose to copy existing block device content (with dd or the like) into a RAID array, then the RAID array device must be at least as large as the original device, which implies that the RAID member devices must be slightly bigger to account for the RAID superblock.
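
A rough sketch of that sizing check for the dd path, using the 22775 MiB /dev/sda4 from the parted listing above. The 128 MiB data offset is an assumption; the real value varies by mdadm version and device size, so read it from 'mdadm --examine' on an actual member.

```shell
# Sketch: how big must a new RAID member partition be to hold a
# dd copy of the old /dev/sda4?
OLD_PART_MIB=22775      # /dev/sda4 size from the parted listing above
DATA_OFFSET_MIB=128     # assumed md v1.2 superblock/data offset
NEW_PART_MIB=$((OLD_PART_MIB + DATA_OFFSET_MIB))
echo "new member partition needs at least ${NEW_PART_MIB} MiB"
```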

If you choose to copy filesystem content (with cp, rsync or the like) into a new filesystem, then you only need the RAID array device to be big enough to fit the content.

As you can see, I have no free space on the little guy.

Actually no, we cannot see that. We can only see that there is no free space outside the partitions, not whether there is any free space inside them.

what do I do with the old guy?

Do whatever you like with the old drive except using it in the RAID array. Why bother doing that and then having to resize the RAID array when you add the second new drive? Resizing a RAID array is a pain in the ass. Just build the RAID array on the two new drives from the start.
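
For reference, building the mirror degraded on one new drive and adding the second later looks roughly like this. The device names /dev/sdb1 and /dev/sdc1 are assumptions; verify with lsblk first, since these commands are destructive.

```shell
# Create a RAID1 array with only one member present; the 'missing'
# keyword leaves the second slot empty (degraded) for now.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing

# ...copy the data over and verify you can boot from it, then add
# the second new drive; md will resync onto it automatically.
mdadm --add /dev/md0 /dev/sdc1
```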

if I'm essentially starting from scratch, should I
mirror the entire [yes, identical] drive and partition the metadevice,
*BSD-style, or mirror individual partitions?

IMO a single RAID array is simpler. If your distribution supports it, you can either partition it with a partition table or use it as an LVM physical volume and create logical volumes.
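
The LVM-on-md variant is roughly the following sketch; the volume group and logical volume names are illustrative, and it assumes /dev/md0 already exists and holds nothing you care about.

```shell
# Use the whole mirror as one LVM physical volume...
pvcreate /dev/md0
vgcreate vg0 /dev/md0

# ...then carve out logical volumes instead of partitions.
lvcreate -L 32G -n root vg0
mkfs.xfs /dev/vg0/root
```

Growing a logical volume later (e.g. after replacing the 128 with a 256) is then an lvextend rather than a partition-table edit.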

However, I do not think it is possible to cleanly boot from an unpartitioned drive used as a software RAID member, as a RAID-capable boot loader could hardly fit in the 4 KiB area before the RAID superblock. So you still have to create a partition table on the raw drives. Also, if you use the GPT format and the GRUB boot loader, you need to create a small (100 kB to 1 MB) partition with type "BIOS boot" (or the libparted bios_grub flag).
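
Creating that BIOS boot partition with parted looks roughly like this; /dev/sdb is an assumed device name, and mklabel wipes the existing partition table.

```shell
# New GPT label, then a tiny partition flagged for GRUB's core image.
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart grubbios 1MiB 2MiB
parted -s /dev/sdb set 1 bios_grub on
```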


