Procedure to grow raid1

Planning on putting larger drives in my RAID1 backup server.
I think I know what to do, but as it requires a bit of a drive I want to do it right the first time :)


Currently have two 200 GB SATA drives.
Changing to 300 GB drives.

cat /proc/mdstat
Personalities : [raid1] [raid5]
md0 : active raid1 sda1[0] sdb1[2]
     586240 blocks [3/2] [U_U]

md2 : active raid1 sda2[0] sdb2[1]
     194771968 blocks [3/2] [UU_]

md0 is /
md2 is the rest of the drives and the LVM2 PV

Shut down, replace sdb with a 300 GB drive.

Run fdisk, making sdb1 a little larger and sdb2 the remainder of the drive, both type fd (Linux raid autodetect).

mdadm /dev/md0 -a /dev/sdb1
mdadm /dev/md2 -a /dev/sdb2
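The "long lunch" is the resync; a hedged sketch of keeping an eye on it before touching the drives again (device names from the post; --wait needs a reasonably recent mdadm):

```shell
# Watch the rebuild progress; proceed only once both arrays show all
# mirrors up (here [U_U] / [UU_], since the arrays have 3 slots).
watch -n 10 cat /proc/mdstat

# Or, if your mdadm supports it, block until the resync completes:
mdadm --wait /dev/md0 /dev/md2
```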

Go for long lunch ;-)

put grub on /dev/sdb
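A sketch of that grub step, assuming grub legacy with /boot on the first partition of the second disk (hd1 in grub's naming); adjust to your layout:

```shell
# Interactive grub shell session -- hd1 = second BIOS disk (sdb here),
# (hd1,0) = its first partition, which holds /boot on this box.
grub
# grub> root (hd1,0)
# grub> setup (hd1)
# grub> quit
```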

Shut down, replace /dev/sda with a 300 GB drive.

Boot or do what is required to get it to boot.

mdadm /dev/md0 --grow --size=max
mdadm /dev/md2 --grow --size=max

Boot from rescue cd

resize2fs /dev/md0
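One note on the rescue-CD step: an offline resize2fs usually wants a forced fsck first. A minimal sketch using the device name from the post:

```shell
# Force a clean check of the unmounted filesystem, then grow it to
# fill the (now larger) md0.
e2fsck -f /dev/md0
resize2fs /dev/md0
```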

reboot
Next comes the part that scares me :(
I need to extend the LVM PV.
My guess is I run vgcfgbackup

Edit the dump

# Generated by LVM2: Tue May 24 11:41:45 2005

contents = "Text Format Volume Group"
version = 1

description = "Created *after* executing 'vgcfgbackup'"

creation_host = "fonbackup"     # Linux fonbackup 2.6.11-p4smp #1 SMP Tue Apr 19 14:45:37 CDT 2005 i686
creation_time = 1116952905      # Tue May 24 11:41:45 2005

vg1 {
       id = "KB6h2q-yCFm-EOyx-HDWe-ToIK-Fy72-UEg4US"
       seqno = 8
       status = ["RESIZEABLE", "READ", "WRITE"]
       extent_size = 8192              # 4 Megabytes
       max_lv = 0
       max_pv = 0

       physical_volumes {

               pv0 {
                       id = "e4GFl5-ri5o-ezD0-bKpl-b8fU-s4RO-Cqz5br"
                       device = "/dev/md2"     # Hint only

                       status = ["ALLOCATABLE"]
                       pe_start = 384
                       pe_count = 47551        # 185.746 Gigabytes
               }
       }
.......... more stuff.................

I figure the trick is to enter the correct value for pe_count.
extent_size is given in 512-byte sectors (8192 sectors = 4 MB), and blockdev --getsize also reports 512-byte sectors.

On the current drives
blockdev --getsize /dev/sda2
389544120
389544120/8192= 47551.772460938


So I should take the number of sectors, divide by 8192, and round down to get the new pe_count.
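That arithmetic as a quick shell check (the sector count is the example value from blockdev --getsize above; shell integer division truncates, which gives the rounded-down extent count pe_count needs):

```shell
SECTORS=389544120      # from: blockdev --getsize /dev/sda2 (512-byte sectors)
EXTENT_SECTORS=8192    # extent_size from the vgcfgbackup dump (4 MB extents)

# Integer division rounds down: 389544120 / 8192 = 47551 (remainder dropped)
PE_COUNT=$((SECTORS / EXTENT_SECTORS))
echo "$PE_COUNT"       # prints 47551
```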

Then run vgcfgrestore.
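A sketch of the restore and a sanity check afterwards, assuming the edited dump sits in vgcfgbackup's default location (/etc/lvm/backup/vg1; VG name from the dump above):

```shell
# Restore the edited metadata; -f points at the edited backup file.
vgcfgrestore -f /etc/lvm/backup/vg1 vg1

# Re-read and verify: the PV size / PE count on /dev/md2 and the free
# extents in vg1 should now reflect the new pe_count.
vgscan
pvdisplay /dev/md2
vgdisplay vg1
```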

Anything I should do to make sure it is all correct?

Then partition and add the second drive.

Thanks

John


