RE: Raid 5 to Raid 1 (half of the data not required)

On Wed, 24 Aug 2011 18:42:35 +1000 NeilBrown <neilb@xxxxxxx> wrote:
> On Wed, 24 Aug 2011 10:21:32 +0200 (CEST) Mikael Abrahamsson <swmike@xxxxxxxxx> wrote:
> > On Wed, 24 Aug 2011, Gordon Henderson wrote:
> > 
> > > This isn't as "glamorous" as Neil's method involving lots of mdadm 
> > > commands, shrinks and grows, but sometimes it's good to keep things at a 
> > > simpler level?
> > 
> > Another way would be to add the new raid1 with a missing drive to the VG, 
> > then pvmove all extents off the existing raid5 md pv, vgreduce away 
> > from it, stop the raid5, zero-superblock it, and add one drive to give 
> > the raid1 redundancy.
> > 
> > But that has little to do with Linux raid, and everything to do with LVM. 
> > It also means you can do everything online, since pvmove doesn't require 
> > taking anything offline.
> > 
> 
> There are certainly lots of approaches. :-)
> But every approach will require either copying or shrinking the filesystem,
> and as extX doesn't support online shrinking, the filesystem will have to be
> effectively offline while that shrink happens.
> (If you shrink by copying, then it could technically be online, but it had
> better not be written to.)
> 

Wow! Thank you so much, everyone, for your feedback; I am truly grateful :) 

Before tackling this task I plan to delete some unnecessary files so there is less to back up, then make the all-important backup, and finally attempt the migration. First, though, I had to remind myself how I originally built the LVM on top of the RAID 5 array :)
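
Neil's point about offline shrinking was the part that worried me, so I noted down my understanding of what a shrink would involve, in case it turned out to be needed. The sizes and mount point below are made-up examples, not my real values:

  umount /dev/masterVG/backupLV
  e2fsck -f /dev/masterVG/backupLV         # forced check; resize2fs insists on it
  resize2fs /dev/masterVG/backupLV 600G    # shrink the filesystem first...
  lvreduce -L 650G /dev/masterVG/backupLV  # ...then the LV, leaving headroom
  resize2fs /dev/masterVG/backupLV         # regrow the fs to fill the LV exactly
  mount /dev/masterVG/backupLV /mnt/backup # /mnt/backup is just an example

As it turns out below, I don't think I will need any of this. Here is what parted shows for the array: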


Model: Linux Software RAID Array (md)
Disk /dev/md0: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  2000GB  2000GB  primary               lvm

That partition is the single LVM physical volume (pvdisplay):

 --- Physical volume ---
  PV Name               /dev/md0p1
  VG Name               masterVG
  PV Size               1.82 TiB / not usable 3.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              476932
  Free PE               261031
  Allocated PE          215901
  PV UUID               xiS8is-RR6D-Swre-IHQN-yGY2-cNmJ-wGGBY7

and it is the only PV in the volume group (vgdisplay):

  --- Volume group ---
  VG Name               masterVG
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.82 TiB
  PE Size               4.00 MiB
  Total PE              476932
  Alloc PE / Size       215901 / 843.36 GiB
  Free  PE / Size       261031 / 1019.65 GiB
  VG UUID               eoZgIp-50Wb-Lrhg-Sawt-rWDV-YIDy-Ez2Glr



So the entire RAID 5 array carries a single partition acting as one LVM physical volume, and that PV is the only one in the volume group.
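
As a quicker sanity check of that layout, I believe the short-form LVM listing commands should show the same picture at a glance:

  pvs   # should show the one PV, /dev/md0p1, in masterVG
  vgs   # should show masterVG with 1 PV and 2 LVs
  lvs   # should show both LVs, including the backup LV detailed below

And here is the logical volume that matters for the migration: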


  --- Logical volume ---
  LV Name                /dev/masterVG/backupLV
  VG Name                masterVG
  LV UUID                wc61ER-uoNn-ynXI-2v64-wpa8-ON3g-im4fo8
  LV Write Access        read/write
  LV Status              available
  # open                 2
  LV Size                700.00 GiB
  Current LE             179200
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           254:9


The logical volume is 700.00 GiB, which is comfortably below the roughly 931 GiB usable on the RAID 1 mdadm array I plan to migrate to (two 1 TB drives). In fact, the vgdisplay above shows only 843.36 GiB allocated across both LVs, so everything currently allocated would fit. I therefore don't think I will need to shrink the ext4 filesystem at all, which hopefully means I can complete the entire process over some time while keeping the data available online.
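
To be safe, I also want to confirm that the ext4 filesystem inside backupLV is the same size as the LV and not larger. My understanding is that the superblock figures from tune2fs give this directly:

  tune2fs -l /dev/masterVG/backupLV | grep -Ei 'block (count|size)'
  # Filesystem size = block count * block size; for a 700 GiB filesystem
  # with 4 KiB blocks that should be 183500800 blocks.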

I remember that I had good reasons for using LVM, but I will have to get reacquainted with the LVM commands involved, such as pvmove, vgextend, and vgreduce...
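
For my own notes, here is a rough sketch of Mikael's pvmove approach as I understand it. The device names /dev/md1, /dev/sdc1, /dev/sdd1 and /dev/sdX1 are placeholders I have invented for the new array, the two 1 TB drives, and the old RAID 5 members; I will check every step against the man pages before running anything:

  # Create the new RAID 1 degraded, with the second drive "missing":
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 missing

  # Turn it into a PV and pull it into the existing volume group:
  pvcreate /dev/md1
  vgextend masterVG /dev/md1

  # Move all allocated extents (843.36 GiB here) off the old RAID 5 PV
  # (online, but slow):
  pvmove /dev/md0p1 /dev/md1

  # Drop the old PV from the VG and retire the RAID 5:
  vgreduce masterVG /dev/md0p1
  pvremove /dev/md0p1
  mdadm --stop /dev/md0
  mdadm --zero-superblock /dev/sdX1    # repeat for each old RAID 5 member

  # Finally, add the second 1 TB drive so the RAID 1 gains redundancy:
  mdadm --add /dev/md1 /dev/sdd1

If I have understood correctly, the data stays online for the whole pvmove, which is the main attraction of doing it this way.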


Thanks again to everyone for their help :D



