Replacing failed disk in raid volume without hot spare

Hello,
I created a RAID6 volume over 5 disks, with only those 5 PVs in the VG. Then I removed one disk, rebooted, and added a new one. The question is how to replace the missing device with
the new one. Everything works fine if I have a 6th disk in the VG as a hot spare; then I can simply run "lvconvert --repair".
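For reference, this is roughly how I set it up (the removed disk was the first PV; I'm writing it as /dev/sdb here, and the lvcreate size is from memory):

```shell
# Create the VG from exactly five PVs -- no spare
vgcreate test /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# RAID6 over 5 devices: 3 data stripes + 2 parity
lvcreate --type raid6 --stripes 3 --size 1G --name raid6 test
```

Then I shut down, pulled the first disk, booted, and attached a fresh disk that came up as /dev/sdg.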

I tried all of these, but nothing seems to work:

[root@lvm ~]# lvconvert --repair test/raid6
  Couldn't find device with uuid jnGEwl-EVqs-yQ0M-rUxa-gLqD-fHNU-AJUKjl.
Attempt to replace failed RAID images (requires full device resync)? [y/n]: y
  Insufficient suitable allocatable extents for logical volume : 87 more required
  Failed to allocate replacement images for test/raid6
  Failed to replace faulty devices in test/raid6.

This doesn't work either, with or without /dev/sdg:
[root@lvm ~]# lvconvert --replace raid6_rimage_0 test/raid6 /dev/sdg
  Couldn't find device with uuid jnGEwl-EVqs-yQ0M-rUxa-gLqD-fHNU-AJUKjl.
  Cannot change VG test while PVs are missing.
  Consider vgreduce --removemissing.

[root@lvm ~]# vgextend --restoremissing test /dev/sdg
  Couldn't find device with uuid jnGEwl-EVqs-yQ0M-rUxa-gLqD-fHNU-AJUKjl.
  WARNING: PV /dev/sdg not found in VG test
  No PV has been restored.


My LVM looks like this at the moment:

[root@lvm ~]# pvs
  Couldn't find device with uuid jnGEwl-EVqs-yQ0M-rUxa-gLqD-fHNU-AJUKjl.
  PV             VG     Fmt  Attr PSize    PFree 
  /dev/sda2      vg_lvm lvm2 a--     7.51g      0
  /dev/sdc       test   lvm2 a--  1020.00m 672.00m
  /dev/sdd       test   lvm2 a--  1020.00m 672.00m
  /dev/sde       test   lvm2 a--  1020.00m 672.00m
  /dev/sdf       test   lvm2 a--  1020.00m 672.00m
  /dev/sdg              lvm2 a--     1.00g   1.00g
  unknown device test   lvm2 a-m  1020.00m 672.00m

[root@lvm ~]# lvs -a -o name,lv_attr,devices test
  Couldn't find device with uuid jnGEwl-EVqs-yQ0M-rUxa-gLqD-fHNU-AJUKjl.
  LV               Attr     Devices                                                                                 
  raid6            rwi---r- raid6_rimage_0(0),raid6_rimage_1(0),raid6_rimage_2(0),raid6_rimage_3(0),raid6_rimage_4(0)
  [raid6_rimage_0] Iwi---r- unknown device(1)                                                                       
  [raid6_rimage_1] Iwi---r- /dev/sdc(1)                                                                             
  [raid6_rimage_2] Iwi---r- /dev/sdd(1)                                                                             
  [raid6_rimage_3] Iwi---r- /dev/sde(1)                                                                             
  [raid6_rimage_4] Iwi---r- /dev/sdf(1)                                                                             
  [raid6_rmeta_0]  ewi---r- unknown device(0)                                                                       
  [raid6_rmeta_1]  ewi---r- /dev/sdc(0)                                                                             
  [raid6_rmeta_2]  ewi---r- /dev/sdd(0)                                                                             
  [raid6_rmeta_3]  ewi---r- /dev/sde(0)                                                                             
  [raid6_rmeta_4]  ewi---r- /dev/sdf(0)                                                                             

I also tried "vgreduce --removemissing --force", but that just destroys the partial RAID volume.
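One thing I notice in the pvs output above: /dev/sdg is not actually a member of VG test, so perhaps "lvconvert --repair" simply has no free extents to allocate the replacement images from. Would a plain vgextend (not --restoremissing, since this PV was never in the VG) followed by the repair be the right sequence? Something like:

```shell
# Add the new disk to the VG as ordinary free space
vgextend test /dev/sdg

# Retry the repair, which should now find allocatable extents
lvconvert --repair test/raid6

# Afterwards, drop the stale reference to the missing PV
vgreduce --removemissing test
```

I haven't verified this; I'd appreciate confirmation before trying it on anything real.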

-- 
Tomas Vanderka
TEMPEST a.s.
Galvaniho 17/B, 821 04 BRATISLAVA
Phone: +421 905 571 691



_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
