Re: Replace Drive in RAID6

Erm, s/lvchange --repair/lvconvert --repair/

On Tue, Nov 16, 2021 at 6:41 PM Heinz Mauelshagen <heinzm@xxxxxxxxxx> wrote:
On Mon, Nov 8, 2021 at 8:08 AM Adam Puleo <adam.puleo@xxxxxxxxxx> wrote:
Hello Everyone,

Hi,
for starters, which kernel/distro is this?

Also, all layout changes on RaidLVs require them to be activated.

 

Since the sub-LV #0 has errored, LVM will not let me activate the logical volume.

Is there a way to remap the #0 sub-LV to the replaced disk, or to resize the RAID6 down to one less disk?

Each of the RAID6 SubLV pairs has an internal id, and all data and parity (P+Q syndromes) are stored in a rotating pattern, so no to the remapping part.

Also no to the resize, as it would need a fully operational raid6; hence repairing the RaidLV is needed.

As mentioned, "lvchange --rebuild ..." is inadequate to repair RaidLVs with broken/lost PVs; "lvchange --repair $RaidLV" is.
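(A minimal sketch of that repair path, assuming the replacement PV has already been added to vg_data with pvcreate/vgextend, and using the lvconvert spelling from the correction above:)

# lvchange -ay --activationmode degraded vg_data/lv_data
# lvconvert --repair vg_data/lv_data
# lvs -a -o name,segtype,copy_percent,devices vg_data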

In order to diagnose why your raid6 LV now fails to activate via "lvchange -ay --activationmode degraded $RaidLV" (which is the proper way to go about it), can you please describe any and all steps you took after the drive failure left your raid6 LV degraded? Please don't change anything until you've made that transparent, so that we keep our chances of fixing this...
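(A sketch of read-only commands that capture the current state without modifying anything; the VG/LV names are taken from the listings quoted below:)

# pvs && vgs vg_data
# lvs -a -o name,attr,segtype,devices vg_data
# dmesg | grep -i 'device-mapper: raid'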

FYI:
"lvconvert --(repair|replace) ..." difference is the former repairing RaidLVs with failed PVs by allocating space on different, accessible PVs hence causing the RaidLV to become fully operational after rebuilding all missing block content by using parity stored on the remaining rimage SubLVs vs. the latter allowing to replace mappings to intact PVs by remapping the RAID SubLV pair to different ones (e.g. faster or less contended PVs).

Thanks,
Heinz

 

Thank you,
-Adam


On Nov 4, 2021, at 7:21 PM, Adam Puleo <adam.puleo@xxxxxxxxxx> wrote:

Hello Andreas,

After deactivating each of the individual rimage and rmeta volumes I receive this error:
# lvchange -a y --activationmode degraded vg_data/lv_data
 device-mapper: reload ioctl on  (253:12) failed: Invalid argument

In messages I see the following errors:
Nov  4 19:19:43 nas kernel: device-mapper: raid: Failed to read superblock of device at position 0
Nov  4 19:19:43 nas kernel: device-mapper: raid: New device injected into existing raid set without 'delta_disks' or 'rebuild' parameter specified
Nov  4 19:19:43 nas kernel: device-mapper: table: 253:12: raid: Unable to assemble array: Invalid superblocks
Nov  4 19:19:43 nas kernel: device-mapper: ioctl: error adding target to table

Am I not adding the new drive to the RAID correctly? I first did a pvcreate and then a vgextend.

I was using the --rebuild option because I know which physical drive is bad. The lvmraid man page says --repair might not know which block is the correct one to use, so in that case --rebuild should be used.
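(For reference, a sketch of that sequence with the repair step suggested elsewhere in the thread in place of --rebuild; /dev/sdg1 stands in for the new partition:)

# pvcreate /dev/sdg1
# vgextend vg_data /dev/sdg1
# lvchange -a y --activationmode degraded vg_data/lv_data
# lvconvert --repair vg_data/lv_data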

Thank you,
-Adam



On Nov 3, 2021, at 7:25 AM, Andreas Schrägle <linux-lvm@xxxxxxxxx> wrote:

On Tue, 2 Nov 2021 22:56:18 -0700
Adam Puleo <adam.puleo@xxxxxxxxxx> wrote:

> Hello,
>
> One of my drives failed in my RAID6 and I’m trying to replace it without success.
>
> I’m trying to rebuild the failed drive (/dev/sda): lvchange --rebuild /dev/sda vg_data
>
> But I’m receiving the error: vg_data/lv_data must be active to perform this operation.
>
> I have tried to activate the logical volume without success.
>
> How do I go about activating the volume so that I can rebuild the failed drive?
>
> Thanks,
> -Adam
>
> # lvs -a -o name,segtype,devices
> LV                       Type   Devices                                                                                           
> lv_data                  raid6  lv_data_rimage_0(0),lv_data_rimage_1(0),lv_data_rimage_2(0),lv_data_rimage_3(0),lv_data_rimage_4(0)
> [lv_data_rimage_0]       error                                                                                                     
> [lv_data_rimage_1]       linear /dev/sdc1(1)                                                                                       
> [lv_data_rimage_2]       linear /dev/sdb1(1)                                                                                       
> [lv_data_rimage_3]       linear /dev/sdf1(1)                                                                                       
> [lv_data_rimage_4]       linear /dev/sde1(2)                                                                                       
> [lv_data_rmeta_0]        error                                                                                                     
> [lv_data_rmeta_1]        linear /dev/sdc1(0)                                                                                       
> [lv_data_rmeta_2]        linear /dev/sdb1(0)                                                                                       
> [lv_data_rmeta_3]        linear /dev/sdf1(0)                                                                                       
> [lv_data_rmeta_4]        linear /dev/sde1(0)                                                                                       
>
> # lvs -a
> LV                       VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
> lv_data                  vg_data       rwi---r--- 990.00g                                                   
> [lv_data_rimage_0]       vg_data       vwi-a-r-r- 330.00g                                                   
> [lv_data_rimage_1]       vg_data       Iwi-a-r-r- 330.00g                                                   
> [lv_data_rimage_2]       vg_data       Iwi-a-r-r- 330.00g                                                   
> [lv_data_rimage_3]       vg_data       Iwi-a-r-r- 330.00g                                                   
> [lv_data_rimage_4]       vg_data       Iwi-a-r-r- 330.00g                                                   
> [lv_data_rmeta_0]        vg_data       ewi-a-r-r-   4.00m                                                   
> [lv_data_rmeta_1]        vg_data       ewi-a-r-r-   4.00m                                                   
> [lv_data_rmeta_2]        vg_data       ewi-a-r-r-   4.00m                                                   
> [lv_data_rmeta_3]        vg_data       ewi-a-r-r-   4.00m                                                   
> [lv_data_rmeta_4]        vg_data       ewi-a-r-r-   4.00m                                                   
>
>
>

Hello Adam,

how exactly have you tried to activate the LV so far?

lvchange with --activationmode degraded should work, no?

Also, are you sure that --rebuild is the correct operation?

man 7 lvmraid suggests you might want --repair or --replace instead.
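(Roughly along these lines, using the names from your listing below:)

# lvchange -a y --activationmode degraded vg_data/lv_data
# lvconvert --repair vg_data/lv_data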

Best Regards






_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

