RE: [PATCH v2] Detail: show correct raid level when the array is inactive

Hello,
one clarification:
The issue can be reproduced: just create a container and observe the system journal.
What I meant is that the segfault cannot be reproduced directly; it happens in the
background, during creation.
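
For example, something along these lines (device names are placeholders;
IMSM_NO_PLATFORM=1 is only needed on machines without IMSM hardware):

# create an IMSM container from two spare disks
IMSM_NO_PLATFORM=1 mdadm --create /dev/md/imsm0 -e imsm -n 2 /dev/sdb /dev/sdc
# then watch the system journal for the segfault report
journalctl -f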

Sorry for being inaccurate.

Mariusz

-----Original Message-----
From: Tkaczyk, Mariusz <mariusz.tkaczyk@xxxxxxxxx> 
Sent: Tuesday, October 20, 2020 11:50 AM
To: Jes Sorensen <jes@xxxxxxxxxxxxxxxxxx>; Lidong Zhong <lidong.zhong@xxxxxxxx>
Cc: linux-raid@xxxxxxxxxxxxxxx
Subject: RE: [PATCH v2] Detail: show correct raid level when the array is inactive

Hello Lidong,
We are observing a segfault during IMSM RAID creation, caused by your patch.

Core was generated by `/sbin/mdadm --detail --export /dev/md127'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x000000000042516e in Detail (dev=0x7ffdbd6d1efc "/dev/md127", c=0x7ffdbd6d0710) at Detail.c:228
228                     str = map_num(pers, info->array.level);

The issue occurs during container or volume creation and cannot be reproduced manually.
In my opinion, udev is racing with the create process. Observed on RHEL 8.2 with upstream mdadm.
Could you take a look?

If you don't have IMSM hardware, please use the IMSM_NO_PLATFORM environment variable.
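
For illustration only, here is a minimal standalone sketch of the kind of guard we have
in mind (the structs and map_num_stub() below are simplified stand-ins for mdadm's types;
whether info can really be NULL at Detail.c:228 in this race is an assumption on our
side, not a confirmed root cause):

#include <stdio.h>
#include <stddef.h>

struct array_info { int level; };
struct mdinfo_stub { struct array_info array; };

/* Stand-in for mdadm's map_num(): returns NULL for levels it does not know. */
static const char *map_num_stub(int level)
{
	return level == 1 ? "raid1" : NULL;
}

static void print_level(const struct mdinfo_stub *info)
{
	const char *str = NULL;

	if (info)                                /* guard the dereference */
		str = map_num_stub(info->array.level);

	printf("     Raid Level : %s\n", str ? str : "unknown");
}

int main(void)
{
	struct mdinfo_stub ok = { .array = { .level = 1 } };

	print_level(&ok);   /* prints "raid1" */
	print_level(NULL);  /* prints "unknown" instead of segfaulting */
	return 0;
}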

Thanks,
Mariusz

-----Original Message-----
From: Jes Sorensen <jes@xxxxxxxxxxxxxxxxxx>
Sent: Wednesday, October 14, 2020 5:20 PM
To: Lidong Zhong <lidong.zhong@xxxxxxxx>
Cc: linux-raid@xxxxxxxxxxxxxxx
Subject: Re: [PATCH v2] Detail: show correct raid level when the array is inactive

On 9/13/20 10:52 PM, Lidong Zhong wrote:
> Sometimes the raid level in the output of `mdadm -D /dev/mdX` is
> misleading when the array is in an inactive state. Here is a testcase
> for illustration.
> 1\ create a raid1 device with two disks. Specify a different
> hostname rather than the real one for later verification.
> 
> node1:~ # mdadm --create /dev/md0 --homehost TESTARRAY -o -l 1 -n 2 /dev/sdb /dev/sdc
> 2\ remove one of the devices and reboot
> 3\ show the detail of the raid1 device
> 
> node1:~ # mdadm -D /dev/md127
> /dev/md127:
>         Version : 1.2
>      Raid Level : raid0
>   Total Devices : 1
>     Persistence : Superblock is persistent
>           State : inactive
> Working Devices : 1
> 
> You can see that the "Raid Level" shown for /dev/md127 is raid0 now.
> After step 2\ is done, the degraded raid1 device is recognized as a
> "foreign" array by 64-md-raid-assembly.rules, and thus the timer to
> activate the raid1 device is not triggered. The array level returned
> from the GET_ARRAY_INFO ioctl is 0, and the string shown for "Raid Level"
> comes from
>     str = map_num(pers, array.level);
> where pers is defined as
>     mapping_t pers[] = { { "linear", LEVEL_LINEAR}, { "raid0", 0}, { "0", 0}, ... };
> So the misleading "raid0" is shown in this testcase.
> 
> Changelog:
> v1: don't show "Raid Level" when array is inactive
> Signed-off-by: Lidong Zhong <lidong.zhong@xxxxxxxx>
>
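
To make the lookup described above concrete, here is a small standalone sketch
(mapping_t and map_num() below are simplified stand-ins for the mdadm originals; only
the pers[] entries are taken from the quoted definition):

#include <stdio.h>

typedef struct mapping {
	char *name;
	int num;
} mapping_t;

#define LEVEL_LINEAR (-1)   /* linear uses a negative level in the md headers */

static mapping_t pers[] = {
	{ "linear", LEVEL_LINEAR },
	{ "raid0", 0 },
	{ "0", 0 },
	{ NULL, 0 }
};

/* Return the first name whose number matches, or NULL if none does. */
static char *map_num(mapping_t *map, int num)
{
	for (; map->name; map++)
		if (map->num == num)
			return map->name;
	return NULL;
}

int main(void)
{
	/* GET_ARRAY_INFO reports level 0 for the inactive raid1 array,
	 * so the first matching entry is "raid0" -- the misleading output. */
	printf("Raid Level : %s\n", map_num(pers, 0));
	return 0;
}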

Applied!

Thanks,
Jes




