Re: [QUESTION] How to fix the race of "mdadm --add" and "mdadm --incremental --export"

On Tue, 2023-03-14 at 22:58 +0800, Li Xiao Keng wrote:
> Hi,
>    Here we meet a question. When we add a new disk to a raid, it may
> return
> -EBUSY.
>    The main process of --add(for example md0, sdf):
>        1.dev_open(sdf)
>        2.add_to_super
>        3.write_init_super
>        4.fsync(fd)
>        5.close(fd)
>        6.ioctl(ADD_NEW_DISK).
>    However, there will be some udev(change of sdf) event after step5.
> Then
> "/usr/sbin/mdadm --incremental --export $devnode --offroot
> $env{DEVLINKS}"
> will be run, and the sdf will be added to md0. After that, step6 will
> return
> -EBUSY.
>    It is a problem to user. First time adding disk does not return
> success
> but disk is actually added. And I have no good idea to deal with it.
> Please
> give some great advice.

I haven't looked at the code in detail, but off the top of my head, it
should help to execute step 5 after step 6. The close() in step 5
triggers the uevent via inotify; doing it after the ioctl should avoid
the above problem.

Another obvious workaround in mdadm would be to check the state of the
array in the EBUSY case and find out that the disk had already been
added.

But again, this was just a high-level guess.

Martin




