On Tue, 14 Mar 2023 16:04:23 +0100 Martin Wilck <mwilck@xxxxxxxx> wrote:
> On Tue, 2023-03-14 at 22:58 +0800, Li Xiao Keng wrote:
> > Hi,
> > Here we have run into a problem. When we add a new disk to a RAID
> > array, the add may return -EBUSY.
> > The main flow of --add (for example md0, sdf) is:
> > 1. dev_open(sdf)
> > 2. add_to_super
> > 3. write_init_super
> > 4. fsync(fd)
> > 5. close(fd)
> > 6. ioctl(ADD_NEW_DISK)
> >
> > However, a udev change event for sdf is generated after step 5.
> > Then "/usr/sbin/mdadm --incremental --export $devnode --offroot
> > $env{DEVLINKS}" is run, and sdf is added to md0. After that, step 6
> > returns -EBUSY.
> > This is a problem for the user: the first attempt to add the disk
> > does not report success, even though the disk is actually added. I
> > have no good idea how to deal with it. Please give some advice.
>
> I haven't looked at the code in detail, but off the top of my head, it
> should help to execute step 5 after step 6. The close() in step 5
> triggers the uevent via inotify; doing it after the ioctl should avoid
> the above problem.

Hi,
That will result in EBUSY every time: mdadm will still hold the
descriptor, so the kernel will refuse to add the drive.

> Another obvious workaround in mdadm would be to check the state of the
> array in the EBUSY case and find out that the disk had already been
> added.
>
> But again, this was just a high-level guess.
>
> Martin

Hmm... I'm not an expert on native metadata, but why can't we write the
metadata after adding the drive to the array? Why can't the kernel
handle that?

Ideally, we should lock the device and block udev. I know there is a
flock()-based API for that, but I'm not sure that flock() won't cause
the same problem.
There is also something like "udev-md-raid-creating.rules". Maybe we
can reuse it?

Thanks,
Mariusz
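
To make Martin's "check the state of the array in the EBUSY case" idea
concrete, here is a rough sketch in C. This is not mdadm code: the helper
name and the caller flow are made up for illustration; the only thing it
relies on is the /sys/block/<md>/md/dev-<disk>/ directory that the md
driver exposes for each array member.

/* Hypothetical sketch (not mdadm code): after ioctl(ADD_NEW_DISK) fails
 * with EBUSY, check whether the disk already shows up as a member of the
 * array in sysfs and treat that case as success.
 */
#include <limits.h>
#include <stdio.h>
#include <sys/stat.h>

/* e.g. disk_already_in_array("md0", "sdf") */
static int disk_already_in_array(const char *mdname, const char *diskname)
{
	char path[PATH_MAX];
	struct stat st;

	/* The md driver exposes /sys/block/<md>/md/dev-<disk>/ per member. */
	snprintf(path, sizeof(path), "/sys/block/%s/md/dev-%s",
		 mdname, diskname);
	return stat(path, &st) == 0;
}

/*
 * Caller side (pseudo-flow only):
 *
 *	if (ioctl(mdfd, ADD_NEW_DISK, &disc) != 0) {
 *		if (errno == EBUSY && disk_already_in_array("md0", "sdf"))
 *			return 0;   -- raced with --incremental, disk is in
 *		return -errno;
 *	}
 */

There is of course still a window where the --incremental add has not
finished yet, so a short retry or wait might be needed; this is only
meant to show the shape of the workaround.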
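
And a minimal sketch of the flock() idea, assuming the systemd/udev
block-device locking convention (as I understand it, udevd takes a BSD
lock on the device node while processing a block-device uevent and backs
off or requeues the event if another process holds LOCK_EX on it). The
function name and lock placement are assumptions; whether this really
keeps the --incremental run away, or trips over the same problem, is
exactly the open question above.

/* Hypothetical sketch (not mdadm code): hold LOCK_EX on the member
 * device across the metadata write, the close() and ADD_NEW_DISK, so
 * that udevd defers processing of the change event generated by the
 * close() until we are done.
 */
#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

static int add_disk_locked(int mdfd, const char *devnode /* e.g. "/dev/sdf" */)
{
	int ret = -1;
	int lockfd = open(devnode, O_RDONLY);	/* separate fd, lock anchor only */

	if (lockfd < 0)
		return -1;

	if (flock(lockfd, LOCK_EX) == 0) {
		/*
		 * Steps 2-5 from the mail would go here: add_to_super(),
		 * write_init_super(), fsync(fd), close(fd).  The close()
		 * still emits the change uevent, but udevd should hold
		 * off while we own LOCK_EX.
		 *
		 * Step 6: ret = ioctl(mdfd, ADD_NEW_DISK, &disc);
		 */
		(void)mdfd;		/* placeholder; real code does the ioctl */
		ret = 0;
		flock(lockfd, LOCK_UN);	/* now udevd may process the event */
	}
	close(lockfd);
	return ret;
}

This only illustrates where the lock would sit; it does not answer
whether flock() runs into the same problem Mariusz mentions, or whether
reusing udev-md-raid-creating.rules is the cleaner way to mute the
incremental rule.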