On Thu, 9 May 2024 21:09:12 +0800 Yu Kuai <yukuai1@xxxxxxxxxxxxxxx> wrote:

> >  static DECLARE_WAIT_QUEUE_HEAD(md_event_waiters);
> >  static atomic_t md_event_count;
> > -void md_new_event(void)
> > +void md_new_event(struct mddev *mddev)
> >  {
> >  	atomic_inc(&md_event_count);
> >  	wake_up(&md_event_waiters);
> > +	schedule_work(&mddev->uevent_work);
>
> This doesn't look correct: if uevent_work is already queued and not
> yet running, this will do nothing, which means a uevent will be
> missed. It looks like you want to generate as many uevents as
> md_new_event() is called, or maybe you don't care about this?

I think we don't need to care; I meant that we don't need a perfect
mechanism. If two events are queued back to back, userspace will
probably still be busy handling the first one, so the second would go
unnoticed anyway. Perhaps we should leave a comment about that?

> By the way, since we're here, would it be better to add some detailed
> uevent description as well? For example:
>
> After adding a new disk via mdadm, use kobject_uevent_env() and pass
> in an additional string like "add <disk> to array/conf".
>
> And many other events.

Could you please elaborate? I don't understand why this comment is tied
to adding a new disk; the CHANGE uevent can potentially be generated in
more scenarios, such as rebuild/resync/reshape completion, a level
change, or starting and stopping a RAID array (I just briefly looked at
where md_new_event() is called). I think that is correct, because those
changes are real CHANGEs of the mddev's state.

Also, there is a comment below md_new_event():

 * Events are:
 *  start array, stop array, error, add device, remove device,
 *  start build, activate spare

Thanks,
Mariusz