Re: [PATCH 4/5] multipathd: disable queueing for recreated map in uev_remove_map

Hi Martin,
    The deadlock has actually happened in the past. While reviewing patches made
by Huawei employees before me, I concluded this one should be sent upstream.
It was written in May 2018, so I don't know more details. I'm sorry for that.

On 2020/8/19 3:23, Martin Wilck wrote:
> Hi Lixiaokeng,
> 
> 
> A map which is removed and not yet re-added again (as far as udev is
> concerned) doesn't need to queue because it can't possibly be in use.
> So I think the patch can't hurt in other scenarios, and it makes sense
> in the situation you describe. However, I have a few questions.
> 
> Have you observed this, or is it theory? I'm wondering: After 2) there
> should be some paths again, so why would the udev workers hang? 
> I guess this could happen if the regenerated paths are all in failed /
> standby state; is that what you mean?
> 
> Note also that we set DM_NOSCAN in the udev rules when there are no
> usable paths, so udev workers would only hang if the last path fails /
> is removed after the "multipath -U" check.
> 
> You've certainly hit a weak spot here, and you've nicely described a
> potential problem scenario. The delayed processing of uevents that
> multipathd triggered itself is a recurring cause of headache.
> 
> Regards,
> Martin
> 

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel