Hi Richard,

thank you for the review, and sorry for my late reply to it.

On Thu, Oct 05, 2023 at 10:46:41PM +0200, Richard Weinberger wrote:
> ----- Original Message -----
> > Von: "richard" <richard@xxxxxx>
> > ----- Original Message -----
> >> diff --git a/drivers/mtd/ubi/block.c b/drivers/mtd/ubi/block.c
> >> index e0618bbde3613..99b5f502c9dbc 100644
> >> --- a/drivers/mtd/ubi/block.c
> >> +++ b/drivers/mtd/ubi/block.c
> >> @@ -470,7 +470,7 @@ int ubiblock_remove(struct ubi_volume_info *vi, bool force)
> >>  	}
> >>
> >>  	/* Found a device, let's lock it so we can check if it's busy */
> >> -	mutex_lock(&dev->dev_mutex);
> >> +	mutex_lock_nested(&dev->dev_mutex, SINGLE_DEPTH_NESTING);
> >
> > The usage of mutex_lock_nested() in this patch looks fishy.
> > Can you please elaborate a bit more why all these mutexes can be taken twice?
> > (And why not more often.)
>
> I think I figured it out myself.
> ubiblock_ops->open() and ->release() are both called with disk->open_mutex held.
> ubiblock_open() and ubiblock_release() take dev->dev_mutex.
> So, the locking order is open_mutex, followed by dev_mutex.
>
> On the other hand, ubiblock_remove() is called via UBI notify.
> It takes dev_mutex first and then calls del_gendisk(), which will trigger
> ubiblock_ops->release() under disk->open_mutex, which takes dev_mutex again.
> So, this not only takes a lock twice but also in reverse order.
> mutex_lock_nested() might silence lockdep, but I'm not sure whether this is safe at all.

I agree that the locking here is not trivial. Taking the same lock twice is what SINGLE_DEPTH_NESTING is intended for, and it resolves what would otherwise be a deadlock. I'm not sure whether the changed order of acquiring dev_mutex and open_mutex presents a problem. The detach-by-notification case is quite exotic anyway, and I only see rather synthetic paths that actually reach it (such as hotplug removal of the SPI controller providing access to SPI-NAND flash, which is how I was testing this...).
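To summarize my understanding of the two paths and their lock order (function names from drivers/mtd/ubi/block.c; the call chain through del_gendisk() is simplified and only a sketch, not the full call graph):

```c
/*
 * Normal open/release path (lock order: open_mutex -> dev_mutex):
 *
 *   block layer
 *     disk->open_mutex              held by the block layer
 *       ubiblock_open()/ubiblock_release()
 *         dev->dev_mutex
 *
 * Detach-by-notification path (lock order reversed, dev_mutex taken twice):
 *
 *   UBI notify -> ubiblock_remove()
 *     dev->dev_mutex                first acquisition
 *       del_gendisk()
 *         disk->open_mutex          taken by the block layer
 *           ubiblock_release()
 *             dev->dev_mutex        second acquisition by the same task,
 *                                   hence mutex_lock_nested(&dev->dev_mutex,
 *                                                           SINGLE_DEPTH_NESTING)
 */
```

So SINGLE_DEPTH_NESTING only addresses the self-recursion on dev_mutex; it does not change the fact that the two paths acquire open_mutex and dev_mutex in opposite orders.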
To make things easier and more aligned with the original goal (attaching a UBI device from DT early enough to allow its use as an NVMEM provider by other drivers), I suggest for now only handling attachment by MTD notification (if the linux,ubi compatible is present in DT) and keeping detachment the way it is now.

This will (just as it does already) result in "zombie" UBI devices remaining present after their MTD device has been removed. Accessing them produces loud kernel warnings at best.

Let me know if keeping detachment the way it is and only implementing attachment from MTD notification and the device tree compatible would be ok with you.

Thank you!

Best regards
Daniel