On Mon, Feb 06, 2023 at 02:09:02PM +0100, Thomas Gleixner wrote:
> On Wed, Jan 18 2023 at 14:10, Johan Hovold wrote:
> > On Wed, Jan 18, 2023 at 02:05:29PM +0100, Thomas Gleixner wrote:
> >> You can do this in a two step approach.
> >>
> >> 1) Add the new locking and ensure that the lock is held when
> >>    the functions are called
> >
> > But the new locking has nothing to do with these functions. They are
> > added because they fix various races elsewhere. Adding lockdep
> > assertions in unrelated function as part of those fixes doesn't really
> > make much sense.
>
> Seriously? The point is that revmap_mutex is not protecting against any
> of the races which you are trying to protect against. Ergo, any callsite
> which does not hold the irqdomain mutex is part of the problem you are
> trying to solve, no?

The current locking using the revmap_mutex is indeed incomplete: it only
serialises updates of the revmap structures themselves and specifically
does not prevent two mappings from being created for the same interrupt.
It basically just protects the integrity of the revmap tree.

The association and disassociation fixes are not sufficient to address
the mapping race, but those two changes do guarantee that all current
paths that access the revmap structures hold the irq_domain_mutex.

Once that has been established, there is no point in keeping the
revmap_mutex around for the update path (lookups are still protected by
RCU).

But this change is separate from the race that the disassociation fix
addressed (e.g. the racy updates of the domain mapcount) and it is also
independent of the fix for the mapping race (which still exists after
removing the revmap_mutex with the current order of the patches).

Therefore dropping the revmap_mutex belongs in its own patch, and I
placed it here, after the disassociation fix, to highlight that it is
indeed independent of the later fixes for the mapping race. It can be
moved later in the series, but I still think the lockdep asserts belong
in the patch that removes the revmap tree lock.

Johan
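
For illustration only, a simplified sketch (not the in-tree irqdomain
code; the helper name and the shown fields are stand-ins) of what the
update side could look like once the revmap_mutex is dropped and the
locking requirement is documented with a lockdep assertion instead:

	/*
	 * Simplified sketch, not the actual implementation. The radix
	 * tree fallback for hwirq >= revmap_size is omitted here.
	 */
	static void domain_set_mapping(struct irq_domain *domain,
				       irq_hw_number_t hwirq,
				       struct irq_data *irq_data)
	{
		/*
		 * All paths updating the reverse mapping hold the
		 * irq_domain_mutex, so no dedicated revmap lock is
		 * needed on the update side; lookups stay RCU protected.
		 */
		lockdep_assert_held(&irq_domain_mutex);

		if (hwirq < domain->revmap_size)
			rcu_assign_pointer(domain->revmap[hwirq], irq_data);
	}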