On Fri, Dec 13, 2024 at 09:40:49AM -0500, Liam R. Howlett wrote:
> * Christian Brauner <brauner@xxxxxxxxxx> [241209 08:47]:
> > Hey,
> >
> > Ok, I wanted to give this another try as I'd really like to rely on the
> > maple tree supporting ULONG_MAX when BITS_PER_LONG is 64 as it makes
> > things a lot simpler overall.
> >
> > As Willy didn't want additional users relying on an external lock I made
> > it so that we don't have to and can just use the mtree lock.
> >
> > However, I need an irq safe variant which is why I added support for
> > this into the maple tree.
> >
> > This is pullable from
> > https://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs.git work.pidfs.maple_tree
>
> I've been meaning to respond to this thread.
>
> I believe the flag is to tell the internal code what lock to use. If
> you look at mas_nomem(), there is a retry loop that will drop the lock
> to allocate and retry the operation. That function needs to support the
> flag and use the correct lock/unlock.
>
> The mas_lock()/mas_unlock() needs a mas_lock_irq()/mas_unlock_irq()
> variant, which would be used in the correct context.

Yeah, it does. Did you see the patch that is included in the series?
I've replaced the macro with always inline functions that select the
lock based on the flag:

static __always_inline void mtree_lock(struct maple_tree *mt)
{
	if (mt->ma_flags & MT_FLAGS_LOCK_IRQ)
		spin_lock_irq(&mt->ma_lock);
	else
		spin_lock(&mt->ma_lock);
}

static __always_inline void mtree_unlock(struct maple_tree *mt)
{
	if (mt->ma_flags & MT_FLAGS_LOCK_IRQ)
		spin_unlock_irq(&mt->ma_lock);
	else
		spin_unlock(&mt->ma_lock);
}

Does that work for you?
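
For illustration only, here is a minimal sketch of how a caller could opt
into the irq-safe lock with these helpers in place; the tree and function
names (example_tree, example_init) are hypothetical and not taken from the
series:

	#include <linux/maple_tree.h>
	#include <linux/gfp.h>

	/* Hypothetical tree and entry used only for this sketch. */
	static struct maple_tree example_tree;
	static int example_value;

	static int example_init(void)
	{
		/* Request the irq-safe internal lock via the tree flags. */
		mt_init_flags(&example_tree, MT_FLAGS_LOCK_IRQ);

		/*
		 * With the inline helpers above, mtree_insert() takes
		 * spin_lock_irq(&example_tree.ma_lock) internally instead
		 * of a plain spin_lock().
		 */
		return mtree_insert(&example_tree, 1, &example_value,
				    GFP_KERNEL);
	}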