On Sat, 19 Dec 2020, 11:27 Mike Galbraith, <efault@xxxxxx> wrote:
>
> On Sat, 2020-12-19 at 11:20 +0100, Vitaly Wool wrote:
> > Hi Mike,
> >
> > On Sat, Dec 19, 2020 at 11:12 AM Mike Galbraith <efault@xxxxxx> wrote:
> > >
> > > (mailer partially munged formatting? resend)
> > >
> > > mm/zswap: fix zswap_frontswap_load() vs zsmalloc::map/unmap() might_sleep() splat
> > >
> > > zsmalloc map/unmap methods use preemption disabling bit spinlocks. Take the
> > > mutex outside of pool map/unmap methods in zswap_frontswap_load() as is done
> > > in zswap_frontswap_store().
> >
> > oh wait... So is zsmalloc taking a spin lock in its map callback and
> > releasing it only in unmap? In this case, I would rather keep zswap as
> > is, mark zsmalloc as RT unsafe and have zsmalloc maintainer fix it.
>
> The kernel that generated that splat was NOT an RT kernel, it was plain
> master.today with a PREEMPT config.

I see, thanks. I don't think that makes things any better for zsmalloc
though. From what I can see, the offending code is this:

> /* From now on, migration cannot move the object */
> pin_tag(handle);

A bit spinlock is taken in pin_tag(). I find the comment above somewhat
misleading: why is it necessary to take a spinlock to prevent migration?
I would guess an atomic flag should normally be enough.

zswap is not broken here; it is zsmalloc that needs to be fixed.

Best regards,
   Vitaly
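P.S. To make the "atomic flag" suggestion concrete, here is a rough
userspace sketch (illustrative only, not actual zsmalloc code; the
obj_tag type and function names are made up). The point is that a pin
expressed as an atomic flag does not disable preemption, and migration
can simply skip a pinned object rather than spin-wait on it:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical per-object tag: "pinned" means migration must not
 * move the object. No lock is held across the pin, so the pinning
 * task remains preemptible (and may even sleep while pinned). */
typedef struct {
    atomic_bool pinned;
} obj_tag;

/* Pin the object; returns false if it was already pinned. Acquire
 * ordering so accesses to the object cannot be reordered before
 * the successful pin. */
static bool obj_pin(obj_tag *t)
{
    bool expected = false;
    return atomic_compare_exchange_strong_explicit(
        &t->pinned, &expected, true,
        memory_order_acquire, memory_order_relaxed);
}

/* Unpin; release ordering publishes all accesses made while pinned. */
static void obj_unpin(obj_tag *t)
{
    atomic_store_explicit(&t->pinned, false, memory_order_release);
}

/* Migration side: back off from pinned objects instead of spinning
 * with preemption disabled, as a bit spinlock would. */
static bool migration_may_move(obj_tag *t)
{
    return !atomic_load_explicit(&t->pinned, memory_order_acquire);
}
```

Of course the real code also has to handle the ordering between pinning
and an in-flight migration, so this is only a sketch of the idea.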