Re: [PATCH] mm: remove unintentional voluntary preemption in get_mmap_lock_carefully

On Sun, Aug 20, 2023 at 12:43:03PM +0200, Mateusz Guzik wrote:
> Found by checking off-CPU time during kernel build (like so:
> "offcputime-bpfcc -Ku"), sample backtrace:
>     finish_task_switch.isra.0
>     __schedule
>     __cond_resched
>     lock_mm_and_find_vma
>     do_user_addr_fault
>     exc_page_fault
>     asm_exc_page_fault
>     -                sh (4502)

Now that I'm awake, this backtrace really surprises me.  Do we not
check need_resched on entry?  It seems terribly unlikely that
need_resched gets set between entry and getting to this point, so I
guess we must not.

I suggest the version of the patch which puts might_sleep() before the
mmap_read_trylock() is the right one to apply.  It's basically what
we've done forever, except that now we'll be rescheduling without the
mmap lock held, which just seems like an overall win.
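To make that concrete, roughly something like the below (untested
sketch only, assuming the current shape of get_mmap_lock_carefully()
in mm/memory.c; the real diff is whatever Mateusz posted):

static inline bool get_mmap_lock_carefully(struct mm_struct *mm,
					   struct pt_regs *regs)
{
	/*
	 * Do the might_sleep() (and any voluntary reschedule it
	 * implies) up front, before we take the mmap lock, instead
	 * of after a successful trylock.
	 */
	might_sleep();

	if (likely(mmap_read_trylock(mm)))
		return true;

	if (regs && !user_mode(regs)) {
		unsigned long ip = instruction_pointer(regs);

		if (!search_exception_tables(ip))
			return false;
	}

	return !mmap_read_lock_killable(mm);
}

That way a pending need_resched gets serviced while we hold nothing,
and the trylock fast path stays as cheap as it is today.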



