On Wed, Nov 18, 2020 at 08:48:43PM +0100, Thomas Gleixner wrote:
> @@ -4073,6 +4089,7 @@ prepare_task_switch(struct rq *rq, struc
>  	perf_event_task_sched_out(prev, next);
>  	rseq_preempt(prev);
>  	fire_sched_out_preempt_notifiers(prev, next);
> +	kmap_local_sched_out();
>  	prepare_task(next);
>  	prepare_arch_switch(next);
>  }

> @@ -4139,6 +4156,7 @@ static struct rq *finish_task_switch(str
>  	finish_lock_switch(rq);
>  	finish_arch_post_lock_switch();
>  	kcov_finish_switch(current);
> +	kmap_local_sched_in();

This is asymmetric and deserves a comment. You do the sched_out with IRQs
disabled and rq->lock held, but do the sched_in with IRQs enabled and
rq->lock released.

I suppose doing it here reduces IRQ latency by however long it takes to
update and invalidate that handful of pages; is that worth the asymmetry?

It mirrors preempt_notifiers, I suppose, and they actually rely on this
asymmetry for something, IIRC.

>  	fire_sched_in_preempt_notifiers(current);
>
>  	/*
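
To make the context difference concrete, here is a rough sketch of the kind
of work each hook does and the state it runs under. The bodies and the helper
and field names (unmap_kmap_slot, remap_kmap_slot, kmaps.nr, kmaps.page) are
hypothetical, purely for illustration; this is not the actual kmap_local
implementation.

/*
 * Rough sketch only -- the helpers and fields below are made up to
 * illustrate the call-site contexts being discussed, not the real
 * mm/highmem.c code.
 */
static void kmap_local_sched_out(void)
{
	int i;

	/* Called from prepare_task_switch(): IRQs off, rq->lock held. */
	for (i = 0; i < current->kmaps.nr; i++) {
		/*
		 * Tear down the outgoing task's temporary mapping and
		 * invalidate the corresponding TLB entry -- this is the
		 * "update and invalidate that handful of pages" cost.
		 */
		unmap_kmap_slot(i);
	}
}

static void kmap_local_sched_in(void)
{
	int i;

	/* Called from finish_task_switch(): rq->lock dropped, IRQs on. */
	for (i = 0; i < current->kmaps.nr; i++) {
		/* Re-establish the incoming task's temporary mappings. */
		remap_kmap_slot(i, current->kmaps.page[i]);
	}
}

Doing the sched_in side after the lock is dropped keeps that loop out of the
IRQs-disabled window, which is the latency argument for the asymmetry.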