On 02/12/19 10:42, Longpeng (Mike) wrote:
>> cond_resched in vfio_iommu_map.  Perhaps you could add one to
>> vfio_pin_pages_remote and/or use vfio_pgsize_bitmap to cap the
>> number of pages that it returns.
> Um ... There's only one running task (qemu-kvm of the VM1) on that
> CPU, so maybe the cond_resched() is ineffective ?

Note that synchronize_sched() these days is just a synonym of
synchronize_rcu(), so this makes me wonder if you're running on an
older kernel and whether you are missing this commit:

commit 92aa39e9dc77481b90cbef25e547d66cab901496
Author: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx>
Date:   Mon Jul 9 13:47:30 2018 -0700

    rcu: Make need_resched() respond to urgent RCU-QS needs

    The per-CPU rcu_dynticks.rcu_urgent_qs variable communicates an urgent
    need for an RCU quiescent state from the force-quiescent-state
    processing within the grace-period kthread to context switches and to
    cond_resched().  Unfortunately, such urgent needs are not communicated
    to need_resched(), which is sometimes used to decide when to invoke
    cond_resched(), for but one example, within the KVM vcpu_run()
    function.  As of v4.15, this can result in synchronize_sched() being
    delayed by up to ten seconds, which can be problematic, to say nothing
    of annoying.

    This commit therefore checks rcu_dynticks.rcu_urgent_qs from within
    rcu_check_callbacks(), which is invoked from the scheduling-clock
    interrupt handler.  If the current task is not an idle task and is
    not executing in usermode, a context switch is forced, and either way,
    the rcu_dynticks.rcu_urgent_qs variable is set to false.  If the
    current task is an idle task, then RCU's dyntick-idle code will detect
    the quiescent state, so no further action is required.  Similarly, if
    the task is executing in usermode, other code in rcu_check_callbacks()
    and its called functions will report the corresponding quiescent state.

    Reported-by: Marius Hillenbrand <mhillenb@xxxxxxxxx>
    Reported-by: David Woodhouse <dwmw2@xxxxxxxxxxxxx>
    Suggested-by: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
    Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx>

Thanks,

Paolo
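
[For reference, a minimal sketch of the kind of change the quoted
suggestion describes: dropping a cond_resched() into the long-running
pinning loop so that the pinning task periodically yields and RCU can
report a quiescent state.  This is illustrative only; the real loop in
vfio_pin_pages_remote() (drivers/vfio/vfio_iommu_type1.c) is structured
differently, and pin_one_page() here is a hypothetical helper standing
in for the actual per-page pinning work.]

	/*
	 * Illustrative sketch, not the actual vfio_iommu_type1.c code:
	 * pin a range one page at a time and call cond_resched() each
	 * iteration so a long pinning run does not delay RCU grace
	 * periods or starve other work on this CPU.
	 */
	for (i = 0; i < npage; i++) {
		ret = pin_one_page(vaddr + i * PAGE_SIZE, &pfn[i]);
		if (ret)
			break;
		cond_resched();	/* yield; lets need_resched()/RCU make progress */
	}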