On Thu, 2020-10-01 at 16:44 +0200, Frederic Weisbecker wrote:
> External Email
>
> ----------------------------------------------------------------------
> On Wed, Jul 22, 2020 at 02:57:33PM +0000, Alex Belits wrote:
> > From: Yuri Norov <ynorov@xxxxxxxxxxx>
> >
> > For nohz_full CPUs the desirable behavior is to receive interrupts
> > generated by tick_nohz_full_kick_cpu(). But for hard isolation it's
> > obviously not desirable because it breaks isolation.
> >
> > This patch adds check for it.
> >
> > Signed-off-by: Yuri Norov <ynorov@xxxxxxxxxxx>
> > [abelits@xxxxxxxxxxx: updated, only exclude CPUs running isolated tasks]
> > Signed-off-by: Alex Belits <abelits@xxxxxxxxxxx>
> > ---
> >  kernel/time/tick-sched.c | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
> > index 6e4cd8459f05..2f82a6daf8fc 100644
> > --- a/kernel/time/tick-sched.c
> > +++ b/kernel/time/tick-sched.c
> > @@ -20,6 +20,7 @@
> >  #include <linux/sched/clock.h>
> >  #include <linux/sched/stat.h>
> >  #include <linux/sched/nohz.h>
> > +#include <linux/isolation.h>
> >  #include <linux/module.h>
> >  #include <linux/irq_work.h>
> >  #include <linux/posix-timers.h>
> > @@ -268,7 +269,8 @@ static void tick_nohz_full_kick(void)
> >   */
> >  void tick_nohz_full_kick_cpu(int cpu)
> >  {
> > -	if (!tick_nohz_full_cpu(cpu))
> > +	smp_rmb();
>
> What is it ordering?

ll_isol_flags will be read in task_isolation_on_cpu(); that access
should be ordered against the writes in task_isolation_kernel_enter(),
fast_task_isolation_cpu_cleanup() and task_isolation_start().

Since task_isolation_on_cpu() is often called for multiple CPUs in a
sequence, it would be wasteful to include a barrier inside it.

> > +	if (!tick_nohz_full_cpu(cpu) || task_isolation_on_cpu(cpu))
> >  		return;
>
> You can't simply ignore an IPI. There is always a reason for a nohz_full CPU
> to be kicked. Something triggered a tick dependency. It can be posix cpu
> timers for example, or anything.

I realize that this is unusual, however the idea is that while the task
is running in isolated mode in userspace, we assume that, from this
CPU's point of view, whatever is happening in the kernel can wait until
the CPU is back in the kernel, and when it first enters the kernel from
this mode, it should "catch up" with everything that happened in its
absence. task_isolation_kernel_enter() is supposed to do that, so by
the time anything should be done involving the rest of the kernel, the
CPU is back to normal.

It is the application's responsibility to avoid triggering things that
break its isolation, so the application assumes that everything that
involves entering the kernel will be unavailable while it is isolated.
If isolation is broken, or the application requests a return from
isolation, everything goes back to the normal environment with all
functionality available.

> >
> >  	irq_work_queue_on(&per_cpu(nohz_full_kick_work, cpu), cpu);
> > --
> > 2.26.2
> >
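
To make the intended barrier pairing concrete, here is a simplified
sketch rather than the exact code from the series; the per-CPU
ll_isol_flags layout, the bit name and the multi-CPU caller below are
illustrative assumptions, not the real implementation:

#include <linux/percpu.h>
#include <linux/bitops.h>
#include <linux/cpumask.h>
#include <asm/barrier.h>

/* Illustrative bit layout -- the actual series may differ. */
#define FLAG_LL_TASK_ISOLATION	0

/* One word of isolation state per CPU, written only by that CPU. */
static DEFINE_PER_CPU(unsigned long, ll_isol_flags);

/* Reader side: no barrier here, callers order their own reads. */
static bool task_isolation_on_cpu(int cpu)
{
	return test_bit(FLAG_LL_TASK_ISOLATION, &per_cpu(ll_isol_flags, cpu));
}

/* Writer side: runs on every kernel entry from isolated userspace. */
static void task_isolation_kernel_enter(void)
{
	clear_bit(FLAG_LL_TASK_ISOLATION, this_cpu_ptr(&ll_isol_flags));
	/* Publish the flag update before doing any "catch up" work. */
	smp_mb();
	/* ... sync up with whatever this CPU skipped while isolated ... */
}

/*
 * A hypothetical caller that checks many CPUs pays for one smp_rmb()
 * instead of one barrier per task_isolation_on_cpu() call.
 */
static int count_isolated_cpus(void)
{
	int cpu, count = 0;

	smp_rmb();	/* pairs with the smp_mb() on the writer side */
	for_each_online_cpu(cpu)
		if (task_isolation_on_cpu(cpu))
			count++;
	return count;
}

tick_nohz_full_kick_cpu() in the patch above is just one such reader;
keeping the barrier in the callers rather than inside
task_isolation_on_cpu() is what avoids the per-CPU barrier cost when
several CPUs are checked in a row.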