Re: [EXT] Re: [PATCH v5 9/9] task_isolation: kick_all_cpus_sync: don't kick isolated cpus

On Tue, Nov 24, 2020 at 12:21:06AM +0100, Frederic Weisbecker wrote:
> On Mon, Nov 23, 2020 at 10:39:34PM +0000, Alex Belits wrote:
> > 
> > On Mon, 2020-11-23 at 23:29 +0100, Frederic Weisbecker wrote:
> > > 
> > > On Mon, Nov 23, 2020 at 05:58:42PM +0000, Alex Belits wrote:
> > > > From: Yuri Norov <ynorov@xxxxxxxxxxx>
> > > > 
> > > > Make sure that kick_all_cpus_sync() does not call CPUs that are
> > > > running
> > > > isolated tasks.
> > > > 
> > > > Signed-off-by: Yuri Norov <ynorov@xxxxxxxxxxx>
> > > > [abelits@xxxxxxxxxxx: use safe task_isolation_cpumask()
> > > > implementation]
> > > > Signed-off-by: Alex Belits <abelits@xxxxxxxxxxx>
> > > > ---
> > > >  kernel/smp.c | 14 +++++++++++++-
> > > >  1 file changed, 13 insertions(+), 1 deletion(-)
> > > > 
> > > > diff --git a/kernel/smp.c b/kernel/smp.c
> > > > index 4d17501433be..b2faecf58ed0 100644
> > > > --- a/kernel/smp.c
> > > > +++ b/kernel/smp.c
> > > > @@ -932,9 +932,21 @@ static void do_nothing(void *unused)
> > > >   */
> > > >  void kick_all_cpus_sync(void)
> > > >  {
> > > > +	struct cpumask mask;
> > > > +
> > > >  	/* Make sure the change is visible before we kick the cpus */
> > > >  	smp_mb();
> > > > -	smp_call_function(do_nothing, NULL, 1);
> > > > +
> > > > +	preempt_disable();
> > > > +#ifdef CONFIG_TASK_ISOLATION
> > > > +	cpumask_clear(&mask);
> > > > +	task_isolation_cpumask(&mask);
> > > > +	cpumask_complement(&mask, &mask);
> > > > +#else
> > > > +	cpumask_setall(&mask);
> > > > +#endif
> > > > +	smp_call_function_many(&mask, do_nothing, NULL, 1);
> > > > +	preempt_enable();
> > > 
> > > Same comment about IPIs here.
> > 
> > This is different from timers. The original design was based on the
> > idea that every CPU should be able to enter kernel at any time and run
> > kernel code with no additional preparation. Then the only solution is
> > to always do full broadcast and require all CPUs to process it.
> > 
> > What I am trying to introduce is the idea of CPU that is not likely to
> > run kernel code any soon, and can afford to go through an additional
> > synchronization procedure on the next entry into kernel. The
> > synchronization is not skipped, it simply happens later, early in
> > kernel entry code.

Perhaps a bitmask of pending flushes makes more sense? The
static_key_enable() IPI is one of the users, but for its case it would
be necessary to distinguish atomically between in-kernel mode and
out-of-kernel mode (since the i-cache flush must be performed if the
isolated CPU is in kernel mode).
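
Something along these lines, roughly (ll_pending_work,
LL_PENDING_ICACHE_FLUSH etc. are made-up names, just to illustrate
the idea):

	/*
	 * Per-CPU bitmask of work deferred until the isolated CPU
	 * re-enters the kernel.
	 */
	#define LL_PENDING_KICK_SYNC	BIT(0)
	#define LL_PENDING_ICACHE_FLUSH	BIT(1)

	static DEFINE_PER_CPU(atomic_t, ll_pending_work);

	/* Sender side: record the work instead of IPI'ing the CPU. */
	static void defer_work_to_isolated_cpu(int cpu, int work)
	{
		atomic_or(work, per_cpu_ptr(&ll_pending_work, cpu));
		/* Pairs with smp_mb() in the kernel entry path. */
		smp_mb__after_atomic();
	}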

> Ah I see, this is ordered that way:
> 
> ll_isol_flags = ISOLATED
> 
>          CPU 0                                CPU 1
>     ------------------                       -----------------
>                                             // kernel entry
>     data_to_sync = 1                        ll_isol_flags = ISOLATED_BROKEN
>     smp_mb()                                smp_mb()
>     if ll_isol_flags(CPU 1) == ISOLATED     READ data_to_sync
>          smp_call(CPU 1)

Since isolated mode with syscalls is a desired feature, a separate
atomic with in_kernel_mode = 0/1 (set on kernel entry and cleared on
kernel exit, while TIF_TASK_ISOLATION is set) would be necessary,
along with race-free logic similar to the above.
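
So the sender side would look roughly like this (in_kernel_mode being
that hypothetical per-CPU atomic):

	data_to_sync = 1;
	smp_mb();	/* pairs with smp_mb() in the kernel entry path */
	if (atomic_read(per_cpu_ptr(&in_kernel_mode, cpu)))
		smp_call_function_single(cpu, do_nothing, NULL, 1);
	/* else: CPU observes data_to_sync on its next kernel entry */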

> You should document that, ie: explain why what you're doing is safe.
> 
> Also Beware though that the data to sync in question doesn't need to be visible
> in the entry code before task_isolation_kernel_enter(). You need to audit all
> the callers of kick_all_cpus_sync().

Cscope tag: flush_icache_range
   #   line  filename / context / line
   1     96  arch/arc/kernel/jump_label.c <<arch_jump_label_transform>>
             flush_icache_range(entry->code, entry->code + JUMP_LABEL_NOP_SIZE);

This case would be OK for delayed processing before kernel entry, as
long as no code executed before task_isolation_kernel_enter() can be
patched (which I am not sure about).
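
That is, the entry side would have to look something like this (again
just a sketch, reusing the made-up names from above; flush_icache_all()
is arch-specific and not available everywhere):

	/* Must itself live in text that deferred patching never touches. */
	void task_isolation_kernel_enter(void)
	{
		int work;

		this_cpu_write(ll_isol_flags, ISOLATED_BROKEN);
		smp_mb();	/* pairs with smp_mb() on the sender side */

		work = atomic_xchg(this_cpu_ptr(&ll_pending_work), 0);
		if (work & LL_PENDING_ICACHE_FLUSH)
			flush_icache_all();
	}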

But:

  36     28  arch/ia64/include/asm/cacheflush.h <<flush_icache_user_page>>
             flush_icache_range(_addr, _addr + (len)); \

is less certain.

Alex, do you recall whether arch_jump_label_transform() was the only
offender, or were there others as well? (I suppose handling only the
ones that matter in production at the moment, and fixing the individual
ones later, makes the most sense.)
