Re: [RFC] tentative prctl task isolation interface

On Mon, Feb 01, 2021 at 10:48:18AM +0000, Christoph Lameter wrote:
> On Thu, 21 Jan 2021, Marcelo Tosatti wrote:
> 
> > Anyway, trying to improve Christoph's definition:
> >
> > F_ISOL_QUIESCE                -> flush any pending operations that might
> >                                  later interrupt the CPU (ex: drain
> >                                  per-CPU free queues, sync MM statistics
> >                                  counters, etc).
> >
> > F_ISOL_ISOLATE		      -> inform the kernel that userspace is
> > 				 entering isolated mode (see description
> > 				 below on "ISOLATION MODES").
> >
> > F_ISOL_UNISOLATE              -> inform the kernel that userspace is
> > 				 leaving isolated mode.
> >
> > F_ISOL_NOTIFY                 -> configure the notification mode for
> >                                  isolation breakage (see "Notifications"
> >                                  below).
> 
> Looks good to me.
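
For illustration only, a rough sketch of how an application might use such
an interface. The PR_TASK_ISOLATION command number and the flag values
below are made up for the example; the real interface is still under
discussion:

#include <stdio.h>
#include <sys/prctl.h>

/* Made-up values for the example -- not part of any released kernel ABI. */
#define PR_TASK_ISOLATION	1000
#define F_ISOL_QUIESCE		(1 << 0)	/* flush pending deferred work */
#define F_ISOL_ISOLATE		(1 << 1)	/* enter isolated mode         */
#define F_ISOL_UNISOLATE	(1 << 2)	/* leave isolated mode         */

int main(void)
{
	/* Drain deferred per-CPU work first, then declare the task isolated. */
	if (prctl(PR_TASK_ISOLATION, F_ISOL_QUIESCE, 0, 0, 0) ||
	    prctl(PR_TASK_ISOLATION, F_ISOL_ISOLATE, 0, 0, 0)) {
		perror("prctl(PR_TASK_ISOLATION)");
		return 1;
	}

	/* ... latency-critical polling loop, no syscalls expected here ... */

	prctl(PR_TASK_ISOLATION, F_ISOL_UNISOLATE, 0, 0, 0);
	return 0;
}
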
> 
> 
> > Isolation modes:
> > ---------------
> >
> > There are two main types of isolation modes:
> >
> > - SOFT mode: does not prevent activities which might generate interruptions
> > (such as CPU hotplug).
> >
> > - HARD mode: prevents all blockable activities that might generate interruptions.
> > Administrators can override this via /sys.
> 
> 
> Yup.
> 
> >
> > Notifications:
> > -------------
> >
> > Notification mode of isolation breakage can be configured as follows:
> >
> > - None (default): No notification is performed by the kernel on isolation
> >   breakage.
> >
> > - Syslog: Isolation breakage is reported to syslog.
> 
> 
> - Abort with core dump
> 
> This is useful for debugging and for hard core bare metalers that never
> want any interrupts.
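
If a signal ends up being one of the notification mechanisms (as in the
current implementation mentioned further down), the userspace side could be
as simple as the sketch below. Which signal the kernel would use is an
assumption here:

#include <signal.h>
#include <string.h>
#include <unistd.h>

/* Handler for the (assumed) isolation-breakage signal; only
 * async-signal-safe calls are used.                                  */
static void isol_broken(int sig, siginfo_t *si, void *uc)
{
	static const char msg[] = "task isolation broken\n";
	ssize_t n;

	(void)sig; (void)si; (void)uc;
	n = write(STDERR_FILENO, msg, sizeof(msg) - 1);
	(void)n;
	/* A "hard" policy could call abort() here to get a core dump. */
}

void install_isol_handler(void)
{
	struct sigaction sa;

	memset(&sa, 0, sizeof(sa));
	sa.sa_sigaction = isol_broken;
	sa.sa_flags = SA_SIGINFO;
	sigaction(SIGUSR1, &sa, NULL);	/* SIGUSR1 is just a placeholder */
}
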
> 
> One particular issue is page faults.  One would have to prefault the
> binary's executable functions in order to avoid "interruptions" through page
> faults. Are these proper interruptions of the code? Certainly major faults
> are, but minor faults may be ok? Dunno.

mlockall man page:

Real-time processes that are using mlockall() to prevent delays on
page faults should reserve enough locked stack pages before entering
the time-critical section, so that no page fault can be caused
by function calls. This can be achieved by calling a function that
allocates a sufficiently large automatic variable (an array) and writes
to the memory occupied by this array in order to touch these stack
pages. This way, enough pages will be mapped for the stack and can be
locked into RAM. The dummy writes ensure that not even copy-on-write
page faults can occur in the critical section.
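
In code this amounts to roughly the following (the reserve size is an
arbitrary example):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define STACK_RESERVE	(512 * 1024)	/* arbitrary example size */

/* Touch a large automatic array so the corresponding stack pages are
 * faulted in now; with mlockall(MCL_CURRENT | MCL_FUTURE) they then
 * stay resident, and the dummy writes avoid later COW faults.         */
static void prefault_stack(void)
{
	volatile unsigned char dummy[STACK_RESERVE];

	memset((void *)dummy, 0, sizeof(dummy));
}

int main(void)
{
	if (mlockall(MCL_CURRENT | MCL_FUTURE)) {
		perror("mlockall");
		return 1;
	}
	prefault_stack();

	/* ... enter the time-critical / isolated section here ... */
	return 0;
}
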

> In practice what I have often seen in such apps is that there is a
> "warm-up" mode where all critical functions are executed, all important
> variables are touched and dummy I/Os are performed in order to populate
> the caches and prefault all the data. I guess one would run these without
> isolation first and then switch on some sort of isolation mode after warm
> up. So far I think most people relied on the timer interrupt etc. to be
> turned off after a few secs of just running through a polling loop without
> any OS activities.

Yep.
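
Something along these lines, with the hot path standing in for whatever the
application's critical functions happen to be (the names are invented):

/* During warm-up the real critical function is run with representative
 * data so that its text, data and any device queues it touches are
 * faulted in and the caches are populated.                             */
typedef void (*hot_path_fn)(void *ctx);

void warm_up_then_isolate(hot_path_fn hot_path, void *ctx, int iterations)
{
	for (int i = 0; i < iterations; i++)
		hot_path(ctx);

	/* Only now switch isolation on, e.g. via the prctl sketch above. */
}
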

> > > I ended up implementing a manager/helper task that talks to tasks over a
> > > socket (when they are not isolated) and over ring buffers in shared memory
> > > (when they are isolated). While the current implementation is rather
> > > limited, the intention is to delegate to it everything that isolated task
> > > either can't do at all (like, writing logs) or that it would be cumbersome
> > > to implement (like monitoring the state of task, determining presence of
> > > deferred work after the task returned to userspace), etc.
> >
> > Interesting. Are you considering open-sourcing such a library? Seems like
> > a generic problem.
> 
> Well, everyone swears they have the right implementation. The people I know
> would not do anything with a socket in such situations. They would only
> use shared memory and direct access to I/O devices via SPDK and DPDK or
> the RDMA subsystem.
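
For reference, the shared-memory channel between an isolated task and its
helper can be as small as a single-producer/single-consumer ring like the
sketch below (C11 atomics; the names and sizes are made up, and this is not
the implementation described above):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define RING_SLOTS	256	/* power of two, arbitrary */
#define MSG_SIZE	64

/* Single-producer/single-consumer ring, meant to live in a shared
 * memory mapping; the isolated side only reads/writes this memory,
 * no syscalls.                                                       */
struct isol_ring {
	_Atomic uint32_t head;		/* written by producer */
	_Atomic uint32_t tail;		/* written by consumer */
	char slots[RING_SLOTS][MSG_SIZE];
};

bool ring_push(struct isol_ring *r, const char *msg)
{
	uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
	uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);

	if (head - tail == RING_SLOTS)
		return false;		/* full: caller decides how to cope */

	strncpy(r->slots[head % RING_SLOTS], msg, MSG_SIZE - 1);
	r->slots[head % RING_SLOTS][MSG_SIZE - 1] = '\0';
	atomic_store_explicit(&r->head, head + 1, memory_order_release);
	return true;
}

bool ring_pop(struct isol_ring *r, char *out)
{
	uint32_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
	uint32_t head = atomic_load_explicit(&r->head, memory_order_acquire);

	if (tail == head)
		return false;		/* empty */

	memcpy(out, r->slots[tail % RING_SLOTS], MSG_SIZE);
	atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
	return true;
}
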
> 
> 
> > > > Blocking? The app should fail if any deferred actions are triggered as a
> > > > result of syscalls. It would give a warning with _WARN
> > >
> > > There are many supposedly innocent things, nowhere at the scale of CPU
> > > hotplug, that happen in a system and result in synchronization implemented
> > > as an IPI to every online CPU. We should consider them to be an ordinary
> > > occurrence, so there is a choice:
> > >
> > > 1. Ignore them completely and allow them in isolated mode. This will delay
> > > userspace with no indication and no isolation breaking.
> > >
> > > 2. Allow them, and notify userspace afterwards (through vdso or through
> > > userspace helper/manager over shared memory). This may be useful in those
> > > rare situations when the consequences of delay can be mitigated afterwards.
> > >
> > > 3. Make them break isolation, with userspace being notified normally (ex:
> > > with a signal in the current implementation). I guess, can be used if
> > > somehow most of the causes will be eliminated.
> > >
> > > 4. Prevent them from reaching the target CPU and make sure that whatever
> > > synchronization they are intended to cause will happen when the intended
> > > target CPU enters the kernel later. Since we may have to synchronize things
> > > like code modification, some of this synchronization has to happen very
> > > early on kernel entry.
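
To make (4) concrete, here is a plain userspace model of the idea (not
actual kernel code; the names and flag bits are invented): instead of
IPIing an isolated CPU, the requester records the needed synchronization in
a per-CPU mask, and the isolated CPU executes it at its next kernel entry:

#include <stdatomic.h>

#define NR_CPUS		64

static _Atomic unsigned long pending_sync[NR_CPUS];

#define SYNC_ICACHE	(1UL << 0)	/* e.g. code-modification sync */
#define SYNC_TLB	(1UL << 1)	/* e.g. deferred TLB flush      */

/* Called by the CPU that would otherwise send an IPI. */
void defer_sync_to(int cpu, unsigned long what)
{
	atomic_fetch_or_explicit(&pending_sync[cpu], what,
				 memory_order_release);
}

/* Called very early on kernel entry by the formerly isolated CPU. */
void run_deferred_sync(int cpu)
{
	unsigned long work = atomic_exchange_explicit(&pending_sync[cpu], 0,
						      memory_order_acquire);
	if (!work)
		return;
	if (work & SYNC_ICACHE) {
		/* here: the instruction cache / code-modification sync */
	}
	if (work & SYNC_TLB) {
		/* here: the deferred TLB flush */
	}
}
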
> 
> 
> Or move the actions to a different victim processor, as is done with RCU,
> vmstat etc.
> 
> > >
> > > I am most interested in (4), so this is what was implemented in my version
> > > of the patch (and currently I am trying to achieve completeness and, if
> > > possible, elegance of the implementation).
> >
> > Agree. (3) will be necessary as an intermediate step. The proposed
> > improvement to Christoph's reply, in this thread, separates notification
> > from syscall blockage.
> 
> I guess the notification mode will take care of the way we handle these
> interruptions.




