Re: [RFC][PATCH v2 5/5] sched: User Mode Concurrency Groups

On Fri, Jan 21, 2022 at 04:57:29PM +0000, Mark Rutland wrote:
> On Thu, Jan 20, 2022 at 04:55:22PM +0100, Peter Zijlstra wrote:
> > User Managed Concurrency Groups is an M:N threading toolkit that allows
> > constructing user space schedulers designed to efficiently manage
> > heterogeneous in-process workloads while maintaining high CPU
> > utilization (95%+).
> > 
> > XXX moar changelog explaining how this is moar awesome than
> > traditional user-space threading.
> 
> Awaiting a commit message that I can parse, I'm just looking at the entry bits
> for now. TBH I have no idea what this is actually trying to do...

Ha! yes.. I knew I was going to have to do that eventually :-)

It's basically a user-space scheduler that is subservient to the kernel
scheduler (hierarchical scheduling, where a user task is a server for
other user tasks), where a server thread is in charge of selecting which
of its worker threads gets to run. The original idea was that each
server only ever runs a single worker, but PeterO is currently
reconsidering that.
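
To make that concrete, a userspace server loop might look roughly like
the sketch below. All the helper names here are invented for
illustration; they are not the syscalls or API from this series:

/* hypothetical helpers, not this series' API */
struct server;
struct worker;
extern void reap_runnable_workers(struct server *srv);
extern struct worker *pick_next_worker(struct server *srv);
extern void umcg_run_worker(struct server *srv, struct worker *w);
extern void umcg_wait(struct server *srv);

static void server_loop(struct server *srv)
{
	for (;;) {
		/* pull workers the kernel flagged RUNNABLE into our queue */
		reap_runnable_workers(srv);

		/* hand the CPU to one of them, if any */
		struct worker *w = pick_next_worker(srv);
		if (w)
			umcg_run_worker(srv, w);

		/* sleep until the kernel reports the next worker event */
		umcg_wait(srv);
	}
}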

The *big* feature here, over traditional M:N scheduling, is that threads
can block, while traditional userspace threading is limited to
non-blocking system calls (and, per later, page-faults).

In order to make that happen we must obviously hook schedule() for
these worker threads and inform userspace (the server thread) when this
happens, such that it can select another worker thread to go vroom.
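
Schematically, that hook amounts to something like this; every name
below except task_struct is invented for illustration, the actual patch
wires this up differently:

/* sketch: called from the schedule() path when a UMCG worker blocks */
static void umcg_worker_sleeping(struct task_struct *tsk)
{
	if (!task_is_umcg_worker(tsk))		/* hypothetical predicate */
		return;

	/* flag ourselves BLOCKED in the (pinned) user-visible state word */
	umcg_set_state(tsk, UMCG_TASK_BLOCKED);

	/* kick the server so it can select another worker to go vroom */
	umcg_wake_server(tsk);

	/* the page pin only needs to cover up to the first schedule() */
	umcg_unpin_pages(tsk);
}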

Meanwhile, a worker task getting woken from schedule() must not continue
running; instead it must enter the server's ready-queue and await its
turn again. Instead of dealing with arbitrary delays deep inside the
kernel's blocking call chains, we punt and let the task run on until
return-to-user and block it there. The time between schedule() and
return-to-user is unmanaged time.

Now, since we can't readily poke at userspace memory from schedule()
(we could be holding mmap_sem etc.), we pin the worker and server pages
on sys-enter such that when we hit schedule() we can update state and
then unpin the pages. Page pin time thus runs from sys-enter to the
first schedule(), or to sys-exit, whichever comes first. This ensures
the page-pin is *short* term.
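
Something like the following; a sketch only, assuming task_struct grew
a user pointer (->umcg_task) and a struct page pointer (->umcg_page)
for this, and eliding that the server's page gets pinned as well:

static int umcg_pin_user_page(struct task_struct *tsk)
{
	unsigned long addr = (unsigned long)tsk->umcg_task;

	/* FOLL_WRITE: schedule() will write the state word through this pin */
	if (pin_user_pages_fast(addr, 1, FOLL_WRITE, &tsk->umcg_page) != 1)
		return -EFAULT;

	return 0;
}

static void umcg_unpin_user_page(struct task_struct *tsk)
{
	/* runs at the first schedule() or at sys-exit, whichever comes first */
	unpin_user_page(tsk->umcg_page);
	tsk->umcg_page = NULL;
}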

Additionally we must deal with signals :-(, the current approach is to
let them bust boundaries and run them as unmanaged time. UMCG userspace
can obviously control this by using pthread_sigmask() and friends.
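
E.g., a worker that should never eat asynchronous signals in managed
time can simply block them all and leave delivery to a dedicated
thread; this is plain POSIX, nothing UMCG-specific:

#include <pthread.h>
#include <signal.h>

static void *worker_fn(void *arg)
{
	sigset_t set;

	/* block all asynchronous signals in this worker ... */
	sigfillset(&set);
	pthread_sigmask(SIG_BLOCK, &set, NULL);

	/* ... and let a dedicated thread sigwait() for them, so signal
	 * handlers never run in the worker's (un)managed time */

	/* ... worker body ... */
	return arg;
}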

Now, the reason for irqentry_irq_enable() is mostly #PF.  When a worker
faults and blocks, we want the same things to happen.

Anyway, so workers have 3 layers of hooks:

		sys_enter
				schedule()
		sys_exit

	return-to-user

There's a bunch of paths through this (a condensed sketch of the
sys_exit leg follows the list):

 - sys_enter -> sys_exit:

	no blocking; nothing changes:
	  - sys_enter:
	    * pin pages

	  - sys_exit:
	    * unpin pages

 - sys_enter -> schedule() -> sys_exit:

	we did block:
	  - sys_enter:
	    * pin pages

	  - schedule():
	    * mark worker BLOCKED
	    * wake server (it will observe its current worker !RUNNING
	      and select a new worker or go idle)
	    * unpin pages

	  - sys_exit():
	    * mark worker RUNNABLE
	    * enqueue worker on server's runnable_list
	    * wake server (which will observe a new runnable task, add
	      it to whatever, and if it was idle go run, otherwise go
	      back to sleep to let its current worker finish)
	    * block until RUNNING

 - sys_enter -> schedule() -> sys_exit -> return_to_user:

	As above; except now we got a signal while !RUNNING. sys_exit()
	terminates, and return-to-user takes over, runs the signal, and
	on return from the signal we again block until RUNNING, or do
	the whole signal dance again if so required.
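
Condensing the blocking path above into pseudo-C (invented helpers
again; the real patch is more careful about the state transitions vs.
concurrent server updates):

static void umcg_sys_exit(struct task_struct *tsk)
{
	if (!umcg_worker_blocked(tsk)) {
		/* fast path: the syscall never slept */
		umcg_unpin_pages(tsk);
		return;
	}

	/* pages were already unpinned at the first schedule() */
	umcg_set_state(tsk, UMCG_TASK_RUNNABLE);
	umcg_enqueue_runnable(tsk);	/* onto the server's runnable_list */
	umcg_wake_server(tsk);

	/*
	 * Block until the server makes us RUNNING again; a signal busts
	 * this wait, return-to-user runs the handler and then re-enters
	 * the wait.
	 */
	umcg_wait_until_running(tsk);
}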


Does this clarify things a little?


