Re: [patch V4 4/8] sched: Make migrate_disable/enable() independent of RT

On Wed, Nov 18, 2020 at 08:48:42PM +0100, Thomas Gleixner wrote:
> From: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> 
> Now that the scheduler can deal with migrate disable properly, there is no
> real compelling reason to make it only available for RT.
> 
> There are quite a few code paths which needlessly disable preemption in
> order to prevent migration and some constructs like kmap_atomic() enforce
> it implicitly.
> 
> Making it available independent of RT makes it possible to provide a
> preemptible variant of kmap_atomic() and makes the code more consistent
> in general.
> 
> FIXME: Rework the comment in preempt.h - Peter?
> 

I haven't kept up to date, and there is clearly a dependency on patches in
tip for migrate_disable/migrate_enable. It's not 100% clear to me what
rework you're asking for, but then again, I'm not Peter!

From tip:

/**
 * migrate_disable - Prevent migration of the current task
 *
 * Maps to preempt_disable() which also disables preemption. Use
 * migrate_disable() to annotate that the intent is to prevent migration,
 * but not necessarily preemption.
 *
 * Can be invoked nested like preempt_disable() and needs the corresponding
 * number of migrate_enable() invocations.
 */

I assume that the rework is to document the distinction between
migrate_disable and preempt_disable() because it may not be clear to some
people why one should be used over another and the risk of cut&paste
cargo cult programming.

So I assume the rework is for the middle paragraph

 * Maps to preempt_disable() which also disables preemption. Use
 * migrate_disable() to annotate that the intent is to prevent migration,
 * but not necessarily preemption. The distinction is that preemption
 * disabling will protect a per-cpu structure from concurrent
 * modifications due to preemption. migrate_disable() partially protects
 * the task's address space and potentially preserves the TLB entries
 * even if preempted, such as is needed for a local IO mapping or a
 * kmap_atomic() referenced by on-stack pointers, to avoid interference
 * between user threads or kernel threads sharing the same address space.

I know there are other examples that are RT-specific, and some tricks in
percpu page alloc draining that rely on a combination of migrate_disable
and interrupt disabling to protect the structures, but the above example
might be understandable to a non-RT audience.

-- 
Mel Gorman
SUSE Labs


