Re: [patch RT 3/7] Disable RT_GROUP_SCHED in PREEMPT_RT_FULL


 



On Wed, 2012-07-11 at 22:05 +0000, Thomas Gleixner wrote:
> plain text document attachment
> (disable-rt_group_sched-in-preempt_rt_full.patch)
> Strange CPU stalls have been observed in RT when RT_GROUP_SCHED
> was configured.
> 
> Disable it for now.
> 
> Signed-off-by: Carsten Emde <C.Emde@xxxxxxxxx>
> Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> 
> ---
>  init/Kconfig |    1 +
>  1 file changed, 1 insertion(+)
> 
> Index: linux-3.4.4-rt13-64+/init/Kconfig
> ===================================================================
> --- linux-3.4.4-rt13-64+.orig/init/Kconfig
> +++ linux-3.4.4-rt13-64+/init/Kconfig
> @@ -746,6 +746,7 @@ config RT_GROUP_SCHED
>  	bool "Group scheduling for SCHED_RR/FIFO"
>  	depends on EXPERIMENTAL
>  	depends on CGROUP_SCHED
> +	depends on !PREEMPT_RT_FULL
>  	default n
>  	help
>  	  This feature lets you explicitly allocate real CPU bandwidth
> 

I turn the thing off because it doesn't make any sense to me for -rt,
and because it's busted.  The below works around isolation bustage I
encountered.  Peter didn't like it (what's to like?), but it saves the
day, so it shall live on in non-rt kernels until I hopefully someday see
RT_GROUP_SCHED being fed into a Bitwolf-9000 ;-)
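
For anyone wondering how a CPU gets stranded in the first place: with
RT_GROUP_SCHED enabled, the period timer only walks the root domain span
of whichever CPU it happens to fire on, so runqueues on CPUs outside
that span (isolcpus, or CPUs isolated via cpusets) can sit throttled
forever.  Roughly what the relevant helper looks like in kernels of this
vintage (simplified sketch, your tree may differ):

#ifdef CONFIG_RT_GROUP_SCHED
static inline const struct cpumask *sched_rt_period_mask(void)
{
	/* Only the root domain span of the CPU the timer fires on. */
	return cpu_rq(smp_processor_id())->rd->span;
}
#else
static inline const struct cpumask *sched_rt_period_mask(void)
{
	/* Without group scheduling, every online CPU gets serviced. */
	return cpu_online_mask;
}
#endif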

sched,rt: fix isolated CPUs leaving root_task_group indefinitely throttled

Root task group bandwidth replenishment must service all CPUs regardless of
where its period timer was last started.

Signed-off-by: Mike Galbraith <efault@xxxxxx>
---
 kernel/sched/rt.c |   13 +++++++++++++
 1 file changed, 13 insertions(+)

--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -782,6 +782,19 @@ static int do_sched_rt_period_timer(stru
 	const struct cpumask *span;
 
 	span = sched_rt_period_mask();
+#ifdef CONFIG_RT_GROUP_SCHED
+	/*
+	 * FIXME: isolated CPUs should really leave the root task group,
+	 * whether they are isolcpus or were isolated via cpusets, lest
+	 * the timer run on a CPU which does not service all runqueues,
+	 * potentially leaving other CPUs indefinitely throttled.  If
+	 * isolation is really required, the user will turn the throttle
+	 * off to kill the perturbations it causes anyway.  Meanwhile,
+	 * this maintains functionality for boot and/or troubleshooting.
+	 */
+	if (rt_b == &root_task_group.rt_bandwidth)
+		span = cpu_online_mask;
+#endif
 	for_each_cpu(i, span) {
 		int enqueue = 0;
 		struct rt_rq *rt_rq = sched_rt_period_rt_rq(rt_b, i);

