On Sat, Jul 06, 2019 at 12:28:01AM -0400, Theodore Ts'o wrote:
> On Fri, Jul 05, 2019 at 12:10:55PM -0700, Paul E. McKenney wrote:
> > 
> > Exactly, so although my patch might help for CONFIG_PREEMPT=n, it won't
> > help in your scenario. But looking at the dmesg from your URL above,
> > I see the following:
> 
> I just tested with CONFIG_PREEMPT=n
> 
> % grep CONFIG_PREEMPT /build/ext4-64/.config
> CONFIG_PREEMPT_NONE=y
> # CONFIG_PREEMPT_VOLUNTARY is not set
> # CONFIG_PREEMPT is not set
> CONFIG_PREEMPT_COUNT=y
> CONFIG_PREEMPTIRQ_TRACEPOINTS=y
> # CONFIG_PREEMPTIRQ_EVENTS is not set
> 
> And with your patch, it's still not helping.
> 
> I think that's because SCHED_DEADLINE is a real-time style scheduler:
> 
> In order to fulfill the guarantees that are made when a thread is
> admitted to the SCHED_DEADLINE policy, SCHED_DEADLINE threads are the
> highest priority (user controllable) threads in the system; if any
> SCHED_DEADLINE thread is runnable, it will preempt any thread
> scheduled under one of the other policies.
> 
> So a SCHED_DEADLINE process is not going to yield control of the CPU,
> even if it calls cond_resched(), until the thread has run for more than
> the sched_runtime parameter --- which for the syzkaller repro, was set
> at 26 days.
> 
> There are some safety checks when using SCHED_DEADLINE:
> 
> The kernel requires that:
> 
>     sched_runtime <= sched_deadline <= sched_period
> 
> In addition, under the current implementation, all of the parameter
> values must be at least 1024 (i.e., just over one microsecond, which is
> the resolution of the implementation), and less than 2^63. If any of
> these checks fails, sched_setattr(2) fails with the error EINVAL.
> 
> The CBS guarantees non-interference between tasks, by throttling
> threads that attempt to over-run their specified Runtime.
> 
> To ensure deadline scheduling guarantees, the kernel must prevent
> situations where the set of SCHED_DEADLINE threads is not feasible
> (schedulable) within the given constraints. The kernel thus performs
> an admittance test when setting or changing SCHED_DEADLINE policy and
> attributes. This admission test calculates whether the change is
> feasible; if it is not, sched_setattr(2) fails with the error EBUSY.
> 
> The problem is that SCHED_DEADLINE is designed for sporadic tasks:
> 
> A sporadic task is one that has a sequence of jobs, where each job is
> activated at most once per period. Each job also has a relative
> deadline, before which it should finish execution, and a computation
> time, which is the CPU time necessary for executing the job. The
> moment when a task wakes up because a new job has to be executed is
> called the arrival time (also referred to as the request time or
> release time). The start time is the time at which a task starts its
> execution. The absolute deadline is thus obtained by adding the
> relative deadline to the arrival time.
> 
> It appears that the kernel's admission control before allowing
> SCHED_DEADLINE to be set on a thread was designed for sane
> applications, and not abusive ones. Given that the process started
> doing abusive things *after* the SCHED_DEADLINE policy was set, in
> order for the kernel to figure out that in fact SCHED_DEADLINE should
> be denied for any arbitrary kernel thread would require either
> (a) solving the halting problem, or (b) being able to anticipate the
> future (in which case, we should be using that kernel algorithm to
> play the stock market :-)

26 days will definitely get you a large collection of RCU CPU stall
warnings!
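
For what it's worth, here is a rough (untested) userspace sketch of how
easily such parameters get past those checks, using the raw-syscall
approach from the sched(7) example, since glibc provides no
sched_setattr() wrapper.  The 26-day/52-day numbers below are mine, for
illustration, not the exact values from the syzkaller reproducer:

#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef SCHED_DEADLINE
#define SCHED_DEADLINE 6
#endif

/* glibc does not wrap sched_setattr(), so declare the attribute
 * structure and the raw syscall by hand, as sched(7) does. */
struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;
	uint64_t sched_deadline;
	uint64_t sched_period;
};

static int sched_setattr(pid_t pid, struct sched_attr *attr,
			 unsigned int flags)
{
	return syscall(SYS_sched_setattr, pid, attr, flags);
}

int main(void)
{
	uint64_t day_ns = 24ULL * 3600 * 1000 * 1000 * 1000;
	struct sched_attr attr = {
		.size           = sizeof(attr),
		.sched_policy   = SCHED_DEADLINE,
		.sched_runtime  = 26 * day_ns,	/* ~26 days of CPU budget */
		.sched_deadline = 52 * day_ns,	/* runtime <= deadline ...     */
		.sched_period   = 52 * day_ns,	/* ... <= period, so no EINVAL */
	};

	/*
	 * Needs CAP_SYS_NICE.  The requested bandwidth is runtime/period
	 * = 50%, comfortably under the default 95% cap, so the EBUSY
	 * admission test passes as well -- and cond_resched() will not
	 * give up the CPU until that 26-day budget is consumed.
	 */
	if (sched_setattr(0, &attr, 0))
		perror("sched_setattr");
	return 0;
}

As long as runtime <= deadline <= period, everything lies in
[1024, 2^63), and the requested bandwidth fits under the default 95%
cap, the admission test has no reason to say no to a multi-day runtime
budget.
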
Thank you for digging into this, Ted.

I suppose RCU could take the dueling-banjos approach and use
increasingly aggressive scheduler policies itself, up to and including
SCHED_DEADLINE, until it started getting decent forward progress.
However, that sounds like something that just might have unintended
consequences, particularly if other kernel subsystems were to also play
similar games of dueling banjos.

Alternatively, is it possible to provide stricter admission control?
For example, what sorts of policies do SCHED_DEADLINE users actually
use?

							Thanx, Paul