On Fri, Jul 05, 2019 at 05:48:31PM +0200, Dmitry Vyukov wrote:
> On Fri, Jul 5, 2019 at 5:17 PM Paul E. McKenney <paulmck@xxxxxxxxxxxxx> wrote:
> >
> > On Fri, Jul 05, 2019 at 03:24:26PM +0200, Dmitry Vyukov wrote:
> > > On Thu, Jun 27, 2019 at 12:47 AM Theodore Ts'o <tytso@xxxxxxx> wrote:
> > > >
> > > > More details about what is going on.  First, it requires root,
> > > > because one of the things that is required is using sched_setattr
> > > > (which is enough to shoot yourself in the foot):
> > > >
> > > > sched_setattr(0, {size=0, sched_policy=0x6 /* SCHED_??? */, sched_flags=0, sched_nice=0, sched_priority=0, sched_runtime=2251799813724439, sched_deadline=4611686018427453437, sched_period=0}, 0) = 0
> > > >
> > > > This is setting the scheduler policy to be SCHED_DEADLINE, with a
> > > > runtime parameter of 2251799.813724439 seconds (or 26 days) and a
> > > > deadline of 4611686018.427453437 seconds (or 146 *years*).  This
> > > > means a particular kernel thread can run for up to 26 **days**
> > > > before it is scheduled away, and if a kernel thread gets woken up
> > > > or sent a signal, no worries, it will wake up roughly seven times
> > > > the interval that Rip Van Winkle spent snoozing in a cave in the
> > > > Catskill Mountains (in Washington Irving's short story).
> > > >
> > > > We then kick off a half-dozen threads all running:
> > > >
> > > > sendfile(fd, fd, &pos, 0x8080fffffffe);
> > > >
> > > > (and since count is a ridiculously large number, this gets cut
> > > > down to):
> > > >
> > > > sendfile(fd, fd, &pos, 2147479552);
> > > >
> > > > Is it any wonder that we are seeing RCU stalls?  :-)
> > >
> > > +Peter, Ingo for sched_setattr and +Paul for rcu
> > >
> > > First of all: is it a semi-intended result of a root (CAP_SYS_NICE)
> > > process doing local DoS by abusing sched_setattr?  It would be
> > > perfectly reasonable to starve other processes, but I am not sure
> > > about rcu.  In the end the high-prio process can use rcu itself,
> > > and then it will simply blow system memory by stalling rcu.  So it
> > > seems that rcu stalls should not happen as a result of weird
> > > sched_setattr values.  If that is the case, what needs to be fixed?
> > > sched_setattr?  rcu?  sendfile?
> >
> > Does the (untested, probably does not even build) patch shown below
> > help?  This patch assumes that the kernel was built with
> > CONFIG_PREEMPT=n.  And that I found all the tight loops on the
> > do_sendfile() code path.
>
> The config used when this happened is referenced from here:
> https://syzkaller.appspot.com/bug?extid=4bfbbf28a2e50ab07368
> and it contains:
> CONFIG_PREEMPT=y
>
> So... what does this mean?  The loop should have been preempted
> without the cond_resched() then, right?

Exactly, so although my patch might help for CONFIG_PREEMPT=n, it won't
help in your scenario.  But looking at the dmesg from your URL above,
I see the following:

rcu: rcu_preempt kthread starved for 10549 jiffies! g8969 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0

And, prior to that:

rcu: All QSes seen, last rcu_preempt kthread activity 10503 (4295056736-4295046233), jiffies_till_next_fqs=1, root ->qsmask 0x0

In other words, the grace period has finished, but RCU's grace-period
kthread hasn't gotten a chance to run, and thus hasn't marked it as
completed.

The standard workaround is to set the rcutree.kthread_prio kernel boot
parameter to a comfortably high real-time priority.  At least assuming
that syzkaller isn't setting the scheduling priority of random
CPU-bound tasks to RT priority 99 or some such.  ;-)

Does that work for you?
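In case it is useful: that workaround is just a kernel boot parameter.
A minimal sketch, using 50 purely as an illustrative SCHED_FIFO
priority (the exact value is a judgment call, as long as it outranks
the CPU-bound tasks in question):

	rcutree.kthread_prio=50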
							Thanx, Paul

> > > If this is semi-intended, the only option I see is to disable
> > > something in syzkaller: sched_setattr entirely, or drop
> > > CAP_SYS_NICE, or ...?  Any preference either way?
> >
> > Long-running tight loops in the kernel really should contain
> > cond_resched() or better.
> >
> > 							Thanx, Paul
> >
> > ------------------------------------------------------------------------
> >
> > diff --git a/fs/splice.c b/fs/splice.c
> > index 25212dcca2df..50aa3286764a 100644
> > --- a/fs/splice.c
> > +++ b/fs/splice.c
> > @@ -985,6 +985,7 @@ ssize_t splice_direct_to_actor(struct file *in, struct splice_desc *sd,
> >  			sd->pos = prev_pos + ret;
> >  			goto out_release;
> >  		}
> > +		cond_resched();
> >  	}
> >
> >  done:
> >
>
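For reference, below is a minimal, untested userspace sketch of the
sched_setattr() call quoted at the top of this thread, with the values
taken from the strace line and the struct layout per the
sched_setattr(2) man page.  SCHED_DEADLINE's runtime/deadline/period
fields are in nanoseconds, which is where the 26-day and 146-year
figures come from, and there is no glibc wrapper, so it goes through
syscall(2) directly:

/*
 * Untested illustration only: roughly what syzkaller's reproducer did.
 * Struct layout follows sched_setattr(2); all times are nanoseconds.
 */
#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef SCHED_DEADLINE
#define SCHED_DEADLINE 6		/* the 0x6 in the strace output */
#endif

struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;		/* used by SCHED_NORMAL/SCHED_BATCH only */
	uint32_t sched_priority;	/* used by SCHED_FIFO/SCHED_RR only */
	uint64_t sched_runtime;		/* ns: ~26 days below */
	uint64_t sched_deadline;	/* ns: ~146 years below */
	uint64_t sched_period;		/* ns: 0 is taken as "same as deadline" */
};

int main(void)
{
	struct sched_attr attr = {
		.size		= sizeof(attr),
		.sched_policy	= SCHED_DEADLINE,
		.sched_runtime	= 2251799813724439ULL,
		.sched_deadline	= 4611686018427453437ULL,
		.sched_period	= 0,
	};

	/* pid 0 means the calling thread; needs CAP_SYS_NICE (root). */
	if (syscall(SYS_sched_setattr, 0, &attr, 0))
		perror("sched_setattr");
	return 0;
}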