* Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx> wrote:

> On Fri, Jun 17, 2011 at 12:58:03AM +0200, Ingo Molnar wrote:
> >
> > * Andi Kleen <ak@xxxxxxxxxxxxxxx> wrote:
> >
> > > > There's a crazy solution for that: the idle thread could process
> > > > RCU callbacks carefully, as if it was running user-space code.
> > >
> > > In Ben's kernel NFS server case the system may not be idle.
> >
> > An always-100%-busy NFS server is very unlikely, but even in the
> > hypothetical case a kernel NFS server is really performing system
> > calls from a kernel thread in essence. If it doesn't do it explicitly
> > then its main loop can easily include a "check RCU callbacks" call.
>
> As long as they make sure to call it in a clean environment: no
> locks held and so on. But I am a bit worried about the possibility
> of someone forgetting to put one of these where it is needed -- it
> would work just fine for most workloads, but could fail only for
> rare workloads.

Yeah, some sort of worst-case-tick mechanism would guarantee that we
won't remain without RCU GC.

> That said, invoking RCU core/callback processing from the scheduler
> context certainly sounds like an interesting way to speed up grace
> periods.

It also moves whatever priority logic is needed closer to the
scheduler, which has to touch those data structures anyway.

RCU, at least partially, is a scheduler-driven garbage collector even
today: beyond context-switch quiescent states, the main practical role
of the per-CPU timer tick itself is scheduling.

So having it close to where we do context switches anyway looks pretty
natural - worth trying. It might not work out in practice, but at first
sight it would simplify a few things, I think.

Thanks,

	Ingo
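For concreteness, a minimal sketch of the kind of per-iteration check
discussed above, in kernel-style C. busy_service_thread() and
do_pending_work() are made-up names standing in for something like the
knfsd main loop, and cond_resched() is used only as a stand-in for
whatever explicit RCU hook would actually be added; the point is simply
that the call sits at a no-locks-held point of the loop.

#include <linux/kthread.h>
#include <linux/sched.h>

/* Hypothetical work function, standing in for e.g. handling one NFS request. */
static void do_pending_work(void *data)
{
	/* ... service one request, possibly under rcu_read_lock() ... */
}

static int busy_service_thread(void *data)
{
	while (!kthread_should_stop()) {
		do_pending_work(data);

		/*
		 * Clean environment: no locks held, preemption enabled.
		 * A real implementation might call an explicit "check RCU
		 * callbacks" hook here; in this sketch cond_resched() stands
		 * in for it, since any context switch it triggers is already
		 * a quiescent state as far as RCU is concerned.
		 */
		cond_resched();
	}
	return 0;
}

As Paul notes, the weakness is that nothing forces such a call to be
present in every busy loop, which is what the worst-case-tick fallback
mentioned above is meant to cover.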