On Wed, Mar 15, 2023 at 12:21:48PM +0000, Joel Fernandes wrote:
> On Fri, Mar 10, 2023 at 09:55:02AM +0100, Uladzislau Rezki wrote:
> > On Thu, Mar 09, 2023 at 10:10:56PM +0000, Joel Fernandes wrote:
> > > On Thu, Mar 09, 2023 at 01:57:42PM +0100, Uladzislau Rezki wrote:
> > > [..]
> > > > > > > > > See this commit:
> > > > > > > > >
> > > > > > > > > 3705b88db0d7cc ("rcu: Add a module parameter to force use of
> > > > > > > > > expedited RCU primitives")
> > > > > > > > >
> > > > > > > > > Antti provided this commit precisely in order to allow Android
> > > > > > > > > devices to expedite the boot process and to shut off the
> > > > > > > > > expediting at a time of Android userspace's choosing.  So Android
> > > > > > > > > has been making this work for about ten years, which strikes me
> > > > > > > > > as an adequate proof of concept.  ;-)
> > > > > > > >
> > > > > > > > Thanks for the pointer. That's true. Looking at Android sources, I
> > > > > > > > find that Android Mediatek devices at least are setting
> > > > > > > > rcu_expedited to 1 at a late stage of their userspace boot (which
> > > > > > > > is weird, it should be set to 1 as early as possible), and
> > > > > > > > interestingly I cannot find them resetting it back to 0! Maybe
> > > > > > > > they set rcu_normal to 1? But I cannot find that either. Vlad? :P
> > > > > > >
> > > > > > > Interesting. Though this is consistent with Antti's commit log,
> > > > > > > where he talks about expediting grace periods but not unexpediting
> > > > > > > them.
> > > > > > >
> > > > > > Do you think we need to unexpedite it? :))))
> > > > >
> > > > > Android runs on smallish systems, so quite possibly not!
> > > > >
> > > > We keep it enabled and never unexpedite it. The reason is performance.
> > > > I have done some app-launch-time analysis with it enabled and disabled.
> > > >
> > > > The expedited case is much better when it comes to app launch time:
> > > > launching an app takes ~25% less time compared with the unexpedited
> > > > variant. So we have a big gain here.
> > >
> > > Wow, that's huge. I wonder if you can dig deeper and find out why that
> > > is, as the callbacks may need to be synchronize_rcu_expedited() then,
> > > since it could be slowing down other use cases! I find it hard to
> > > believe that real-time workloads will run better without those
> > > callbacks being always-expedited if it actually gives back 25% in
> > > performance!
> > >
> > I can dig further, but at a high level I think there are some spots
> > that show better performance when expedited is set. I mean that
> > synchronize_rcu() blocks the caller for less time.
> >
> > The problem with a regular synchronize_rcu() is that it can trigger big
> > latency delays for the caller. For example, in the nocb case we do not
> > know where in the list our callback is located, nor when it will be
> > invoked to unblock the caller.
> >
> > I have already mentioned this somewhere: it probably makes sense to wake
> > up callers directly from the GP kthread instead of via the nocb kthread
> > that invokes our callbacks one by one.
>
> Looking forward to your optimization. To overcome the issue Paul mentioned
> about wakeup overhead, I wonder whether it is possible to find out cheaply
> how many tasks there are to wake, and, for the common case of a single
> task sleeping in synchronize_rcu(), wake just that one. But there could be
> dragons..

A per-rcu_node count of the number of tasks needing wakeups might work.
But for best results, there would be an array of such numbers indexed by
the low-order bits of the grace-period number (excluding the bottom status
bits).  The callback-offloading code uses such arrays, for example, though
not for counts of sleeping tasks.  (There cannot be that many rcuo kthreads
per group, so there has been no need to count them.)

							Thanx, Paul
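
P.S.  On the off-chance it helps, here is the rough shape of what I have
in mind.  This is an untested, uncompiled sketch, and every name in it
other than RCU_SEQ_CTR_SHIFT (from kernel/rcu/rcu.h) is invented for
illustration rather than taken from the current code:

/* Illustrative sketch only -- not actual kernel code. */
#define GP_WAKE_SLOTS 4		/* Must be a power of two. */

struct gp_wake_counts {
	/*
	 * One count per in-flight grace period, indexed by the
	 * grace-period sequence number with the bottom state bits
	 * (RCU_SEQ_CTR_SHIFT of them) shifted away.
	 */
	atomic_t nwaiters[GP_WAKE_SLOTS];
};

static int gp_wake_idx(unsigned long gp_seq)
{
	return (gp_seq >> RCU_SEQ_CTR_SHIFT) & (GP_WAKE_SLOTS - 1);
}

/* A task about to sleep in synchronize_rcu() registers itself. */
static void gp_wake_register(struct gp_wake_counts *gwc, unsigned long gp_seq)
{
	atomic_inc(&gwc->nwaiters[gp_wake_idx(gp_seq)]);
}

/*
 * The grace-period kthread checks whether anyone at all needs a wakeup
 * for the just-ended grace period, keeping the common no-waiters case
 * cheap.
 */
static bool gp_wake_needed(struct gp_wake_counts *gwc, unsigned long gp_seq)
{
	return atomic_read(&gwc->nwaiters[gp_wake_idx(gp_seq)]) != 0;
}

One instance of this would hang off each rcu_node structure, each waiter
would decrement its slot when woken, and GP_WAKE_SLOTS would only need to
cover the number of grace periods that can be in flight at once.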