On Thu, Jun 06, 2024 at 09:49:59AM -0700, Paul E. McKenney wrote:
> On Thu, Jun 06, 2024 at 09:16:08AM +0530, Neeraj Upadhyay wrote:
> >
> >
> > On 6/6/2024 12:08 AM, Paul E. McKenney wrote:
> > > On Wed, Jun 05, 2024 at 02:09:34PM +0200, Frederic Weisbecker wrote:
> > >> On Tue, Jun 04, 2024 at 03:23:48PM -0700, Paul E. McKenney wrote:
> > >>> From: Neeraj Upadhyay <Neeraj.Upadhyay@xxxxxxx>
> > >>>
> > >>> When all wait heads are in use, which can happen when
> > >>> rcu_sr_normal_gp_cleanup_work()'s callback processing
> > >>> is slow, any new synchronize_rcu() user's rcu_synchronize
> > >>> node's processing is deferred to future grace periods. This
> > >>> can result in a long list of synchronize_rcu() invocations
> > >>> waiting for full grace-period processing, which can delay
> > >>> the freeing of memory. Mitigate this problem by using the
> > >>> first node in the list as the wait tail when all wait heads
> > >>> are in use. While methods to speed up callback processing
> > >>> would be needed to recover from this situation, allowing new
> > >>> nodes to complete their grace period can help prevent delays
> > >>> due to a fixed number of wait head nodes.
> > >>>
> > >>> Signed-off-by: Neeraj Upadhyay <Neeraj.Upadhyay@xxxxxxx>
> > >>> Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxx>
> > >>
> > >> IIRC we agreed that this patch could be a step too far that
> > >> made an already not-so-simple state machine even less simple,
> > >> breaking the wait_head-based flow.
> > >
> > > True, which is why we agreed not to submit it into the v6.10 merge window.
> > >
> > > And I don't recall us saying what merge window to send it to.
> > >
> > >> Should we postpone this change until it is observed that a workqueue
> > >> not being scheduled for 5 grace periods is a real issue?
> > >
> > > Neeraj, thoughts?  Or, better yet, test results?  ;-)
> >
> > Yes, I agree that we should postpone this change until we see it as a
> > real problem. I had run a test that invoked synchronize_rcu() from all
> > CPUs on a 96-core system in parallel, but I didn't specifically check
> > whether this scenario was hit. I will run the RCU torture test with
> > this change.
>
> Very well, I will drop this patch with the expectation that you will
> re-post it if a problem does arise.
>
Thank you! We discussed it before and came to the conclusion that it adds
extra complexity. Once we hit an issue with delays, we can reintroduce it
and explain the workload that triggers it.

--
Uladzislau Rezki
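
For context on the mechanism being debated: synchronize_rcu() callers are
queued on a list, and a fixed pool of dummy "wait head" nodes marks
grace-period boundaries. When the cleanup worker falls behind and every wait
head is busy, newly queued callers would otherwise have to wait for a later
grace period; the dropped patch falls back to treating the first queued node
itself as the wait tail. The standalone sketch below illustrates only that
fallback idea. All names here (wait_node, sr_state, pick_wait_tail(),
WAIT_HEAD_MAX) are invented for illustration and are not the kernel's actual
implementation.

	/*
	 * Simplified, self-contained illustration of the fallback discussed
	 * above; NOT the kernel's code.  A fixed pool of dummy "wait head"
	 * nodes marks grace-period boundaries in a list of waiters.  When
	 * every wait head is in use, the fallback marks the first queued
	 * waiter itself as the wait tail so new waiters still get a
	 * boundary for the next grace period.
	 */
	#include <stdbool.h>
	#include <stddef.h>
	#include <stdio.h>

	#define WAIT_HEAD_MAX 5	/* analogous to the fixed number of wait heads */

	struct wait_node {
		struct wait_node *next;
		bool is_wait_head;	/* dummy boundary node from the pool? */
		bool in_use;
	};

	struct sr_state {
		struct wait_node heads[WAIT_HEAD_MAX];	/* fixed wait-head pool */
		struct wait_node *llist;		/* head of queued waiters */
		struct wait_node *wait_tail;		/* boundary for next GP */
	};

	/* Try to grab a free dummy wait head from the fixed pool. */
	static struct wait_node *get_free_wait_head(struct sr_state *sr)
	{
		int i;

		for (i = 0; i < WAIT_HEAD_MAX; i++) {
			if (!sr->heads[i].in_use) {
				sr->heads[i].in_use = true;
				sr->heads[i].is_wait_head = true;
				return &sr->heads[i];
			}
		}
		return NULL;	/* all wait heads busy: cleanup work is lagging */
	}

	/*
	 * Pick the boundary node for the upcoming grace period.  Without the
	 * fallback, a NULL here would defer newly queued waiters to a later
	 * grace period; with it, the first queued waiter becomes the tail.
	 */
	static struct wait_node *pick_wait_tail(struct sr_state *sr)
	{
		struct wait_node *wh = get_free_wait_head(sr);

		if (wh)
			return wh;
		return sr->llist;	/* fallback: reuse the first queued node */
	}

	int main(void)
	{
		struct sr_state sr = { 0 };
		struct wait_node waiter = { 0 };
		int i;

		/* Exhaust the wait-head pool, as a slow cleanup worker would. */
		for (i = 0; i < WAIT_HEAD_MAX; i++)
			get_free_wait_head(&sr);

		sr.llist = &waiter;	/* one ordinary waiter is queued */
		sr.wait_tail = pick_wait_tail(&sr);

		printf("wait tail is %s\n",
		       sr.wait_tail->is_wait_head ? "a pooled wait head"
						  : "the first queued waiter");
		return 0;
	}

The trade-off raised in the thread maps directly onto pick_wait_tail(): the
fallback path mixes ordinary waiter nodes into the role previously reserved
for the dedicated wait heads, which is the added state-machine complexity the
reviewers preferred to avoid until the exhaustion case is shown to occur in
practice.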