On Thu, Jan 09, 2025 at 07:56:51AM -0800, Joe Perches wrote:
> On Wed, 2025-01-08 at 11:24 -0800, David Reaver wrote:
> > The deprecated_apis map was created in [1] so checkpatch would flag
> > deprecated RCU APIs. These deprecated APIs have since been removed from the
> > kernel. This patch removes them from this map so checkpatch doesn't waste
> > time looking for them, and so readers of checkpatch looking for deprecated
> > APIs don't waste time searching for them.
>
> Acked-by: Joe Perches <joe@xxxxxxxxxxx>
>
> Maybe remove the references from rcupdate.h one day too.

Good point, please see below.  Some instances remain in
Documentation/RCU/RTFP.txt, but these are needed to record the history.

							Thanx, Paul

------------------------------------------------------------------------

commit a8280286a6425f26785aeedfe9b209a65ca1d6fd
Author: Paul E. McKenney <paulmck@xxxxxxxxxx>
Date:   Thu Jan 9 08:52:15 2025 -0800

    rcu: Remove references to old grace-period-wait primitives

    The rcu_barrier_sched(), synchronize_sched(), and synchronize_rcu_bh()
    RCU API members have been gone for many years.  This commit therefore
    removes non-historical instances of them.

    Reported-by: Joe Perches <joe@xxxxxxxxxxx>
    Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxx>

diff --git a/Documentation/RCU/rcubarrier.rst b/Documentation/RCU/rcubarrier.rst
index 6da7f66da2a80..12a7b059654f7 100644
--- a/Documentation/RCU/rcubarrier.rst
+++ b/Documentation/RCU/rcubarrier.rst
@@ -329,10 +329,7 @@ Answer:
 	was first added back in 2005.  This is because on_each_cpu()
 	disables preemption, which acted as an RCU read-side critical
 	section, thus preventing CPU 0's grace period from completing
-	until on_each_cpu() had dealt with all of the CPUs.  However,
-	with the advent of preemptible RCU, rcu_barrier() no longer
-	waited on nonpreemptible regions of code in preemptible kernels,
-	that being the job of the new rcu_barrier_sched() function.
+	until on_each_cpu() had dealt with all of the CPUs.

 	However, with the RCU flavor consolidation around v4.20, this
 	possibility was once again ruled out, because the consolidated
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 6672e55deeaa4..9b05db8ff0619 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -800,11 +800,9 @@ do { \
  * sections, invocation of the corresponding RCU callback is deferred
  * until after the all the other CPUs exit their critical sections.
  *
- * In v5.0 and later kernels, synchronize_rcu() and call_rcu() also
- * wait for regions of code with preemption disabled, including regions of
- * code with interrupts or softirqs disabled.  In pre-v5.0 kernels, which
- * define synchronize_sched(), only code enclosed within rcu_read_lock()
- * and rcu_read_unlock() are guaranteed to be waited for.
+ * Both synchronize_rcu() and call_rcu() also wait for regions of code
+ * with preemption disabled, including regions of code with interrupts or
+ * softirqs disabled.
  *
  * Note, however, that RCU callbacks are permitted to run concurrently
  * with new RCU read-side critical sections.  One way that this can happen
@@ -859,11 +857,10 @@ static __always_inline void rcu_read_lock(void)
  * rcu_read_unlock() - marks the end of an RCU read-side critical section.
  *
  * In almost all situations, rcu_read_unlock() is immune from deadlock.
- * In recent kernels that have consolidated synchronize_sched() and
- * synchronize_rcu_bh() into synchronize_rcu(), this deadlock immunity
- * also extends to the scheduler's runqueue and priority-inheritance
- * spinlocks, courtesy of the quiescent-state deferral that is carried
- * out when rcu_read_unlock() is invoked with interrupts disabled.
+ * This deadlock immunity also extends to the scheduler's runqueue
+ * and priority-inheritance spinlocks, courtesy of the quiescent-state
+ * deferral that is carried out when rcu_read_unlock() is invoked with
+ * interrupts disabled.
  *
  * See rcu_read_lock() for more information.
  */
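For readers following along, here is a minimal sketch of what the
consolidated semantics documented above mean in practice.  This is
hypothetical illustration code, not part of this patch: the struct,
the global pointer, and the function names are all made up, and a
single updater is assumed (so no update-side locking is shown).

#include <linux/rcupdate.h>
#include <linux/preempt.h>
#include <linux/slab.h>

struct foo {
	int val;
};

static struct foo __rcu *global_foo;

/* Classic RCU reader. */
static int read_foo(void)
{
	struct foo *p;
	int ret = 0;

	rcu_read_lock();
	p = rcu_dereference(global_foo);
	if (p)
		ret = p->val;
	rcu_read_unlock();
	return ret;
}

/*
 * Preemption-disabled region: under the consolidated RCU of v4.20
 * and later, this is also an RCU reader that synchronize_rcu()
 * waits for, with no need for the old synchronize_sched() on the
 * update side.
 */
static int read_foo_preempt_off(void)
{
	struct foo *p;
	int ret = 0;

	preempt_disable();
	p = rcu_dereference_sched(global_foo);
	if (p)
		ret = p->val;
	preempt_enable();
	return ret;
}

/* Updater: a single synchronize_rcu() waits for both readers above. */
static void update_foo(struct foo *newp)
{
	struct foo *oldp = rcu_dereference_protected(global_foo, 1);

	rcu_assign_pointer(global_foo, newp);
	synchronize_rcu();
	kfree(oldp);
}

In a pre-v4.20 kernel, the second reader would have required
synchronize_sched() on the update side, which is exactly the
distinction the deleted documentation text was describing.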