Hi Ming

On 09/18/2018 06:19 PM, Ming Lei wrote:
> +	unsigned long __percpu *percpu_count;
> +
> +	WARN_ON_ONCE(__ref_is_percpu(ref, &percpu_count));
> +
> +	/* get one extra ref for avoiding race with .release */
> +	rcu_read_lock_sched();
> +	atomic_long_add(1, &ref->count);
> +	rcu_read_unlock_sched();
> +}

The rcu_read_lock_sched() here is redundant: we are already inside a
spin_lock_irqsave() critical section, which disables preemption and
therefore already implies an RCU-sched read-side critical section.

The atomic_long_add(1, &ref->count) may have two results:

1. ref->count > 1 afterwards
   The count will not drop to zero any more.

2. ref->count == 1 afterwards
   The count had already dropped to zero before the add, so .release
   may already be running.

Thanks
Jianchao
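
P.S. To make case 2 concrete, below is a minimal userspace sketch using
C11 atomics (ref_tryget() and ref_count are made-up names for
illustration only, not the percpu_ref API). An unconditional add cannot
tell the two results apart, whereas an inc-not-zero style check, in the
spirit of the kernel's atomic_long_inc_not_zero(), refuses to hand out
a reference once the count has already hit zero:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for ref->count; starts with one owner reference. */
static atomic_long ref_count = 1;

/* Take a reference only if the count has not already dropped to
 * zero (mirrors the semantics of atomic_long_inc_not_zero()).
 */
static bool ref_tryget(void)
{
	long old = atomic_load(&ref_count);

	do {
		if (old == 0)
			return false;	/* case 2: .release may be running */
	} while (!atomic_compare_exchange_weak(&ref_count, &old, old + 1));

	return true;			/* case 1: count cannot hit zero now */
}

int main(void)
{
	printf("tryget: %d\n", ref_tryget());	/* 1: count was still 1 */
	atomic_fetch_sub(&ref_count, 2);	/* both references dropped */
	printf("tryget: %d\n", ref_tryget());	/* 0: already at zero */
	return 0;
}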