On Fri, 10 Jan 2025 20:25:57 -0800 Suren Baghdasaryan <surenb@xxxxxxxxxx> wrote:

> -bool __refcount_add_not_zero(int i, refcount_t *r, int *oldp)
> +bool __refcount_add_not_zero_limited(int i, refcount_t *r, int *oldp,
> +				     int limit)
>  {
>  	int old = refcount_read(r);
>
>  	do {
>  		if (!old)
>  			break;
> +
> +		if (statically_true(limit == INT_MAX))
> +			continue;
> +
> +		if (i > limit - old) {
> +			if (oldp)
> +				*oldp = old;
> +			return false;
> +		}
>  	} while (!atomic_try_cmpxchg_relaxed(&r->refs, &old, old + i));

The acquire version should be used here; see atomic_long_try_cmpxchg_acquire()
in kernel/locking/rwsem.c.

Why not use atomic_long_t directly, without bothering to add this limited
version?