On Tue, Nov 05, 2024 at 12:30:37PM -0600, Haris Okanovic wrote:
> Relaxed poll until desired mask/value is observed at the specified
> address or timeout.
>
> This macro is a specialization of the generic smp_cond_load_relaxed(),
> which takes a simple mask/value condition (vcond) instead of an
> arbitrary expression. It allows architectures to better specialize the
> implementation, e.g. to enable wfe() polling of the address on arm.

This doesn't make sense to me. The existing smp_cond_load() functions
already use wfe on arm64 and I don't see why we need a special helper
just to do a mask.

> Signed-off-by: Haris Okanovic <harisokn@xxxxxxxxxx>
> ---
>  include/asm-generic/barrier.h | 25 +++++++++++++++++++++++++
>  1 file changed, 25 insertions(+)
>
> diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
> index d4f581c1e21d..112027eabbfc 100644
> --- a/include/asm-generic/barrier.h
> +++ b/include/asm-generic/barrier.h
> @@ -256,6 +256,31 @@ do { \
>  })
>  #endif
>
> +/**
> + * smp_vcond_load_relaxed() - (Spin) wait until an expected value at address
> + * with no ordering guarantees. Spins until `(*addr & mask) == val` or
> + * `nsecs` elapse, and returns the last observed `*addr` value.
> + *
> + * @nsecs: timeout in nanoseconds
> + * @addr: pointer to an integer
> + * @mask: a bit mask applied to read values
> + * @val: Expected value with mask
> + */
> +#ifndef smp_vcond_load_relaxed

I know naming is hard, but "vcond" is especially terrible. Perhaps
smp_cond_load_timeout()?

Will
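
For context, the kernel-doc quoted above documents the proposed semantics
(spin until `(*addr & mask) == val` or until `nsecs` nanoseconds elapse,
returning the last observed value). A generic fallback matching that
description might look roughly like the sketch below. This is illustrative
only: the body of the posted patch is not quoted in this reply, and the use
of local_clock(), READ_ONCE() and cpu_relax() here is an assumption about
how such a fallback could be written, not the patch's actual implementation.

/*
 * Illustrative sketch only (not the posted patch): relaxed poll of *addr
 * until (*addr & mask) == val or until @nsecs nanoseconds have elapsed,
 * returning the last value read.
 */
#ifndef smp_vcond_load_relaxed
#define smp_vcond_load_relaxed(nsecs, addr, mask, val)			\
({									\
	const u64 __start = local_clock();				\
	u64 __nsecs = (nsecs);						\
	typeof(addr) __addr = (addr);					\
	typeof(*__addr) __mask = (mask);				\
	typeof(*__addr) __val = (val);					\
	typeof(*__addr) __cur;						\
									\
	for (;;) {							\
		__cur = READ_ONCE(*__addr);				\
		if ((__cur & __mask) == __val)				\
			break;						\
		if (local_clock() - __start >= __nsecs)			\
			break;						\
		cpu_relax();						\
	}								\
	__cur;								\
})
#endif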