On Thu, Jul 25, 2019 at 3:59 PM Ondrej Mosnacek <omosnace@xxxxxxxxxx> wrote:
> As noted in Documentation/atomic_t.txt, if we don't need the RMW atomic
> operations, we should only use READ_ONCE()/WRITE_ONCE() +
> smp_rmb()/smp_wmb() where necessary (or the combined variants
> smp_load_acquire()/smp_store_release()).
>
> This patch converts the sidtab code to use regular u32 for the counter
> and reverse lookup cache and use the appropriate operations instead of
> atomic_get()/atomic_set(). Note that when reading/updating the reverse
> lookup cache we don't need memory barriers as it doesn't need to be
> consistent or accurate. We can now also replace some atomic ops with
> regular loads (when under spinlock) and stores (for conversion target
> fields that are always accessed under the master table's spinlock).
>
> We can now also bump SIDTAB_MAX to U32_MAX as we can use the full u32
> range again.
>
> Suggested-by: Jann Horn <jannh@xxxxxxxxxx>
> Signed-off-by: Ondrej Mosnacek <omosnace@xxxxxxxxxx>

Looks good to me; you can add "Reviewed-by: Jann Horn <jannh@xxxxxxxxxx>" if you want.