On Wed, Aug 14, 2019 at 9:33 AM Ondrej Mosnacek <omosnace@xxxxxxxxxx> wrote:
>
> As noted in Documentation/atomic_t.txt, if we don't need the RMW atomic
> operations, we should only use READ_ONCE()/WRITE_ONCE() +
> smp_rmb()/smp_wmb() where necessary (or the combined variants
> smp_load_acquire()/smp_store_release()).
>
> This patch converts the sidtab code to use a regular u32 for the counter
> and reverse lookup cache, using the appropriate operations instead of
> atomic_read()/atomic_set(). Note that when reading/updating the reverse
> lookup cache we don't need memory barriers, as it doesn't need to be
> consistent or accurate. We can now also replace some atomic ops with
> regular loads (when under the spinlock) and stores (for conversion target
> fields that are always accessed under the master table's spinlock).
>
> We can now also bump SIDTAB_MAX to U32_MAX, as we can use the full u32
> range again.
>
> Suggested-by: Jann Horn <jannh@xxxxxxxxxx>
> Signed-off-by: Ondrej Mosnacek <omosnace@xxxxxxxxxx>
> Reviewed-by: Jann Horn <jannh@xxxxxxxxxx>
> ---
>
> v2: Added comments detailing access semantics of sidtab fields.
>
>  security/selinux/ss/sidtab.c | 48 ++++++++++++++++--------------------
>  security/selinux/ss/sidtab.h | 19 ++++++++++----
>  2 files changed, 35 insertions(+), 32 deletions(-)

Sorry for the delay on this, it was a casualty of LSS-NA.  Regardless,
this looks better, I just merged it into selinux/next - thanks!

-- 
paul moore
www.paul-moore.com