In commit 66f8e2f03c02 ("selinux: sidtab reverse lookup hash table")
the corresponding load is moved under the spin lock, so there is no
race possible and we can read the count directly. The
smp_store_release() is still needed to avoid racing with the lock-free
readers.

Signed-off-by: Ondrej Mosnacek <omosnace@xxxxxxxxxx>
---
 security/selinux/ss/sidtab.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/security/selinux/ss/sidtab.c b/security/selinux/ss/sidtab.c
index a308ce1e6a13..f90397284a57 100644
--- a/security/selinux/ss/sidtab.c
+++ b/security/selinux/ss/sidtab.c
@@ -276,8 +276,7 @@ int sidtab_context_to_sid(struct sidtab *s, struct context *context,
 	if (*sid)
 		goto out_unlock;
 
-	/* read entries only after reading count */
-	count = smp_load_acquire(&s->count);
+	count = s->count;
 	convert = s->convert;
 
 	/* bail out if we already reached max entries */
-- 
2.25.2
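
For readers less familiar with the release/acquire pairing the commit message
relies on, here is a minimal userspace sketch of the same publication pattern,
using C11 atomics and a pthread mutex standing in for the sidtab spin lock.
The names (struct table, add_entry, lookup) are illustrative only, not the
kernel's sidtab API; the point is that the writer, already serialized by the
lock, can read the counter with a plain load, while the release store is still
required so that lock-free readers pairing it with an acquire load never
observe the new count before the entry it publishes.

/*
 * Userspace analogue of the pattern in the patch (C11 atomics + pthreads).
 * Hypothetical names; not the kernel sidtab code.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define MAX_ENTRIES 128

struct table {
	int entries[MAX_ENTRIES];
	atomic_uint count;		/* number of published entries */
	pthread_mutex_t lock;		/* stands in for the spin lock */
};

/* Writer: serialized by the lock, so a plain read of count is enough. */
static int add_entry(struct table *t, int value)
{
	pthread_mutex_lock(&t->lock);

	/* Plain (relaxed) load: only lock holders ever modify count. */
	unsigned int count = atomic_load_explicit(&t->count,
						  memory_order_relaxed);
	if (count >= MAX_ENTRIES) {
		pthread_mutex_unlock(&t->lock);
		return -1;
	}

	t->entries[count] = value;

	/*
	 * Release store: the write to entries[count] above is guaranteed
	 * to be visible before any reader observes the incremented count.
	 */
	atomic_store_explicit(&t->count, count + 1, memory_order_release);

	pthread_mutex_unlock(&t->lock);
	return (int)count;
}

/* Lock-free reader: pairs an acquire load with the writer's release store. */
static int lookup(struct table *t, unsigned int index, int *value)
{
	unsigned int count = atomic_load_explicit(&t->count,
						  memory_order_acquire);

	if (index >= count)
		return -1;

	*value = t->entries[index];	/* safe: published before count */
	return 0;
}

int main(void)
{
	struct table t = { .count = 0, .lock = PTHREAD_MUTEX_INITIALIZER };
	int v;

	add_entry(&t, 42);
	if (lookup(&t, 0, &v) == 0)
		printf("entry 0 = %d\n", v);
	return 0;
}

This is the same division of labor the patch describes: the store side keeps
smp_store_release() for the lock-free readers, while the load performed under
the spin lock no longer needs smp_load_acquire().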