Re: [PATCH] selinux: fix race condition when computing ocontext SIDs

On Thu, Aug 5, 2021 at 10:48 PM Paul Moore <paul@xxxxxxxxxxxxxx> wrote:
> On Wed, Jul 28, 2021 at 10:03 AM Ondrej Mosnacek <omosnace@xxxxxxxxxx> wrote:
[...]
> > diff --git a/security/selinux/ss/services.c b/security/selinux/ss/services.c
> > index 0a5ce001609b..c8db8a3432e4 100644
> > --- a/security/selinux/ss/services.c
> > +++ b/security/selinux/ss/services.c
> > @@ -2369,6 +2369,43 @@ err_policy:
> >         return rc;
> >  }
> >
> > +/**
> > + * ocontext_to_sid - Helper to safely get sid for an ocontext
> > + * @sidtab: SID table
> > + * @c: ocontext structure
> > + * @index: index of the context entry (0 or 1)
> > + * @out_sid: pointer to the resulting SID value
> > + *
> > + * For all ocontexts except OCON_ISID the SID fields are populated
> > + * on-demand when needed. Since updating the SID value is an SMP-sensitive
> > + * operation, this helper must be used to do that safely.
> > + *
> > + * WARNING: This function may return -ESTALE, indicating that the caller
> > + * must retry the operation after re-acquiring the policy pointer!
> > + */
> > +static int ocontext_to_sid(struct sidtab *sidtab, struct ocontext *c,
> > +                          size_t index, u32 *out_sid)
> > +{
> > +       int rc;
> > +       u32 sid;
> > +
> > +       /* Ensure the associated sidtab entry is visible to this thread. */
> > +       sid = smp_load_acquire(&c->sid[index]);
>
> Is there a reason why you decided to use smp_{load,store} to guard
> ocontext.sid[] as opposed to RCU/spinlock?  RCU would allow us to
> avoid the memory barrier in smp_load_acquire() on every call to
> ocontext_to_sid() and it looks like all of the non-exclusive callers
> are already in a RCU protected section so the additional impact on an
> already cached value should be next to nothing.  The spinlock would
> make things slightly more complicated (take the lock, recheck
> ocontext.sid, set/bail, unlock, etc.) but we are already in the slow
> path at that point.

I don't see any sane way to use RCU here - what the memory barrier
enforces is the implicit data dependency between the sid field(s) and
the memory regions in sidtab that hold the struct context
corresponding to the SID stored in the field. To cover that with RCU
you'd have to put both the SID value and the sidtab pointer behind
some dynamically allocated structure, and that would just be
horrible...
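
Just to illustrate what I mean, an RCU-based variant would have to
look roughly like this (only a sketch - none of these names exist in
the tree):

/* hypothetical: the SID and the sidtab it refers to, published together */
struct ocon_sid_entry {
        struct sidtab *sidtab;
        u32 sid;
};

/* reader side */
rcu_read_lock();
entry = rcu_dereference(c->sid_entry[index]);
if (entry)
        sid = entry->sid;
rcu_read_unlock();

/* the writer side would have to kmalloc() a new entry, fill it in,
 * publish it via rcu_assign_pointer(), and then also deal with
 * freeing the old one... */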

I assume that by using spinlock you mean something like:

sid = READ_ONCE(c->sid[index]);
if (!sid) {
        spin_lock(...);
        /* recheck under the lock in case another thread won the race */
        sid = c->sid[index];
        if (!sid) {
                rc = sidtab_context_to_sid(sidtab, &c->context[index], &sid);
                if (!rc)
                        WRITE_ONCE(c->sid[index], sid);
        }
        spin_unlock(...);
}

...because taking the spinlock every time would obviously be less
efficient than this patch :)

That would, however, not solve the data dependency problem, so you'd
still need smp_*() instead of *_ONCE() and at that point the spinlock
would be redundant (and you're back to exactly what this patch does).
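
In other words, once you fix the ordering, the spinlock version would
end up looking something like this (again just a sketch):

sid = smp_load_acquire(&c->sid[index]);
if (!sid) {
        spin_lock(...);
        sid = c->sid[index];
        if (!sid) {
                rc = sidtab_context_to_sid(sidtab, &c->context[index], &sid);
                if (!rc)
                        smp_store_release(&c->sid[index], sid);
        }
        spin_unlock(...);
}

...where the lock no longer protects anything that
sidtab_context_to_sid() doesn't already handle internally.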

>
> > +       if (!sid) {
> > +               rc = sidtab_context_to_sid(sidtab, &c->context[index], &sid);
> > +               if (rc)
> > +                       return rc;
> > +
> > +               /*
> > +                * Ensure the new sidtab entry is visible to other threads
> > +                * when they see the SID.
> > +                */
> > +               smp_store_release(&c->sid[index], sid);
> > +       }
> > +       *out_sid = sid;
> > +       return 0;
> > +}
>
> --
> paul moore
> www.paul-moore.com
>


--
Ondrej Mosnacek
Software Engineer, Linux Security - SELinux kernel
Red Hat, Inc.



