Re: [PATCH] selinux: sidtab: reverse lookup hash table

On Wed, Oct 30, 2019 at 9:06 PM Jeffrey Vander Stoep <jeffv@xxxxxxxxxx> wrote:
> On Wed, Oct 30, 2019 at 9:02 PM Stephen Smalley <sds@xxxxxxxxxxxxx> wrote:
> > On 10/30/19 3:07 PM, Jeffrey Vander Stoep wrote:
> > > On Wed, Oct 30, 2019 at 7:57 PM Stephen Smalley <sds@xxxxxxxxxxxxx> wrote:

<snip>

> > >> Wondering if we still need at least a write barrier in the code that
> > >> adds a node prior to hash_add() to ensure the node contents are sane.
> > >
> > > I'm not sure what you mean. Are you concerned about compiler reordering?
> >
> > Compiler or processor.  Are we guaranteed that node->sid and
> > node->context are sane before the node is visible to the readers?  The
> > old sidtab code (before Ondrej's rewrite) used to issue a full wmb()
> > after setting the node->sid, node->context, and node->next prior to
> > linking the node into the list.
>
> Got it, thanks. I'll look into that and include it in v2 if necessary.

Note that the code is still using memory barriers in a similar way.
Specifically, it uses smp_load_acquire() and smp_store_release() to
correctly order reading/writing the count and the content of the
subtree needed to access the corresponding number of entries (see
commit 116f21bb967f ("selinux: avoid atomic_t usage in sidtab")).
These are helper macros that basically just do
READ_ONCE()/WRITE_ONCE() in combination with smp_mb() under the hood.
I think you should be able to use them here as well, but if they turn
out not to be suitable, you can just open code the combination as
needed.

<snip>

-- 
Ondrej Mosnacek <omosnace at redhat dot com>
Software Engineer, Security Technologies
Red Hat, Inc.




