Re: [PATCH v4] selinux: cache the SID -> context string translation

On Fri, Nov 15, 2019 at 1:42 AM Paul Moore <paul@xxxxxxxxxxxxxx> wrote:
> On Mon, Nov 11, 2019 at 10:40 AM Ondrej Mosnacek <omosnace@xxxxxxxxxx> wrote:
> > Translating a context struct to a string can be quite slow, especially
> > if the context has many category bits set. This can cause a quite
> > noticeable performance impact in situations where the translation needs
> > to be done repeatedly. A common example is a UNIX datagram socket with
> > the SO_PASSSEC option enabled, which is used e.g. by systemd-journald
> > when receiving log messages via a datagram socket. This scenario can be
> > reproduced with:
> >
> >     cat /dev/urandom | base64 | logger &
> >     timeout 30s perf record -p $(pidof systemd-journald) -a -g
> >     kill %1
> >     perf report -g none --pretty raw | grep security_secid_to_secctx
> >
> > Before the caching introduced by this patch, computing the context
> > string (the security_secid_to_secctx() function) takes up ~65% of
> > systemd-journald's CPU time (assuming a context with 1024 categories
> > set and the Fedora x86_64 release kernel configs). After this patch
> > (assuming a near-perfect cache hit ratio) this overhead is reduced to
> > just ~2%.
> >
> > This patch addresses the issue by caching a certain number (compile-time
> > configurable) of recently used context strings to speed up repeated
> > translations of the same context, while using only a small amount of
> > memory.
> >
> > The cache is integrated into the existing sidtab table by adding a field
> > to each entry, which, when not NULL, contains an RCU-protected pointer
> > to a cache entry holding the cached string. The cache entries are kept
> > in a linked list sorted by how recently they were used. On a cache miss
> > when the cache is full, the least recently used entry is removed to
> > make space for the new entry.
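
(Aside: the cache entry type itself is not visible in the hunks quoted
below, so here is a rough reconstruction based on how the fields are
used in sidtab_sid2str_put() -- treat the exact layout as approximate:

    struct sidtab_str_cache {
            struct rcu_head rcu_member;  /* for deferred freeing via kfree_rcu() */
            struct list_head lru_member; /* node in the sidtab's LRU list */
            struct sidtab_entry *parent; /* entry whose string is cached */
            u32 len;                     /* length of the cached string */
            char str[];                  /* the context string itself */
    };
)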
> >
> > The patch migrates security_sid_to_context_core() to use the cache
> > (along with a few other functions where it was possible without too
> > much fuss, though these mostly use the translation only for error
> > logging, which is rare).
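
(The lookup side is not quoted below either; roughly, it pairs with the
put path along these lines -- a sketch, so the details may differ from
the actual sidtab.h/services.c hunks:

    /*
     * On a hit, return 0 and hand back a kmemdup'ed copy of the cached
     * string; on a miss, return -ENOENT. Called with no locks held.
     */
    int sidtab_sid2str_get(struct sidtab *s, struct sidtab_entry *entry,
                           char **out, u32 *out_len)
    {
            struct sidtab_str_cache *cache;
            int rc = 0;

            /* do not cache invalid contexts */
            if (entry->context.len)
                    return -ENOENT;

            rcu_read_lock();
            cache = rcu_dereference(entry->cache);
            if (!cache) {
                    rc = -ENOENT;
            } else {
                    *out_len = cache->len;
                    *out = kmemdup(cache->str, cache->len, GFP_ATOMIC);
                    if (!*out)
                            rc = -ENOMEM;
            }
            rcu_read_unlock();

            if (!rc)
                    /* on a hit, also bump the entry in the LRU list */
                    sidtab_sid2str_put(s, entry, *out, *out_len);
            return rc;
    }
)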
> >
> > Link: https://bugzilla.redhat.com/show_bug.cgi?id=1733259
> > Cc: Michal Sekletar <msekleta@xxxxxxxxxx>
> > Signed-off-by: Ondrej Mosnacek <omosnace@xxxxxxxxxx>
> > ---
> >
> > Changes in v4:
> >  - use rcu_dereference_protected() instead of rcu_dereference_raw() in
> >    sidtab_sid2str_put()
> >  - fix typo in comment
> >  - remove unnecessary rcu_head_init() call
> >
> > Changes in v3:
> >  - add rcu@xxxxxxxxxxxxxxx and Paul McKenney to Cc for review of the RCU
> >    logic
> >  - add __rcu annotation to the cache entry pointer (sidtab.c now passes
> >    sparse checks with C=1)
> >
> > Changes in v2:
> >  - skip sidtab_sid2str_put() when in non-task context to prevent a
> >    deadlock, while avoiding the need to take the spinlock with
> >    irqsave/irqrestore (which is slower)
> >
> >  security/selinux/Kconfig       |  11 ++
> >  security/selinux/ss/services.c | 138 +++++++++++++++----------
> >  security/selinux/ss/sidtab.c   | 179 +++++++++++++++++++++++++++------
> >  security/selinux/ss/sidtab.h   |  58 +++++++++--
> >  4 files changed, 294 insertions(+), 92 deletions(-)
>
> ...
>
> > diff --git a/security/selinux/ss/sidtab.c b/security/selinux/ss/sidtab.c
> > index 7d49994e8d5f..6d6ce1c43b49 100644
> > --- a/security/selinux/ss/sidtab.c
> > +++ b/security/selinux/ss/sidtab.c
> > @@ -492,3 +528,88 @@ void sidtab_destroy(struct sidtab *s)
> >
> >         sidtab_destroy_tree(s->roots[level], level);
> >  }
> > +
> > +#if CONFIG_SECURITY_SELINUX_SID2STR_CACHE_SIZE > 0
> > +
> > +void sidtab_sid2str_put(struct sidtab *s, struct sidtab_entry *entry,
> > +                       const char *str, u32 str_len)
> > +{
> > +       struct sidtab_str_cache *cache, *victim;
> > +
> > +       /* do not cache invalid contexts */
> > +       if (entry->context.len)
> > +               return;
> > +
> > +       /*
> > +        * Skip the put operation when in non-task context to avoid the need
> > +        * to disable interrupts while holding s->cache_lock.
> > +        */
> > +       if (!in_task())
> > +               return;
> > +
> > +       spin_lock(&s->cache_lock);
> > +
> > +       cache = rcu_dereference_protected(entry->cache,
> > +                                         lockdep_is_held(&s->cache_lock));
> > +       if (cache) {
> > +               /* entry in cache - just bump to the head of LRU list */
> > +               list_move(&cache->lru_member, &s->cache_lru_list);
> > +               goto out_unlock;
> > +       }
> > +
> > +       cache = kmalloc(sizeof(struct sidtab_str_cache) + str_len, GFP_ATOMIC);
> > +       if (!cache)
> > +               goto out_unlock;
> > +
> > +       if (s->cache_free_slots == 0) {
> > +               /* pop a cache entry from the tail and free it */
> > +               victim = container_of(s->cache_lru_list.prev,
> > +                                     struct sidtab_str_cache, lru_member);
> > +               list_del(&victim->lru_member);
> > +               kfree_rcu(victim, rcu_member);
>
> We could move the kfree_rcu() down to after we drop the spinlock,
> right?  It's likely not a big deal, but since the whole point of this
> patch is performance improvements it seems like it might be nice. ;)

I could be wrong, but I think kfree_rcu() just (always?) appends the
object to the RCU callback list and defers the actual deallocation
until later (and that should be pretty quick). But actually... since
the kfree_rcu() here is not called under the RCU read lock, I should at
least move it below the next line, which still dereferences "victim".
And at that point I might as well move it all the way past the
spin_unlock(), as you suggest...
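
I.e., something like this (an untested sketch):

    struct sidtab_str_cache *cache, *victim = NULL;
    ...
    if (s->cache_free_slots == 0) {
            /* pop a cache entry from the tail; free it after unlock */
            victim = container_of(s->cache_lru_list.prev,
                                  struct sidtab_str_cache, lru_member);
            list_del(&victim->lru_member);
            rcu_assign_pointer(victim->parent->cache, NULL);
    } else {
            s->cache_free_slots--;
    }
    ...
    out_unlock:
            spin_unlock(&s->cache_lock);
            if (victim)
                    kfree_rcu(victim, rcu_member);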

>
> > +               rcu_assign_pointer(victim->parent->cache, NULL);
> > +       } else {
> > +               s->cache_free_slots--;
> > +       }
> > +       cache->parent = entry;
> > +       cache->len = str_len;
> > +       memcpy(cache->str, str, str_len);
> > +       list_add(&cache->lru_member, &s->cache_lru_list);
> > +
> > +       rcu_assign_pointer(entry->cache, cache);
> > +
> > +out_unlock:
> > +       spin_unlock(&s->cache_lock);
> > +}
>
> --
> paul moore
> www.paul-moore.com

-- 
Ondrej Mosnacek <omosnace at redhat dot com>
Software Engineer, Security Technologies
Red Hat, Inc.