Re: [PATCH bpf-next 0/4] Reduce overhead of LSMs with static calls

On Fri, Feb 10, 2023 at 9:03 PM Paul Moore <paul@xxxxxxxxxxxxxx> wrote:
>
> On Thu, Feb 9, 2023 at 11:56 AM Kees Cook <keescook@xxxxxxxxxxxx> wrote:
> > On Fri, Jan 27, 2023 at 03:16:38PM -0500, Paul Moore wrote:
> > > On Thu, Jan 19, 2023 at 6:10 PM KP Singh <kpsingh@xxxxxxxxxx> wrote:
> > > >
> > > > # Background
> > > >
> > > > LSM hooks (callbacks) are currently invoked as indirect function calls. These
> > > > callbacks are registered into a linked list at boot time as the order of the
> > > > LSMs can be configured on the kernel command line with the "lsm=" command line
> > > > parameter.
> > >
> > > Thanks for sending this KP.  I had hoped to make a proper pass through
> > > this patchset this week but I ended up getting stuck trying to wrap my
> > > head around some network segmentation offload issues and didn't quite
> > > make it here.  Rest assured it is still in my review queue.
> > >
> > > However, I did manage to take a quick look at the patches and one of
> > > the first things that jumped out at me is it *looks* like this
> > > patchset is attempting two things: fix a problem where one LSM could
> > > trample another (especially problematic with the BPF LSM due to its
> > > nature), and reduce the overhead of making LSM calls.  I realize that
> > > in this patchset the fix and the optimization are heavily
> > > intermingled, but I wonder what it would take to develop a standalone
> > > fix using the existing indirect call approach?  I'm guessing that is
> > > going to potentially be a pretty significant patch, but if we could
> > > add a little standardization to the LSM hooks without adding too much
> > > in the way of code complexity or execution overhead I think that might
> > > be a win independent of any changes to how we call the hooks.
> > >
> > > Of course this could be crazy too, but I'm the guy who has to ask
> > > these questions :)
> >
> > Hm, I am expecting this patch series to _not_ change any semantics of
> > the LSM "stack". I would agree: nothing should change in this series, as
> > it should be strictly a mechanical change from "iterate a list of
> > indirect calls" to "make a series of direct calls". Perhaps I missed
> > a logical change?
>

There is no logical change in the second patch, which introduces static
calls. There is, however, a logical change in the fourth patch (as you
noticed), which allows some hooks to register themselves as disabled by
default. This reduces the buggy side effects we currently have with the
BPF LSM.
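
To make the mechanical part concrete: today every hook invocation walks
an hlist and makes one indirect call per entry (simplified below from
the call_int_hook() macro in security/security.c), and the series
replaces that walk with a fixed sequence of static call slots. The
second half is only a rough sketch of that direction; the slot names,
the two-slot unroll and the update site are illustrative, not the exact
macros from the patches.

/* Current dispatch: walk the list, one indirect call per registered
 * hook (simplified from call_int_hook() in security/security.c).
 */
#define call_int_hook(FUNC, IRC, ...) ({ \
        int RC = IRC; \
        struct security_hook_list *P; \
        hlist_for_each_entry(P, &security_hook_heads.FUNC, list) { \
                RC = P->hook.FUNC(__VA_ARGS__); \
                if (RC != 0) \
                        break; \
        } \
        RC; \
})

/* Sketch of the static call direction: one slot per possible LSM, each
 * a direct, runtime-patchable call.
 */
static int lsm_slot_default_file_open(struct file *file)
{
        return 0;       /* empty slot: continue */
}

DEFINE_STATIC_CALL(lsm_slot0_file_open, lsm_slot_default_file_open);
DEFINE_STATIC_CALL(lsm_slot1_file_open, lsm_slot_default_file_open);

int security_file_open(struct file *file)
{
        int rc;

        rc = static_call(lsm_slot0_file_open)(file);
        if (rc)
                return rc;
        return static_call(lsm_slot1_file_open)(file);
}

/* At registration time a slot gets patched to the LSM's hook, e.g.
 *      static_call_update(lsm_slot0_file_open, selinux_file_open);
 */

Since the number of possible LSMs is known at compile time, the hot path
becomes a short straight-line sequence of patchable calls instead of a
loop over function pointers.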

> I might be missing something too, but I'm thinking of patch 4/4 in
> this series that starts with this sentence:

Patch 4/4 is the semantic change, but we do need it both for a
performant BPF LSM and to eliminate the side effects.
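
To illustrate what "disabled by default" buys us, here is a rough
sketch of the concept only: it uses a static key rather than the slot
machinery the series actually adds, the key name is made up, and the
direct call into bpf_lsm_task_alloc() is simplified wiring for the
example.

/* The hook is compiled in but inert until a program attaches. */
DEFINE_STATIC_KEY_FALSE(bpf_lsm_task_alloc_active);

int security_task_alloc(struct task_struct *task, unsigned long clone_flags)
{
        int rc;

        /* In-tree LSMs run as before. */
        rc = call_int_hook(task_alloc, 0, task, clone_flags);
        if (rc)
                return rc;

        /* A patched-out branch while nothing is attached, so the mere
         * presence of the BPF hook has neither cost nor side effects.
         */
        if (static_branch_unlikely(&bpf_lsm_task_alloc_active))
                rc = bpf_lsm_task_alloc(task, clone_flags);

        return rc;
}

/* Flipped only when a BPF program actually attaches to the hook:
 *      static_branch_enable(&bpf_lsm_task_alloc_active);
 */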

>
>  "BPF LSM hooks have side-effects (even when a default value is
>   returned), as some hooks end up behaving differently due to
>   the very presence of the hook."
>
> Ignoring the static call changes for a moment, I'm curious what it
> would look like to have a better mechanism for handling things like
> this.  What would it look like if we expanded the individual LSM error
> reporting back to the LSM layer to have a bit more information, e.g.
> "this LSM erred, but it is safe to continue evaluating other LSMs" and
> "this LSM erred, and it was too severe to continue evaluating other

I tried proposing an idea along these lines in
https://patchwork.kernel.org/project/netdevbpf/patch/20220609234601.2026362-1-kpsingh@xxxxxxxxxx/
as LSM_HOOK_NO_EFFECT, but that did not seem to stick.
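
Roughly, the idea was to let a hook advertise up front that it
currently has no effect, so the dispatcher can skip it. Purely as an
illustration (the no_effect field and the added check below are
hypothetical, not the linked patch's actual mechanism), the walk shown
earlier would grow one extra test:

struct security_hook_list {
        struct hlist_node               list;
        struct hlist_head               *head;
        union security_list_options     hook;
        const char                      *lsm;
        bool                            no_effect;  /* hypothetical flag */
};

/* Same walk as before, with one added check. */
#define call_int_hook(FUNC, IRC, ...) ({ \
        int RC = IRC; \
        struct security_hook_list *P; \
        hlist_for_each_entry(P, &security_hook_heads.FUNC, list) { \
                if (READ_ONCE(P->no_effect)) \
                        continue;       /* registered, nothing attached */ \
                RC = P->hook.FUNC(__VA_ARGS__); \
                if (RC != 0) \
                        break; \
        } \
        RC; \
})

The BPF LSM would clear the flag on its entry for a hook only when a
program attaches, so an unused hook keeps the default behaviour instead
of silently changing it.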

> LSMs"?  Similarly, would we want to expand the hook registration to
> have more info, e.g. "run this hook even when other LSMs have failed"
> and "if other LSMs have failed, do not run this hook"?
>
> I realize that loading a BPF LSM is a privileged operation so we've
> largely mitigated the risk there, but with stacking on its way to
> being more full featured, and IMA slowly working its way to proper LSM
> status, it seems to me like having a richer, and proper way to handle
> individual LSM failures would be a good thing.  I feel like patch 4/4
> definitely hints at this, but I could be mistaken.
>
> --
> paul-moore.com
