Re: [PATCH v5 22/23] integrity: Move integrity functions to the LSM infrastructure

On Thu, Nov 16, 2023 at 5:08 AM Roberto Sassu
<roberto.sassu@xxxxxxxxxxxxxxx> wrote:
> On Wed, 2023-11-15 at 23:33 -0500, Paul Moore wrote:
> > On Nov  7, 2023 Roberto Sassu <roberto.sassu@xxxxxxxxxxxxxxx> wrote:

...

> > > +/*
> > > + * Perform the initialization of the 'integrity', 'ima' and 'evm' LSMs to
> > > + * ensure that the management of integrity metadata is working at the time
> > > + * IMA and EVM hooks are registered to the LSM infrastructure, and to keep
> > > + * the original ordering of IMA and EVM functions as when they were hardcoded.
> > > + */
> > >  static int __init integrity_lsm_init(void)
> > >  {
> > > +   const struct lsm_id *lsmid;
> > > +
> > >     iint_cache =
> > >         kmem_cache_create("iint_cache", sizeof(struct integrity_iint_cache),
> > >                           0, SLAB_PANIC, iint_init_once);
> > > +   /*
> > > +    * Obtain either the IMA or EVM LSM ID to register integrity-specific
> > > +    * hooks under that LSM, since there is no LSM ID assigned to the
> > > +    * 'integrity' LSM.
> > > +    */
> > > +   lsmid = ima_get_lsm_id();
> > > +   if (!lsmid)
> > > +           lsmid = evm_get_lsm_id();
> > > +   /* No point in continuing, since both IMA and EVM are disabled. */
> > > +   if (!lsmid)
> > > +           return 0;
> > > +
> > > +   security_add_hooks(integrity_hooks, ARRAY_SIZE(integrity_hooks), lsmid);
> >
> > Ooof.  I understand, or at least I think I understand, why the above
> > hack is needed, but I really don't like the idea of @integrity_hooks
> > jumping between IMA and EVM depending on how the kernel is configured.
> >
> > Just to make sure I'm understanding things correctly, the "integrity"
> > LSM exists to ensure the proper hook ordering between IMA/EVM, shared
> > metadata management for IMA/EVM, and a little bit of a hack to solve
> > some kernel module loading issues with signatures.  Is that correct?
> >
> > I see that patch 23/23 makes some nice improvements to the metadata
> > management, moving them into LSM security blobs, but it appears that
> > they are still shared, and thus the requirement is still there for
> > an "integrity" LSM to manage the shared blobs.
>
> Yes, all is correct.

Thanks for the clarification, more on this below.

> > I'd like to hear everyone's honest opinion on this next question: do
> > we have any hope of separating IMA and EVM so they are independent
> > (ignore the ordering issues for a moment), or are we always going to
> > need to have the "integrity" LSM to manage shared resources, hooks,
> > etc.?
>
> I think it should not be technically difficult to do it. But, it would
> be very important to understand all the implications of doing those
> changes.
>
> Sorry, for now I don't see an immediate need to do that, other than
> solving this LSM naming issue. I tried to find the best solution I
> could.

I first want to say that I think you've done a great job thus far, and
I'm very grateful for the work you've done.  We can always use more
help in the kernel security space and I'm very happy to see your
contributions - thank you :)

I'm concerned about the integrity LSM because it isn't really an LSM;
it is simply an implementation artifact from a time when only one LSM
was enabled.  Now that we have basic support for stacking LSMs, as we
promote integrity/IMA/EVM, I think this is the perfect time to move
away from the "integrity" portion and integrate the necessary
functionality into the IMA and EVM LSMs.  This is even more important
now that we are looking at making the LSMs more visible to userspace
via syscalls; how would you explain to a developer or user the need
for an "integrity" LSM along with the IMA and EVM LSMs?

Let's look at the three things the integrity code provides in this patchset:

* IMA/EVM hook ordering

For better or worse, we have requirements on LSM ordering today that
are enforced only by convention, the BPF LSM being the perfect
example.  As long as we document this in Kconfig I think we are okay.

* Shared metadata

Looking at the integrity_iint_cache struct at the end of your patchset
I see the following:

  struct integrity_iint_cache {
    struct mutex mutex;
    struct inode *inode;
    u64 version;
    unsigned long flags;
    unsigned long measured_pcrs;
    unsigned long atomic_flags;
    enum integrity_status ima_file_status:4;
    enum integrity_status ima_mmap_status:4;
    enum integrity_status ima_bprm_status:4;
    enum integrity_status ima_read_status:4;
    enum integrity_status ima_creds_status:4;
    enum integrity_status evm_status:4;
    struct ima_digest_data *ima_hash;
  };

Now that we are stashing the metadata in the inode, we should be able
to remove the @inode field back pointer.  It seems like we could
duplicate @mutex and @version without problem.
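
To make that concrete, here is a rough sketch of what blob-based
access could look like, following the pattern the other LSMs already
use; the ima_iint_cache and ima_inode() names below are hypothetical,
not something taken from your patchset:

  /*
   * Hypothetical IMA-only inode metadata, carved out of the shared
   * integrity_iint_cache and stored in the LSM inode security blob.
   */
  struct ima_iint_cache {
    struct mutex mutex;    /* duplicated, no longer shared with EVM */
    u64 version;           /* duplicated, no longer shared with EVM */
    unsigned long flags;   /* IMA's own copy of the shared @flags */
    /* ... the remaining IMA-only fields discussed below ... */
  };

  /* Blob space reserved when the IMA LSM registers. */
  static struct lsm_blob_sizes ima_blob_sizes __ro_after_init = {
    .lbs_inode = sizeof(struct ima_iint_cache),
  };

  /* No @inode back pointer needed, the blob hangs off the inode. */
  static inline struct ima_iint_cache *ima_inode(const struct inode *inode)
  {
    return inode->i_security + ima_blob_sizes.lbs_inode;
  }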

I only see the @measured_pcrs and @atomic_flags fields used in the
IMA code.

I only see the @ima_XXX_status fields used in the IMA code, and the
@evm_status field used in the EVM code.

I only see the @ima_hash field used by the IMA code.

I do see both IMA and EVM using the @flags field, but only one case
(IMA_NEW_FILE) where one LSM (EVM) looks at another LSM's (IMA's)
flag.  I'm not sure how difficult that would be to untangle, but I
imagine we could do something here; if we had to, we could make EVM
dependent on IMA in Kconfig and add a function call to check the
inode status.  Although I hope we could find a better solution.

* Kernel module loading hook (integrity_kernel_module_request(...))

My guess is that this is really an IMA hook, but I can't say for
certain.  If it is needed for EVM we could always duplicate it across
the IMA and EVM LSMs; it is trivially small, and one extra strcmp() at
kernel module load time doesn't seem awful to me.
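
For reference, the existing check is essentially the following (from
digsig_asymmetric.c, if I'm reading it right), so a per-LSM copy,
here under a hypothetical ima_kernel_module_request() name, stays
just as small:

  /* Hypothetical per-LSM copy of integrity_kernel_module_request(). */
  static int ima_kernel_module_request(char *kmod_name)
  {
    /*
     * Don't load additional crypto modules on demand while a
     * signature is being verified, to avoid a recursive loop.
     */
    if (strncmp(kmod_name, "crypto-", 7) == 0)
      return -EINVAL;

    return 0;
  }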

-- 
paul-moore.com



