Re: [RFC 3/3] ima: make the integrity inode cache per namespace

On 11/29/21 10:35, Serge E. Hallyn wrote:
On Mon, Nov 29, 2021 at 09:46:55AM -0500, James Bottomley wrote:
On Mon, 2021-11-29 at 15:22 +0100, Christian Brauner wrote:
On Mon, Nov 29, 2021 at 09:10:29AM -0500, James Bottomley wrote:
On Mon, 2021-11-29 at 08:53 -0500, Stefan Berger wrote:
On 11/29/21 07:50, James Bottomley wrote:
On Sun, 2021-11-28 at 22:58 -0600, Serge E. Hallyn wrote:
On Sat, Nov 27, 2021 at 04:45:49PM +0000, James Bottomley
wrote:
Currently we get one entry in the IMA log per unique file
event.  So, if you have a measurement policy and it
measures a particular binary it will not get measured again
if it is subsequently executed. For Namespaced IMA, the
correct behaviour seems to be to log once per inode per
namespace (so every unique execution in a namespace gets a
separate log entry).  Since logging once per inode per
namespace is
I suspect I'll need to do a more in-depth reading of the
existing code, but I'll ask the lazy question anyway (since
you say "the correct behavior seems to be") - is it actually
important that files which were already appraised under a
parent namespace's policy should be logged again?
I think so.  For a couple of reasons, assuming the namespace
eventually gets its own log entries, which the next incremental
patch proposed to do by virtualizing the securityfs
entries.  If you don't do this:
To avoid duplicate efforts, an implementation of a virtualized
securityfs is in this series here:

https://github.com/stefanberger/linux-ima-namespaces/commits/v5.15%2Bimans.20211119.v3

It starts with 'securityfs: Prefix global variables with
secruityfs_'
That's quite a big patch series.  I actually already implemented
this as part of the RFC for getting the per-namespace measurement
log.  The attached is basically what I did.

Most of the time we don't require namespacing the actual virtualfs
file, because it's world readable.  IMA has a special requirement
in this regard because the IMA files should be readable (and
writeable when we get around to policy updates) by the admin of the
namespace, but their protection is 0640 or 0440.  I thought the
simplest solution would be an additional flag that coped with the
permissions and a per-inode way of marking the file as "accessible
by the userns admin".  Doing something simple like this gives a
much smaller diffstat:
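To make the idea concrete, something along these lines (this is only an
illustrative sketch, not the attached patch; the flag and helper names are
invented here, while ns_capable() and current_user_ns() are the existing
kernel interfaces):

/*
 * Illustrative only: a per-inode flag on securityfs files meaning
 * "accessible by the userns admin".  Flag and struct are made up.
 */
struct securityfs_ns_info {
	unsigned long flags;		/* hypothetical per-inode flags */
};

#define SECURITYFS_NS_ADMIN	0x1	/* hypothetical flag value */

/* relax the 0640/0440 permission check for the userns admin only */
static bool securityfs_ns_admin_may_access(const struct securityfs_ns_info *info)
{
	if (!(info->flags & SECURITYFS_NS_ADMIN))
		return false;

	/* existing kernel check: is caller the admin of its user namespace? */
	return ns_capable(current_user_ns(), CAP_SYS_ADMIN);
}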
That's a NAK from me. Stefan's series might be bigger but it does
things correctly. I appreciate the keep-it-simple attitude but no. I
won't special-case securityfs or similar stuff in core vfs helpers.
Well, there's a reason it's an unpublished patch.  However, the more
important point is that namespacing IMA requires discussion of certain
points that we never seem to drive to a conclusion.  Using the akpm
method, I propose simple patches that drive the discussion.  I think
the points are:

    1. Should IMA be its own namespace or tied to the user namespace?  The
       previous patches all took the separate namespace approach, but I
       think that should be reconsidered now that keyrings are in the user
       namespace.
Well, that purely depends on the needed scope.

The audit container identifier is a neat thing.  But it absolutely must
be settable, so it seems to conflict with your needs.

Your patch puts an identifier on the user_namespace.  I'm not quite sure:
does that satisfy Stefan's needs?  A new IMA ns if and only if there is a
new user ns?

I think you two need to get together and discuss the requirements, and come
back with a brief but very precise document explaining what you need.

What would those who look at audit messages actually want? I don't know. Would they want a constant identifier for IMA audit messages in the audit log across all restarts of a container? Presumably that would make quick queries across restarts much easier. Or could they live with an audit message emitted by the container runtime indicating that, this time, the (IMA) audit messages from this container will carry this particular UUID?

I guess both would 'work.'


Are you both looking at the same use case?  Who is consuming the audit
log, and to what end?  Container administrators?  Any time they log in?
How do they assure themselves that the securityfs file they're reading
hasn't been overmounted?

The question is also whether there should be only one identifier or whether there can be two different ones (one from the audit patch series and the UUID of the user namespace).



I need to find a document to read about IMA's usage of PCRs.  For
namespacing, are you expecting each container to be hooked up to a
swtpm instance so they have their own PCR they can use?

It's complicated and there's a bit more to this... I would try to architect it in a way that the IMA system policy can cover what's going on inside IMA namespaces, i.e., audit, measure and appraise file accesses occurring in those namespaces. We call it hierarchical processing ( https://github.com/stefanberger/linux-ima-namespaces/commit/e88dc84ec97753fd65d302ee1bf03951001ab48f ), where file accesses are evaluated against the current namespace's policy and then also against those of the parent namespaces, back to the init_ima_ns. The goal is to avoid evasion of measurements etc. by the user simply spawning new IMA namespaces.

I think logging into the IMA system log will not scale well if there are hundreds of containers on the system using IMA, all logging into the system log and hammering the TPM. So the answer then is: write your policy in such a way that it doesn't cover the IMA/user namespaces (containers), and have each container keep its own IMA policy, its own IMA log, and an optional vTPM. So my answer would be 'optional swtpm.'
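Roughly, the hierarchical processing amounts to the sketch below (illustrative
only; the struct layout and helper name are assumptions of this sketch, not
the code from the linked series):

struct file;

/* illustrative layout; the series keeps policy, log, etc. per namespace */
struct ima_namespace {
	struct ima_namespace *parent;	/* NULL for init_ima_ns */
	/* per-namespace policy and measurement list ... */
};

/* hypothetical helper: evaluate one file access against one namespace's policy */
int ima_ns_process_measurement(struct ima_namespace *ns, struct file *file);

static int ima_process_measurement_hierarchical(struct ima_namespace *ns,
						struct file *file)
{
	int rc = 0;

	/*
	 * Evaluate the access against the current namespace and then against
	 * every ancestor up to init_ima_ns, so spawning a new IMA namespace
	 * cannot be used to evade the parents' (or the system's) policy.
	 */
	for (; ns; ns = ns->parent) {
		int r = ima_ns_process_measurement(ns, file);

		if (r && !rc)
			rc = r;		/* remember the first failure */
	}
	return rc;
}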

   Stefan





