Re: [PATCH v4 00/14] security: digest_cache LSM

On Tue, 2024-06-18 at 19:20 -0400, Paul Moore wrote:
> On Mon, Apr 15, 2024 at 10:25 AM Roberto Sassu
> <roberto.sassu@xxxxxxxxxxxxxxx> wrote:
> > 
> > From: Roberto Sassu <roberto.sassu@xxxxxxxxxx>
> > 
> > Integrity detection and protection has long been a desirable feature, to
> > reach a large user base and mitigate the risk of flaws in the software
> > and attacks.
> > 
> > However, while solutions exist, they struggle to reach the large user
> > base, due to requiring higher than desired constraints on performance,
> > flexibility and configurability, that only security conscious people are
> > willing to accept.
> > 
> > This is where the new digest_cache LSM comes into play, it offers
> > additional support for new and existing integrity solutions, to make
> > them faster and easier to deploy.
> > 
> > The full documentation with the motivation and the solution details can be
> > found in patch 14.
> > 
> > The IMA integration patch set will be introduced separately. Also a PoC
> > based on the current version of IPE can be provided.
> 
> I'm not sure we want to implement a cache as a LSM.  I'm sure it would
> work, but historically LSMs have provided some form of access control,
> measurement, or other traditional security service.  A digest cache,
> while potentially useful for a variety of security related
> applications, is not a security service by itself, it is simply a file
> digest storage mechanism.

Uhm, currently the digest_cache LSM is heavily based on the LSM
infrastructure:

static struct security_hook_list digest_cache_hooks[] __ro_after_init = {
	LSM_HOOK_INIT(inode_alloc_security, digest_cache_inode_alloc_security),
	LSM_HOOK_INIT(inode_free_security, digest_cache_inode_free_security),
	LSM_HOOK_INIT(path_truncate, digest_cache_path_truncate),
	LSM_HOOK_INIT(file_release, digest_cache_file_release),
	LSM_HOOK_INIT(inode_unlink, digest_cache_inode_unlink),
	LSM_HOOK_INIT(inode_rename, digest_cache_inode_rename),
	LSM_HOOK_INIT(inode_post_setxattr, digest_cache_inode_post_setxattr),
	LSM_HOOK_INIT(inode_post_removexattr,
		      digest_cache_inode_post_removexattr),
};

struct lsm_blob_sizes digest_cache_blob_sizes __ro_after_init = {
	.lbs_inode = sizeof(struct digest_cache_security),
	.lbs_file = sizeof(struct digest_cache *),
};

Sure, there could be a different indexing mechanism, although using the
inode security blob seems quite efficient, since resolving the path is
sufficient to find a digest cache.
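
Roughly (this is a simplified sketch, not the exact helper in the patches), the per-inode data is reached with the usual LSM blob-offset pattern, the same way e.g. SELinux does it:

/*
 * Simplified sketch: fetch this LSM's per-inode data from the security
 * blob reserved by the LSM infrastructure at the lbs_inode offset.
 */
static inline struct digest_cache_security *
digest_cache_inode(const struct inode *inode)
{
	if (!inode->i_security)
		return NULL;

	return inode->i_security + digest_cache_blob_sizes.lbs_inode;
}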

Also, registering to inode_alloc/free_security allows the digest_cache
LSM to dynamically deallocate data when it is no longer necessary. In
addition to that, there are a number of hooks to determine whether a
digest cache should be refreshed or not.
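
As a simplified sketch (field names below are just for illustration,
the real hooks carry more state), the alloc/free pair only sets up and
tears down the per-inode blob, so the digest cache an inode points to
goes away together with the inode:

static int digest_cache_inode_alloc_security(struct inode *inode)
{
	struct digest_cache_security *dig_sec = digest_cache_inode(inode);

	/* Blob memory itself is preallocated by the LSM infrastructure. */
	mutex_init(&dig_sec->lock);			/* illustrative field */
	return 0;
}

static void digest_cache_inode_free_security(struct inode *inode)
{
	struct digest_cache_security *dig_sec = digest_cache_inode(inode);

	/* Drop our reference when the inode is evicted. */
	if (dig_sec->dig_cache)				/* illustrative field */
		digest_cache_put(dig_sec->dig_cache);	/* illustrative helper */
}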

In the past, this functionality was part of IMA, known as IMA Digest
Lists, and later a separate module called DIGLIM.

Both required explicitly loading into the kernel, through securityfs,
the digests extracted from files. Loading was done by an rpm plugin,
invoked when software is installed/removed.

That didn't look like a good idea. DIGLIM does not know when the system
is under memory pressure and when digests can be evicted from memory.
All digests needed to be loaded upfront, leading to a big in-memory
database.

I think this shortcoming has now been effectively solved by attaching
the digests to the filesystem. Digests are always there, loadable on
demand, unloadable by the system under memory pressure.

> I think it's fine if an individual LSM wants to implement a file
> digest cache as part of its own functionality, but a generalized file
> digest cache seems like something that should be part of the general
> kernel, and not implemented as a LSM.

If we keep the same design as now, it would anyway be connected to the
filesystem, but reusing the LSM infrastructure makes it very easy, as I
don't require any changes anywhere else.
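
The registration itself is the usual LSM boilerplate (roughly;
digest_cache_init() stands in here for the actual init routine):

/* Standard LSM registration, reusing the blob sizes declared above. */
DEFINE_LSM(digest_cache) = {
	.name = "digest_cache",
	.init = digest_cache_init,
	.blobs = &digest_cache_blob_sizes,
};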

Sure, it is not doing access control, but I haven't found another good
way to achieve the same result. Do you have anything more specific in
mind?

Thanks

Roberto





