Re: [RFC PATCH 1/2] LSM: Allow dynamically appendable LSM modules.

On 9/27/2023 8:08 AM, Tetsuo Handa wrote:
> Recently, the LSM community has been trying to make drastic changes.

I'd call them "significant" or "important" rather than "drastic".

> Crispin Cowan has explained
>
>   It is Linus' comments that spurred me to want to start this undertaking.  He
>   observes that there are many different security approaches, each with their own
>   advocates.  He doesn't want to arbitrate which of them should be "the" Linux
>   security approach, and would rather that Linux can support any of them.
>
>   That is the purpose of this project:  to allow Linux to support a variety of
>   security models, so that security developers don't have to have the "my dog's
>   bigger than your dog" argument, and users can choose the security model that
>   suits their needs.
>
> when the LSM project started [1].
>
> However, Casey Schaufler is trying to make it difficult for users to choose
> the security model that suits their needs, by requiring an LSM ID value that
> is assigned only to LSM modules that have been accepted in-tree [2].

This statement is demonstrably false, and I'm tired of hearing it.

> Therefore, I'm asking Casey and Paul Moore to change their minds and allow
> assigning an LSM ID value to any LSM module (so that users can choose the
> security model that suits their needs) [3].
>
> I expect that an LSM ID value will be assigned to any publicly available LSM
> module. Otherwise, proposing this patch is mostly pointless; there would be
> few LSM modules to build into vmlinux, let alone to load dynamically as
> LKM-based LSMs.
>
> Also, KP Singh is trying to replace the linked list with static calls in
> order to reduce the overhead of indirect calls [4]. However, that change
> assumes that all LSM modules are built-in. I don't like that assumption,
> because I still believe that users will want LSM modules that are not built
> into vmlinux [5].
>
> Then, Casey told me to supply my implementation of loadable security
> modules [6]. Therefore, I am posting this patch with the basic changes needed
> to allow dynamically appendable LSM modules (and an example of an appendable
> LSM module). This patch has been tested only on x86_64.

Thank you for doing so. I will be mostly offline for the next few weeks,
and will review the proposal fully on my return. I will provide some
initial feedback below.

> My question for KP Singh would be: how can we allow dynamically appendable
> LSM modules if the current linked list is replaced with static calls backed
> by a minimal-sized array?
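
For context on that question: the static-call approach dedicates a fixed,
compile-time number of slots per hook, one per possible LSM. The rough sketch
below is not KP Singh's actual series; the names, the slot count and the
file_open example are purely illustrative of why a module that arrives after
boot has no slot to claim.

#include <linux/fs.h>
#include <linux/static_call.h>

/* An empty slot simply allows; a built-in LSM claims a slot during boot. */
static int lsm_slot_unused(struct file *file)
{
	return 0;
}

/* The number of slots is fixed when vmlinux is built. */
DEFINE_STATIC_CALL(lsm_file_open_0, lsm_slot_unused);
DEFINE_STATIC_CALL(lsm_file_open_1, lsm_slot_unused);

static int call_file_open_slots(struct file *file)
{
	int rc;

	rc = static_call(lsm_file_open_0)(file);
	if (rc)
		return rc;
	return static_call(lsm_file_open_1)(file);
}

A module loaded after boot would need either a spare slot reserved in advance
or a fallback list such as the one this patch keeps, which is the heart of the
question above.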
>
> Link: https://marc.info/?l=linux-security-module&m=98706471912438&w=2 [1]
> Link: https://lkml.kernel.org/r/20230912205658.3432-2-casey@xxxxxxxxxxxxxxxx [2]
> Link: https://lkml.kernel.org/r/6e1c25f5-b78c-8b4e-ddc3-484129c4c0ec@xxxxxxxxxxxxxxxxxxx [3]
> Link: https://lkml.kernel.org/r/20230918212459.1937798-1-kpsingh@xxxxxxxxxx [4]
> Link: https://lkml.kernel.org/r/ed785c86-a1d8-caff-c629-f8a50549e05b@xxxxxxxxxxxxxxxxxxx [5]
> Link: https://lkml.kernel.org/r/36c7cf74-508f-1690-f86a-bb18ec686fcf@xxxxxxxxxxxxxxxx [6]
> Signed-off-by: Tetsuo Handa <penguin-kernel@xxxxxxxxxxxxxxxxxxx>
> ---
>  include/linux/lsm_hooks.h |   2 +
>  security/security.c       | 107 ++++++++++++++++++++++++++++++++++++++
>  2 files changed, 109 insertions(+)
>
> diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h
> index dcb5e5b5eb13..73db3c41df26 100644
> --- a/include/linux/lsm_hooks.h
> +++ b/include/linux/lsm_hooks.h
> @@ -105,6 +105,8 @@ extern char *lsm_names;
>  
>  extern void security_add_hooks(struct security_hook_list *hooks, int count,
>  				const char *lsm);
> +extern int register_loadable_lsm(struct security_hook_list *hooks, int count,
> +				 const char *lsm);
>  
>  #define LSM_FLAG_LEGACY_MAJOR	BIT(0)
>  #define LSM_FLAG_EXCLUSIVE	BIT(1)
> diff --git a/security/security.c b/security/security.c
> index 23b129d482a7..6c64b7afb251 100644
> --- a/security/security.c
> +++ b/security/security.c
> @@ -74,6 +74,7 @@ const char *const lockdown_reasons[LOCKDOWN_CONFIDENTIALITY_MAX + 1] = {
>  };
>  
>  struct security_hook_heads security_hook_heads __ro_after_init;
> +EXPORT_SYMBOL_GPL(security_hook_heads);

Why disrupt the protection of security_hook_heads? You could easily add

struct security_hook_heads security_loadable_hook_heads;
EXPORT_SYMBOL_GPL(security_loadable_hook_heads);

and add the loaded hooks there. A system that does not use loadable
modules would be unaffected by the ability to load modules.

>  static BLOCKING_NOTIFIER_HEAD(blocking_lsm_notifier_chain);
>  
>  static struct kmem_cache *lsm_file_cache;
> @@ -537,6 +538,112 @@ void __init security_add_hooks(struct security_hook_list *hooks, int count,
>  	}
>  }
>  
> +#if defined(CONFIG_STRICT_KERNEL_RWX)
> +#define MAX_RO_PAGES 1024 /* Wild guess. Can be minimized by dynamic allocation. */
> +static struct page *ro_pages[MAX_RO_PAGES]; /* Pages that are marked read-only. */
> +static unsigned int ro_pages_len; /* Number of pages that are marked read-only. */
> +
> +/* Check whether the page containing the given address lacks the _PAGE_BIT_RW bit. */
> +static bool lsm_test_page_ro(void *addr)
> +{
> +	unsigned int i;
> +	int unused;
> +	struct page *page;
> +
> +	page = (struct page *) lookup_address((unsigned long) addr, &unused);
> +	if (!page)
> +		return false;
> +	if (test_bit(_PAGE_BIT_RW, &(page->flags)))
> +		return true;
> +	for (i = 0; i < ro_pages_len; i++)
> +		if (page == ro_pages[i])
> +			return true;
> +	if (ro_pages_len == MAX_RO_PAGES)
> +		return false;
> +	ro_pages[ro_pages_len++] = page;
> +	return true;
> +}
> +
> +/* Find pages which do not have _PAGE_BIT_RW bit. */
> +static bool check_ro_pages(struct security_hook_list *hooks, int count)
> +{
> +	int i;
> +	struct hlist_head *list = &security_hook_heads.capable;
> +
> +	if (!copy_to_kernel_nofault(list, list, sizeof(void *)))
> +		return true;
> +	for (i = 0; i < count; i++) {
> +		struct hlist_head *head = hooks[i].head;
> +		struct security_hook_list *shp;
> +
> +		if (!lsm_test_page_ro(&head->first))
> +			return false;
> +		hlist_for_each_entry(shp, head, list)
> +			if (!lsm_test_page_ro(&shp->list.next) ||
> +			    !lsm_test_page_ro(&shp->list.pprev))
> +				return false;
> +	}
> +	return true;
> +}
> +#endif

I'm not an expert on modern memory management, but I think introducing
security_loadable_hook_heads would make these functions unnecessary.
Please educate me if I'm wrong.
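
To make that concrete, here is a minimal sketch of what I have in mind,
assuming a separate security_loadable_hook_heads that is never write-protected
and a module hook table whose ->head fields point into it (via some
LSM_HOOK_INIT-style macro for loadable modules; that macro does not exist
today). With that, registration never has to touch read-only pages:

struct security_hook_heads security_loadable_hook_heads;
EXPORT_SYMBOL_GPL(security_loadable_hook_heads);

int register_loadable_lsm(struct security_hook_list *hooks, int count,
			  const char *lsm)
{
	int i;

	/* hooks[i].head already points into security_loadable_hook_heads. */
	for (i = 0; i < count; i++) {
		hooks[i].lsm = lsm;
		hlist_add_tail_rcu(&hooks[i].list, hooks[i].head);
	}
	/* lsm_names bookkeeping omitted for brevity. */
	return 0;
}
EXPORT_SYMBOL_GPL(register_loadable_lsm);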

> +
> +/**
> + * register_loadable_lsm - Add a dynamically appendable module's hooks to the hook lists.
> + * @hooks: the hooks to add
> + * @count: the number of hooks to add
> + * @lsm: the name of the security module
> + *
> + * Each dynamically appendable LSM has to register its hooks with the infrastructure.
> + *
> + * Assumes that this function is called from a module_init() function, where
> + * calls to this function are already serialized by the module_mutex lock.
> + */
> +int register_loadable_lsm(struct security_hook_list *hooks, int count,
> +			  const char *lsm)
> +{
> +	int i;
> +	char *cp;
> +
> +	// TODO: Check whether proposed hooks can co-exist with already chained hooks,
> +	//       and bail out here if any of the hooks cannot co-exist...
> +
> +#if defined(CONFIG_STRICT_KERNEL_RWX)
> +	// Find pages which need to be made temporarily writable.
> +	ro_pages_len = 0;
> +	if (!check_ro_pages(hooks, count)) {
> +		pr_err("Can't make security_hook_heads writable again. Retry with the rodata=off kernel command line option.\n");
> +		return -EINVAL;
> +	}
> +	pr_info("ro_pages_len=%d\n", ro_pages_len);
> +#endif
> +	// At least "capability" is already included.
> +	cp = kasprintf(GFP_KERNEL, "%s,%s", lsm_names, lsm);
> +	if (!cp) {
> +		pr_err("%s - Cannot get memory.\n", __func__);
> +		return -ENOMEM;
> +	}
> +#if defined(CONFIG_STRICT_KERNEL_RWX)
> +	// Make security_hook_heads (and the hooks chained to it) temporarily writable.
> +	for (i = 0; i < ro_pages_len; i++)
> +		set_bit(_PAGE_BIT_RW, &(ro_pages[i]->flags));
> +#endif
> +	// Register dynamically appendable module's hooks.
> +	for (i = 0; i < count; i++) {
> +		hooks[i].lsm = lsm;
> +		hlist_add_tail_rcu(&hooks[i].list, hooks[i].head);
> +	}
> +#if defined(CONFIG_STRICT_KERNEL_RWX)
> +	// Make security_hook_heads (and the hooks chained to it) read-only again.
> +	for (i = 0; i < ro_pages_len; i++)
> +		clear_bit(_PAGE_BIT_RW, &(ro_pages[i]->flags));
> +#endif
> +	// TODO: Wait for reader side before kfree().
> +	kfree(lsm_names);
> +	lsm_names = cp;
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(register_loadable_lsm);

Most of this code seems unnecessary if you use security_loadable_hook_heads.

There would need to be additions in security.c to invoke the hooks in the new
list, but that would be straightforward. Locking is another matter. I don't see
it addressed here, and I fear that it might have a prohibitive performance
impact. Again, I'm not an expert on locking, so you'll need to seek advice
elsewhere.
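
For illustration, a hook call site would have to walk both lists, roughly as
below (file_open chosen arbitrarily; this is a sketch living next to the
existing call sites in security.c, not the real security_file_open(), and how
the loadable list is protected against module unload is exactly the locking
question I raised):

static int security_file_open_sketch(struct file *file)
{
	struct security_hook_list *hp;
	int rc;

	hlist_for_each_entry(hp, &security_hook_heads.file_open, list) {
		rc = hp->hook.file_open(file);
		if (rc)
			return rc;
	}
	/* Protection against concurrent unload is the unsolved part. */
	hlist_for_each_entry(hp, &security_loadable_hook_heads.file_open, list) {
		rc = hp->hook.file_open(file);
		if (rc)
			return rc;
	}
	return 0;
}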

On a less happy note, you haven't addressed security blobs in any way. You
need to provide a mechanism to allow an LSM to share security blobs with
builtin LSMs and other loadable LSMs.
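
For reference, built-in LSMs share blobs by declaring their per-object sizes at
build time; the infrastructure adds them up and converts each declared size
into that LSM's offset within a single allocation before any object exists.
The sketch below shows that build-time side with hypothetical "demo" names; a
module loaded after boot has no equivalent step, which is the gap that needs a
design.

#include <linux/lsm_hooks.h>

/* Hypothetical per-object state for a "demo" built-in LSM. */
struct demo_cred_blob  { u32 state; };
struct demo_inode_blob { u32 label; };

static struct lsm_blob_sizes demo_blob_sizes __ro_after_init = {
	.lbs_cred  = sizeof(struct demo_cred_blob),
	.lbs_inode = sizeof(struct demo_inode_blob),
};

static int __init demo_init(void)
{
	/* security_add_hooks(...) would go here for a built-in LSM. */
	return 0;
}

DEFINE_LSM(demo) = {
	.name  = "demo",
	.init  = demo_init,
	.blobs = &demo_blob_sizes,
};

One possible direction would be reserving spare blob space for loadable
modules at boot, but that trades memory for flexibility and still needs an
allocation protocol among loaded modules.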

> +
>  int call_blocking_lsm_notifier(enum lsm_event event, void *data)
>  {
>  	return blocking_notifier_call_chain(&blocking_lsm_notifier_chain,
