Re: [PATCH v3 seccomp 2/5] seccomp/cache: Add "emulator" to check if filter is constant allow

On Thu, Oct 01, 2020 at 06:52:50AM -0500, YiFei Zhu wrote:
> On Wed, Sep 30, 2020 at 5:40 PM Kees Cook <keescook@xxxxxxxxxxxx> wrote:
> > The guiding principle with seccomp's designs is to always make things
> > _more_ restrictive, never less. While we can never escape the
> > consequences of having seccomp_is_const_allow() report the wrong
> > answer, we can at least follow the basic principles, hopefully
> > minimizing the impact.
> >
> > When the bitmap starts with "always allowed" and we only flip it towards
> > "run full filters", we're only ever making things more restrictive. If
> > we instead go from "run full filters" towards "always allowed", we run
> > the risk of making things less restrictive. For example: a process that
> > maliciously adds a filter that the emulator mistakenly evaluates to
> > "always allow" doesn't suddenly cause all the prior filters to stop running.
> > (i.e. this isolates the flaw outcome, and doesn't depend on the early
> > "do not emulate if we already know we have to run filters" case before
> > the emulation call: there is no code path that allows the cache to
> > weaken: it can only maintain it being wrong).
> >
> > Without any seccomp filter installed, all syscalls are "always allowed"
> > (from the perspective of the seccomp boundary), so the default of the
> > cache needs to be "always allowed".
> 
> I cannot follow this. If a 'process that maliciously adds a filter
> that the emulator mistakenly evaluates to "always allow" doesn't
> suddenly cause all the prior filters to stop running', hence, you
> want, by default, the cache to be as transparent as possible. You
> would lift the restriction if and only if you are absolutely sure it
> does not cause an impact.

Yes, right now, the v3 code pattern is entirely safe.

> 
> In this patch, if there are prior filters, it goes through this logic:
> 
>         if (bitmap_prev && !test_bit(nr, bitmap_prev))
>             continue;
> 
> Hence, if the malicious filter were to happen, and prior filters were
> supposed to run, then seccomp_is_const_allow is simply not invoked --
> what it returns cannot be used maliciously by an adversary.

Right, but we depend on that test always doing the correct thing (and
continuing to do so into the future). I'm looking at this from the
perspective of future changes, maintenance, etc. I want the actions to
match the design principles as closely as possible so that future
evolutions of the code have a lower risk of bugs causing security
failures. Right now, the code is simple. I want to design this so that
when it is complex, it will still fail toward safety in the face of
bugs.

> >         if (bitmap_prev) {
> >                 /* The new filter must be as restrictive as the last. */
> >                 bitmap_copy(bitmap, bitmap_prev, bitmap_size);
> >         } else {
> >                 /* Before any filters, all syscalls are always allowed. */
> >                 bitmap_fill(bitmap, bitmap_size);
> >         }
> >
> >         for (nr = 0; nr < bitmap_size; nr++) {
> >                 /* No bitmap change: not a cacheable action. */
> >                 if (!test_bit(nr, bitmap_prev))
> >                         continue;
> >
> >                 /* No bitmap change: continue to always allow. */
> >                 if (seccomp_is_const_allow(fprog, &sd))
> >                         continue;
> >
> >                 /* Not a cacheable action: always run filters. */
> >                 clear_bit(nr, bitmap);
> 
> I'm not strongly against this logic. I just feel unconvinced that this
> is any different with a slightly increased complexity.

I'd prefer this approach because, within the loop, the tests and results
only ever make the bitmap more restrictive. The worst thing a bug in here
can do is leave the bitmap unchanged (which is certainly bad), but it
can't _undo_ an earlier restriction.

The proposed loop's leading test_bit() becomes only an optimization,
rather than being required for policy enforcement.

In other words, I prefer:

	inherit all prior bitmap restrictions
	for all syscalls
		if this filter not restricted
			continue
		set bitmap restricted

	within this loop (where the bulk of future logic may get added),
	the worst-case future bug-induced failure mode for the syscall
	bitmap is "skip *this* filter".


Instead of:

	set bitmap all restricted
	for all syscalls
		if previous bitmap not restricted and
		   filter not restricted
			set bitmap unrestricted

	within this loop the worst-case future bug-induced failure mode
	for the syscall bitmap is "skip *all* filters".




Or, to reword again, this:

	retain restrictions from previous caching decisions
	for all syscalls
		[evaluate this filter, maybe continue]
		set restricted

instead of:

	set new cache to all restricted
	for all syscalls
		[evaluate prior cache and this filter, maybe continue]
		set unrestricted

I expect the future code changes for caching to be in the "evaluate"
step, so I'd like the code designed to make things MORE restrictive not
less from the start, and remove any prior cache state tests from the
loop.
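To make the contrast concrete, here is a minimal userspace model (a
sketch, not the kernel code: NR_SC, the plain bool arrays, and
filter_is_const_allow() stand in for the real bitmaps and
seccomp_is_const_allow()). Each loop takes a flag that drops its leading
test, modeling a future bug. Dropping the test from the clear-only loop
changes nothing; dropping the bitmap_prev test from the fill-then-set
loop re-allows a syscall that prior filters had restricted:

```c
#include <stdbool.h>
#include <string.h>

#define NR_SC 8	/* toy syscall count */

/* Stand-in for seccomp_is_const_allow(): pretend this filter
 * constant-allows syscalls 0..3 only. */
static bool filter_is_const_allow(int nr)
{
	return nr < 4;
}

/* Preferred shape: inherit the prior bitmap, then only clear bits.
 * @with_opt toggles the leading "already restricted" test. */
static void update_clear_only(bool *bm, const bool *prev, bool with_opt)
{
	int nr;

	if (prev)
		memcpy(bm, prev, sizeof(*bm) * NR_SC);
	else
		for (nr = 0; nr < NR_SC; nr++)
			bm[nr] = true;

	for (nr = 0; nr < NR_SC; nr++) {
		/* Optimization only: the result is the same without it. */
		if (with_opt && !bm[nr])
			continue;
		if (filter_is_const_allow(nr))
			continue;
		bm[nr] = false;		/* always run filters */
	}
}

/* v3 shape: start all-restricted, set bits back to allowed.
 * @with_check toggles the bitmap_prev test. */
static void update_fill(bool *bm, const bool *prev, bool with_check)
{
	int nr;

	for (nr = 0; nr < NR_SC; nr++)
		bm[nr] = false;

	for (nr = 0; nr < NR_SC; nr++) {
		if (with_check && prev && !prev[nr])
			continue;
		if (filter_is_const_allow(nr))
			bm[nr] = true;	/* always allow */
	}
}
```

With a prior bitmap that restricts syscall 2, the clear-only loop keeps
that restriction whether or not its leading test survives; the
fill-then-set loop keeps it only as long as the bitmap_prev check does.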

At the end of the day, I believe changing the design like this now lays
the groundwork for a caching mechanism that is more robust against
future bugs introducing security flaws.

-- 
Kees Cook


