On Sat, Jul 20, 2024 at 02:43:41PM +0300, Jarkko Sakkinen wrote:
> On Sat Jul 20, 2024 at 4:59 AM EEST, Andy Lutomirski wrote:
> > > On Jul 18, 2024, at 8:22 PM, Mickaël Salaün <mic@xxxxxxxxxxx> wrote:
> > >
> > > On Thu, Jul 18, 2024 at 09:02:56AM +0800, Andy Lutomirski wrote:
> > >>>> On Jul 17, 2024, at 6:01 PM, Mickaël Salaün <mic@xxxxxxxxxxx> wrote:
> > >>>
> > >>> On Wed, Jul 17, 2024 at 09:26:22AM +0100, Steve Dower wrote:
> > >>>>> On 17/07/2024 07:33, Jeff Xu wrote:
> > >>>>> Consider those cases. I think:
> > >>>>> a> relying purely on userspace for enforcement doesn't seem to be
> > >>>>> effective, e.g. it is trivial to call open(), then mmap() it into
> > >>>>> executable memory.
> > >>>>
> > >>>> If there's a way to do this without running executable code that had
> > >>>> to pass a previous execveat() check, then yeah, it's not effective
> > >>>> (e.g. a Python interpreter that *doesn't* enforce execveat() is a
> > >>>> trivial way to do it).
> > >>>>
> > >>>> Once arbitrary code is running, all bets are off. So long as all
> > >>>> arbitrary code is being checked itself, it's allowed to do things
> > >>>> that would bypass later checks (and it's up to whoever audited it in
> > >>>> the first place to prevent this by not giving it the special mark
> > >>>> that allows it to pass the check).
> > >>>
> > >>> Exactly. As explained in the patches, one crucial prerequisite is that
> > >>> the executable code is trusted, and the system must provide integrity
> > >>> guarantees. We cannot do anything without that. This patch series is
> > >>> a building block to fix a blind spot on Linux systems and to be able
> > >>> to fully control executability.
> > >>
> > >> Circling back to my previous comment (did that ever get noticed?), I
> > >
> > > Yes, I replied to your comments. Did I miss something?
> >
> > I missed that email in the pile, sorry. I'll reply separately.
> >
> > >
> > >> don't think this is quite right:
> > >>
> > >> https://lore.kernel.org/all/CALCETrWYu=PYJSgyJ-vaa+3BGAry8Jo8xErZLiGR3U5h6+U0tA@xxxxxxxxxxxxxx/
> > >>
> > >> On a basic system configuration, a given path either may or may not be
> > >> executed. And maybe that path has some integrity check (dm-verity,
> > >> etc). So the kernel should tell the interpreter/loader whether the
> > >> target may be executed. All fine.
> > >>
> > >> But I think the more complex cases are more interesting, and the
> > >> "execute a program" process IS NOT BINARY. An attempt to execute can
> > >> be rejected outright, or it can be allowed *with a change to creds or
> > >> security context*. It would be entirely reasonable to have a policy
> > >> that allows execution of non-integrity-checked files but in a very
> > >> locked down context only.
> > >
> > > I guess you mean to transition to a sandbox when executing an untrusted
> > > file. This is a good idea. I talked about role transition in the
> > > patch's description:
> > >
> > > With the information that a script interpreter is about to interpret a
> > > script, an LSM security policy can adjust the caller's access rights or
> > > log the execution request, as for native script execution (e.g. role
> > > transition). This is possible thanks to the call to
> > > security_bprm_creds_for_exec().
> >
> > …
> >
> > > This patch series brings the minimal building blocks to have a
> > > consistent execution environment. Role transitions for script execution
> > > are left to LSMs. For instance, we could extend Landlock to
> > > automatically sandbox untrusted scripts.
> >
> > I'm not really convinced. There's more to building an API that enables
> > LSM hooks than merely sticking the hook somewhere in kernel code. It
> > needs to be a defined API. If you call an operation "check", then
> > people will expect it to check, not to change the caller's credentials.
> > And people will mess it up in both directions (e.g. callers will call
> > it and then try to load some library that they should have loaded
> > first, or callers will call it and forget to close fds first).
> >
> > And there should probably be some interaction with dumpable as well.
> > If I "check" a file for executability, that should not suddenly allow
> > someone to ptrace me?
> >
> > And callers need to know to exit on failure, not carry on.
> >
> > More concretely, a runtime that fully opts in to this may well "check"
> > multiple things. For example, if I do:
> >
> > $ ld.so ~/.local/bin/some_program (i.e. I literally execve ld.so)
> >
> > then ld.so will load several things:
> >
> > ~/.local/bin/some_program
> > libc.so
> > other random DSOs, some of which may well be in my home directory
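
To make the expected calling convention concrete, here is a minimal
sketch of the interpreter-side check being discussed. AT_CHECK below is
only a placeholder for the execveat(2) flag proposed by this series (its
final name, value, and semantics may still change), and syscall(2) is
used directly to avoid relying on a libc wrapper:

/*
 * Minimal sketch only: a script interpreter asking the kernel whether a
 * file may be executed before interpreting it.  Nothing is executed by
 * this call; it is only a permission check.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef AT_CHECK
#define AT_CHECK 0x10000        /* placeholder value, for illustration only */
#endif

static int may_execute(int fd)
{
        return syscall(__NR_execveat, fd, "", NULL, NULL,
                       AT_EMPTY_PATH | AT_CHECK);
}

int main(int argc, char *argv[])
{
        int fd;

        if (argc != 2) {
                fprintf(stderr, "usage: %s <script>\n", argv[0]);
                return EXIT_FAILURE;
        }
        fd = open(argv[1], O_RDONLY | O_CLOEXEC);
        if (fd < 0) {
                perror("open");
                return EXIT_FAILURE;
        }
        if (may_execute(fd)) {
                /* Exit on failure, do not fall back to interpreting the file. */
                perror("execution check");
                return EXIT_FAILURE;
        }
        /* ... read from fd and interpret the script ... */
        return EXIT_SUCCESS;
}

This also reflects your point above: a caller is expected to treat a
check failure as fatal instead of carrying on.
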
> What would really help to comprehend this patch set would be a set of
> test scripts, preferably something that you can run easily with
> BuildRoot or similar.
>
> Scripts would demonstrate the use cases for the patch set. Then it
> would be easier to develop scripts that would underline the corner
> cases. I would keep all this out of kselftest shenanigans for now.

I'll include a toy script interpreter with the next patch series. This
one was an RFC.

> I feel that the patch set is hovering in abstractions with examples
> that you cannot execute.
>
> I added the patches to my standard test CI hack:
>
> https://codeberg.org/jarkko/linux-tpmdd-test
>
> But after I booted up a kernel I had no idea what to do with it. And
> all this lengthy discussion makes it even more confusing.

You can run the tests in the CI.

> Please find some connection to the real world before sending any new
> version of this (e.g. via test scripts). I think this should not be
> pulled before almost anyone doing kernel dev can comprehend the "gist"
> at least at some reasonable level.

You'll find connections to the real world in this patch series (cover
letter, patch descriptions, and comments). :)

The next patch series should take into account the current discussions.

> BR, Jarkko
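
P.S. For completeness, the open()-then-mmap() pattern Jeff refers to at
the top of the thread is roughly the snippet below. This is why trusted,
integrity-checked running code is a prerequisite for this series: as
Steve put it, once arbitrary code is running, all bets are off. Rough
sketch, which assumes the mapped file is raw position-independent code:

/*
 * Rough sketch of the bypass discussed above: arbitrary running code can
 * map any readable file with PROT_EXEC and jump into it, without going
 * through any execveat() check.
 */
#include <fcntl.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
        struct stat st;
        void (*entry)(void);
        void *map;
        int fd;

        if (argc != 2)
                return EXIT_FAILURE;
        fd = open(argv[1], O_RDONLY | O_CLOEXEC);
        if (fd < 0 || fstat(fd, &st) < 0)
                return EXIT_FAILURE;
        /* Only the usual mmap() permission checks happen here. */
        map = mmap(NULL, st.st_size, PROT_READ | PROT_EXEC, MAP_PRIVATE, fd, 0);
        if (map == MAP_FAILED)
                return EXIT_FAILURE;
        /* Assumes the file contains raw position-independent code. */
        entry = (void (*)(void))map;
        entry();
        return EXIT_SUCCESS;
}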