Re: [PATCH] binderfs: implement sysctls

On Fri, Dec 21, 2018 at 03:12:42PM +0100, Christian Brauner wrote:
> On Fri, Dec 21, 2018 at 02:55:09PM +0100, Greg KH wrote:
> > On Fri, Dec 21, 2018 at 02:39:09PM +0100, Christian Brauner wrote:
> > > This implements three sysctls that have very specific goals:
> > 
> > Ick, why?
> > 
> > What are these going to be used for?  Who will "control" them?  As you
> 
> Only global root in the initial user namespace. See the reasons below. :)
> 
> > are putting them in the "global" namespace, that feels like something
> > that binderfs was trying to avoid in the first place.
> 
> There are a couple of reasons imho:
> - Global root needs a way to restrict how many binder devices can be
>   allocated across all user + ipc namespace pairs.
>   One obvious reason is that otherwise userns root in a non-initial user
>   namespace can allocate a huge number of binder devices (pick a random
>   number, say 10,000) and use up a lot of kernel memory.

Root can do tons of other bad things too, why are you picking on
binderfs here?  :)

>   In addition they can pound on the binder.c code, causing a lot of
>   contention for the remaining global lock in there.

That's the problem of that container, don't let it do that.  Or remove
the global lock :)

>   We should let global root explicitly restrict non-initial namespaces
>   in this respect. Imho, that's just good security design. :)

If you do not trust your container enough to have it properly allocate
the correct binder resources, then perhaps you shouldn't be allowing it
to allocate any resources at all?

> - The reason for having a number of reserved devices is for when the
>   initial binderfs mount needs to bump the number of binder devices
>   after the initial allocation done during, say, boot (e.g. it could've
>   removed devices and wants to reallocate new ones but all binder minor
>   numbers have been given out, or it just needs additional devices). By
>   reserving an initial pool of binder devices this can be easily
>   accounted for and future-proofs userspace. This is to say: global
>   root in the initial userns + ipcns gets dibs on however many devices
>   it wants. :)

binder devices do not "come and go" at runtime, you need to set them up
initially and then all is fine.  So there should never be a need for the
"global" instance to need "more" binder devices once it is up and
running.  So I don't see what you are really trying to solve here.

You seem to be trying to protect the system from the container you just
gave root to and trusted it with creating its own binder instances.
If you do not trust it to create binder instances then do not allow it
to create binder instances!  :)

> - The fact that we have a single shared pool of binder device minor
>   numbers for all namespaces imho makes it necessary for the global root
>   user in the initial ipc + user namespace to manage device allocation
>   and delegation.

You are managing the allocation: you are giving whoever asks for one a
device.  If you run out of devices, oops, you run out of devices, that's
it.  Are you really ever going to run out of an entire major's worth of
binder devices?

> The binderfs sysctl stuff is really small code-wise and adds a lot of
> security without any performance impact on the code itself. So we
> actually very strictly adhere to the requirement to not blindly
> sacrifice performance for security. :)

But you are adding a brand-new user/kernel API by emulating one that is
very old and not the best at all, to try to protect against something
that it seems you can't really "protect" from in the first place.

You now have a mismatch of sysctls, ioctls, and file operations all
working on the same logical thing.  And all interacting in different and
uncertain ways.  Are you sure that's wise?

If the binderfs code as-is isn't "safe enough" to use without this, then
we need to revisit it before someone starts to use it...

thanks,

greg k-h
_______________________________________________
devel mailing list
devel@xxxxxxxxxxxxxxxxxxxxxx
http://driverdev.linuxdriverproject.org/mailman/listinfo/driverdev-devel


