Hi,

Sorry to come to this so late; I've been meaning to provide feedback
on this for a while, but have been indisposed for a bit due to an
injury.

On Fri, Sep 25, 2020 at 11:50:29AM +0200, Peter Zijlstra wrote:
> On Fri, Sep 25, 2020 at 11:00:30AM +0200, David Hildenbrand wrote:
> > On 25.09.20 09:41, Peter Zijlstra wrote:
> > > On Thu, Sep 24, 2020 at 04:29:03PM +0300, Mike Rapoport wrote:
> > >> From: Mike Rapoport <rppt@xxxxxxxxxxxxx>
> > >>
> > >> Removing a PAGE_SIZE page from the direct map every time such page is
> > >> allocated for a secret memory mapping will cause severe fragmentation of
> > >> the direct map. This fragmentation can be reduced by using PMD-size pages
> > >> as a pool for small pages for secret memory mappings.
> > >>
> > >> Add a gen_pool per secretmem inode and lazily populate this pool with
> > >> PMD-size pages.
> > >
> > > What's the actual efficacy of this? Since the pmd is per inode, all I
> > > need is a lot of inodes and we're in business to destroy the directmap,
> > > no?
> > >
> > > Afaict there's no privs needed to use this, all a process needs is to
> > > stay below the mlock limit, so a 'fork-bomb' that maps a single secret
> > > page will utterly destroy the direct map.
> > >
> > > I really don't like this, at all.
> >
> > As I expressed earlier, I would prefer allowing allocation of secretmem
> > only from a previously defined CMA area. This would physically locally
> > limit the pain.
>
> Given that this thing doesn't have a migrate hook, that seems like an
> eminently reasonable constraint. Because not only will it mess up the
> directmap, it will also destroy the ability of the page-allocator /
> compaction to re-form high order blocks by sprinkling holes throughout.
>
> Also, this is all very close to XPFO, yet I don't see that mentioned
> anywhere.

Agreed.

I think that if we really need something like this, something between
XPFO and DEBUG_PAGEALLOC would be generally better, since:

* Secretmem puts userspace in charge of kernel internals (AFAICT
  without any ulimits?), so it seems like an avenue for malicious or
  buggy userspace to exploit and trigger DoS, etc. The other
  approaches leave the kernel in charge at all times, and it's a
  system-level choice, which is easier to reason about and test.

* Secretmem's interaction with existing ABIs is unclear. Should
  uaccess primitives work for secretmem? If so, it is not valid to
  transform direct uaccesses in syscalls etc. into accesses via the
  linear/direct map. If not, how do we prevent syscalls? The other
  approaches are clear that uaccess should always work, but that the
  kernel should avoid other mappings wherever possible. (The sketch
  below tries to make this concrete.)

* The uncached option doesn't work in a number of situations, such as
  systems which are purely cache-coherent at all times, or where a
  hypervisor has overridden attributes. The kernel cannot even know
  whether this works as intended. On its own this doesn't solve a
  particular problem, and I think this is a solution looking for a
  problem.

... and fundamentally, this seems like a "more security, please"
option that is going to be abused, since everyone wants security,
regardless of how we say it *should* be used. The few use-cases that
may make sense (e.g. protection of keys and/or crypto secrets) aren't
going to be able to rely on this (since e.g. other uses may deplete
the memory pools), so this is going to be best-effort.

With all that in mind, I struggle to believe that this is going to be
worth the maintenance cost (e.g. with any issues arising from uaccess,
IO, etc.).
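To make the uaccess concern concrete, consider the following sketch.
This is illustrative only; it is not code from the series, and the
helper is made up:

#include <linux/mm.h>
#include <linux/string.h>
#include <linux/uaccess.h>

/*
 * Hypothetical example: 'uaddr' points into a secretmem mapping.
 * If uaccess is meant to work for secretmem, case 1 must succeed;
 * but then case 2, a transformation the kernel can legitimately
 * make today, is silently broken, because the page has been
 * removed from the direct map.
 */
static int demo_read_user(const void __user *uaddr)
{
	char buf[16];
	struct page *page;

	/* Case 1: access via the user mapping; no direct map needed. */
	if (copy_from_user(buf, uaddr, sizeof(buf)))
		return -EFAULT;

	/* Case 2: pin the page and access it through its linear map
	 * alias. For secretmem that alias is unmapped, so the
	 * memcpy() below faults (assuming GUP is even permitted for
	 * secretmem, which is itself an open ABI question). */
	if (get_user_pages_fast((unsigned long)uaddr, 1, 0, &page) != 1)
		return -EFAULT;
	memcpy(buf, page_address(page), sizeof(buf));
	put_page(page);

	return 0;
}

Either answer to "should uaccess work?" makes one of these two cases
an error, and that needs to be a documented part of the ABI.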
Overall, I would prefer not to see this syscall in the kernel.

> Further still, it has this HAVE_SECRETMEM_UNCACHED nonsense which is
> completely unused. I'm not at all sure exposing UNCACHED to random
> userspace is a sane idea.

I agree the uncached stuff should be removed. It is at best
misleading, since the kernel can't guarantee it does what it says; I
think it's liable to lead to issues in future (e.g. since it can
cause memory operations to raise different exceptions relative to
what they can today); and, as above, it seems like a solution looking
for a problem.

Thanks,
Mark.