On Fri, Feb 07, 2020 at 08:39:09PM +0300, Kirill A. Shutemov wrote:
> On Thu, Feb 06, 2020 at 06:59:00PM +0200, Mike Rapoport wrote:
> >
> > Restricted mappings in the kernel mode may improve mitigation of hardware
> > speculation vulnerabilities and minimize the damage exploitable kernel bugs
> > can cause.
> >
> > There are several ongoing efforts to use restricted address spaces in
> > Linux kernel for various use cases:
> > * speculation vulnerabilities mitigation in KVM [1]
> > * support for memory areas visible only in a single owning context, or more
> >   generically, memory areas with more restrictive protection than the
> >   defaults ("secret" memory) [2], [3], [4]
> > * hardening of the Linux containers [ no reference yet :) ]
> >
> > Last year we had vague ideas and possible directions, this year we have
> > several real challenges and design decisions we'd like to discuss:
> >
> > * "Secret" memory userspace APIs
> >
> >   Should such an API follow "native" MM interfaces like mmap(), mprotect(),
> >   madvise() or would it be better to use a file descriptor, e.g. like
> >   memfd_create() does?
>
> I don't really see a point in such a file descriptor. It is supposed to be
> very private secret data. What functionality provided by a file descriptor
> do you see as valuable in this scenario?
>
> A file descriptor makes it easier to spill the secrets to another process:
> over fork(), a UNIX socket or via /proc/PID/fd/.

On the other hand, it may be desirable to share a secret between several
processes. Then a UNIX socket or fork() actually become handy.

> > MM "native" APIs would require a VM_something flag and probably a page
> > flag or page_ext. With a file descriptor, VM_SPECIAL and custom
> > implementations of .mmap() and .fault() would suffice. On the other hand,
> > mmap() and mprotect() seem a better fit semantically and they could be
> > more easily adopted by the userspace.
>
> You mix up implementation and interface.
> You can provide an interface which doesn't require a file descriptor, but
> still use a magic file internally to make the VMA distinct.

If I understand correctly, if we go with the mmap(MAP_SECRET) example, the
mmap() would implicitly create a magic file with its .mmap() and .fault()
implementing the protection?

That's a possibility. But then, if we already have a file, why not let the
user get a handle for it and allow fine grained control over its sharing
between processes?

> > * Direct/linear map fragmentation
> >
> >   Whenever we want to drop some mappings from the direct map or even
> >   change the protection bits for some memory area, the gigantic and huge
> >   pages that comprise the direct map need to be broken and there's no THP
> >   for the kernel page tables to collapse them back. Moreover, the
> >   existing APIs defined in <asm/set_memory.h> by several architectures do
> >   not really presume they would be widely used.
> >
> >   For the "secret" memory use case the fragmentation can be minimized by
> >   caching large pages, using them to satisfy smaller "secret" allocations
> >   and then collapsing them back once the "secret" memory is freed.
> >   Another possibility is to pre-allocate physical memory at boot time.
>
> I would rather go with the pre-allocation path. At least at first. We can
> always come up with a more dynamic and complicated solution later if the
> interface is widely adopted.

We still must manage the "secret" allocations, so I don't think that the
dynamic solution will be much more complicated.

> > Yet another idea is to make the page allocator aware of the direct map
> > layout.
> >
> > * Kernel page table management
> >
> >   Currently we presume that only one kernel page table exists (well,
> >   mostly) and the page table abstraction is required only for the user
> >   page tables. As such, we presume that 'page table == struct mm_struct'
> >   and the mm_struct is used all over by the operations that manage the
> >   page tables.
> >   The management of the restricted address space in the kernel requires
> >   the ability to create, update and remove kernel contexts the same way
> >   we do for the userspace.
> >
> >   One way is to overload the mm_struct, like EFI and text poking did.
> >   But it is quite an overkill, because most of the mm_struct contains
> >   information required to manage user mappings.
>
> In what way is it overkill? Just memory overhead? How many such contexts
> do you expect to see in the system?

Well, the memory overhead is not that big, but it's not negligible. For the
KVM ASI use case, for instance, there will be at least as many contexts as
running VMs. We also have thoughts about how to make namespaces use
restricted address spaces; for that use case there will be quite a lot of
such contexts.

Besides, it does not feel right to have the mm_struct represent a page
table.

> >   My suggestion is to introduce a first class abstraction for the page
> >   table, and then it could be used in the same way for user and kernel
> >   context management. For now I have a very basic POC that split several
> >   fields from the mm_struct into a new 'struct pg_table' [5]. This new
> >   abstraction can be used e.g. by the PTI implementation of page table
> >   cloning and the KVM ASI work.
> >
> >
> > [1] https://lore.kernel.org/lkml/1557758315-12667-1-git-send-email-alexandre.chartre@xxxxxxxxxx/
> > [2] https://lore.kernel.org/lkml/20190612170834.14855-1-mhillenb@xxxxxxxxx/
> > [3] https://lore.kernel.org/lkml/1572171452-7958-1-git-send-email-rppt@xxxxxxxxxx/
> > [4] https://lore.kernel.org/lkml/20200130162340.GA14232@rapoport-lnx/
> > [5] https://git.kernel.org/pub/scm/linux/kernel/git/rppt/linux.git/log/?h=pg_table/v0.0
>
> --
>  Kirill A. Shutemov

--
Sincerely yours,
Mike.