[RFC PATCH v2 0/5] mm: extend memfd with ability to create "secret" memory areas

From: Mike Rapoport <rppt@xxxxxxxxxxxxx>

Hi,

This is the second version of the "secret" mappings implementation backed by a
file descriptor.

The file descriptor is created using the memfd_create() syscall with a new
MFD_SECRET flag. The file descriptor should then be configured using ioctl()
to define the desired protection, and an mmap() of the fd will create a
"secret" memory mapping. The pages in that mapping will be marked as not
present in the direct map and will have the desired protection bits set in
the user page table. For instance, the current implementation allows uncached
mappings.
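To make the intended flow concrete, here is a minimal userspace sketch. Note
that the MFD_SECRET value and the MFD_SECRET_UNCACHED ioctl name/number below
are placeholders I made up for illustration, not the definitions from the
patches; the real uapi additions are in the series:

/*
 * Illustrative only: MFD_SECRET and MFD_SECRET_UNCACHED below are
 * placeholders, not copied from the patches.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MFD_SECRET
#define MFD_SECRET		0x0008U				/* placeholder */
#endif
#ifndef MFD_SECRET_UNCACHED
#define MFD_SECRET_UNCACHED	_IOW('-', 0x14, unsigned long)	/* placeholder */
#endif

int main(void)
{
	size_t len = 4096;
	char *p;
	int fd;

	/* memfd_create() needs glibc >= 2.27, otherwise go via syscall(2) */
	fd = memfd_create("secret", MFD_SECRET);
	if (fd < 0) {
		perror("memfd_create");
		return 1;
	}

	/* pick the desired protection before the first mmap() of the fd */
	if (ioctl(fd, MFD_SECRET_UNCACHED, 0) < 0) {
		perror("ioctl");
		return 1;
	}

	if (ftruncate(fd, len) < 0) {
		perror("ftruncate");
		return 1;
	}

	p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* the backing pages are not mapped in the kernel direct map */
	strcpy(p, "top secret");

	munmap(p, len);
	close(fd);
	return 0;
}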

Hiding secret memory mappings behind an anonymous file allows (ab)use of
the page cache for tracking pages allocated for the "secret" mappings as
well as using address_space_operations for e.g. page migration callbacks.

The anonymous file may also be used implicitly, like hugetlb files, to
implement mmap(MAP_SECRET) and to use the secret memory areas with "native"
mm ABIs.

As the fragmentation of the direct map was one of the major concerns raised
during the previous postings, I've added an amortizing cache of PMD-size
pages to each file descriptor, as well as the ability to reserve large chunks
of physical memory at boot time and then use that memory as an allocation
pool for the secret memory areas.
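To illustrate the caching scheme, here is a simplified sketch of the idea
(not the code from the patches; the helper name and the use of genalloc are
mine): PMD-size chunks are allocated and removed from the direct map as a
whole, and the per-file pool then serves page-sized allocations from them:

/*
 * Simplified, illustrative sketch of the amortizing cache idea: refill a
 * per-file genalloc pool with a PMD-size chunk that is dropped from the
 * direct map in one go, so that page-sized allocations for the secret
 * mapping do not split the direct map one 4K page at a time.
 */
#include <linux/mm.h>
#include <linux/genalloc.h>
#include <linux/huge_mm.h>
#include <asm/set_memory.h>

static int secretmem_pool_refill(struct gen_pool *pool)
{
	struct page *page;
	unsigned long addr;
	int err;

	/* one PMD-size (2M on x86) physically contiguous chunk */
	page = alloc_pages(GFP_KERNEL, HPAGE_PMD_ORDER);
	if (!page)
		return -ENOMEM;

	addr = (unsigned long)page_address(page);

	/* unmap the whole chunk from the direct map at once (x86) */
	err = set_memory_np(addr, 1 << HPAGE_PMD_ORDER);
	if (err) {
		__free_pages(page, HPAGE_PMD_ORDER);
		return err;
	}

	/* page-sized allocations are then served by gen_pool_alloc() */
	return gen_pool_add(pool, addr, PMD_SIZE, NUMA_NO_NODE);
}

In such a scheme, freed pages would go back to the pool rather than to the
page allocator, and the direct map entries would only be restored once an
entire PMD-size chunk becomes free again.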

In addition, I've tried to find some numbers that show the benefit of using
larger pages in the direct map, but I couldn't find anything, so I've run a
couple of benchmarks from phoronix-test-suite on my laptop (i7-8650U with
32G RAM).

I've tested three variants: the default, with 28G of the physical memory
covered with 1G pages; 2M pages, with 1G pages disabled via "nogbpages" on
the kernel command line; and 4K pages for the entire direct map, forced with
a simple patch to arch/x86/mm/init.c.
I've run the benchmarks with both SSD and tmpfs.

Surprisingly, the results do not show a huge advantage for large pages. For
instance, here are the results for a kernel build with 'make -j8', in seconds:

                        |  1G    |  2M    |  4K
------------------------+--------+--------+---------
ssd, mitigations=on     | 308.75 | 317.37 | 314.9
ssd, mitigations=off    | 305.25 | 295.32 | 304.92
ram, mitigations=on     | 301.58 | 322.49 | 306.54
ram, mitigations=off    | 299.32 | 288.44 | 310.65

All the results I have are available at [1].
If anybody is interested in plain text, please let me know.

[1] https://docs.google.com/spreadsheets/d/1tdD-cu8e93vnfGsTFxZ5YdaEfs2E1GELlvWNOGkJV2U/edit?usp=sharing

Mike Rapoport (5):
  mm: make HPAGE_PxD_{SHIFT,MASK,SIZE} always available
  mmap: make mlock_future_check() global
  mm: extend memfd with ability to create "secret" memory areas
  mm: secretmem: use PMD-size pages to amortize direct map fragmentation
  mm: secretmem: add ability to reserve memory at boot

 include/linux/huge_mm.h    |  10 +-
 include/linux/memfd.h      |   9 +
 include/uapi/linux/magic.h |   1 +
 include/uapi/linux/memfd.h |   6 +
 mm/Kconfig                 |   3 +
 mm/Makefile                |   1 +
 mm/internal.h              |   3 +
 mm/memfd.c                 |  10 +-
 mm/mmap.c                  |   5 +-
 mm/secretmem.c             | 445 +++++++++++++++++++++++++++++++++++++
 10 files changed, 480 insertions(+), 13 deletions(-)
 create mode 100644 mm/secretmem.c


base-commit: 7c30b859a947535f2213277e827d7ac7dcff9c84
-- 
2.26.2



