Re: [PATCH v1 2/3] selftests/memfd_secret: add vmsplice() test

On 26.03.24 13:32, David Hildenbrand wrote:
On 26.03.24 07:17, Mike Rapoport wrote:
Hi David,

On Mon, Mar 25, 2024 at 02:41:13PM +0100, David Hildenbrand wrote:
Let's add a simple reproducer for a scenario where GUP-fast could succeed
on secretmem folios, making vmsplice() succeed instead of failing. The
reproducer is based on a reproducer [1] by Miklos Szeredi.

Perform the ftruncate() only once, and check the return value.

For some reason, vmsplice() reliably fails (making the test succeed) when
we move the test_vmsplice() call after test_process_vm_read() /
test_ptrace().
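
For context, the scenario being reproduced boils down to roughly the following
standalone program (an illustration only, not the selftest code itself; it
assumes __NR_memfd_secret is available in the installed headers and that
secretmem is enabled on the running kernel):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
        long page_size = sysconf(_SC_PAGESIZE);
        int fd = syscall(__NR_memfd_secret, 0);
        int pipefd[2];
        struct iovec iov;
        char *mem;

        if (fd < 0 || ftruncate(fd, page_size) || pipe(pipefd))
                return 1;

        mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (mem == MAP_FAILED)
                return 1;
        memset(mem, 0x55, page_size);   /* fault in the secretmem page */

        iov.iov_base = mem;
        iov.iov_len = page_size;

        /* vmsplice() must pin the page; on secretmem that has to fail. */
        if (vmsplice(pipefd[1], &iov, 1, 0) > 0)
                printf("BUG: vmsplice() succeeded on secretmem\n");
        else
                printf("ok: vmsplice() failed as expected\n");
        return 0;
}
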

That's because the ftruncate() call was in test_remote_access(), and you need
it to mmap secretmem.

I don't think that's the reason. I reshuffled the code a couple of times
without luck.

And in fact, even executing the vmsplice() test twice results in the
second iteration succeeding on an old kernel (6.7.4-200.fc39.x86_64).

ok 1 mlock limit is respected
ok 2 file IO is blocked as expected
not ok 3 vmsplice is blocked as expected
ok 4 vmsplice is blocked as expected
ok 5 process_vm_read is blocked as expected
ok 6 ptrace is blocked as expected

Note that the mmap()+memset() succeeded. So the secretmem pages should be in the page table.


Even weirder, if I simply mmap()+memset()+munmap() secretmem *once* beforehand, the test passes:

diff --git a/tools/testing/selftests/mm/memfd_secret.c b/tools/testing/selftests/mm/memfd_secret.c
index 0acbdcf8230e..7a973ec6ac8f 100644
--- a/tools/testing/selftests/mm/memfd_secret.c
+++ b/tools/testing/selftests/mm/memfd_secret.c
@@ -96,6 +96,14 @@ static void test_vmsplice(int fd)
                  return;
          }
+       mem = mmap(NULL, page_size, prot, mode, fd, 0);
+       if (mem == MAP_FAILED) {
+               fail("Unable to mmap secret memory\n");
+               goto close_pipe;
+       }
+       memset(mem, PATTERN, page_size);
+       munmap(mem, page_size);
+
          mem = mmap(NULL, page_size, prot, mode, fd, 0);
          if (mem == MAP_FAILED) {
                  fail("Unable to mmap secret memory\n");

ok 1 mlock limit is respected
ok 2 file IO is blocked as expected
ok 3 vmsplice is blocked as expected
ok 4 process_vm_read is blocked as expected
ok 5 ptrace is blocked as expected


... could it be that munmap()+mmap() will end up turning these pages into LRU pages?

Okay, now I am completely confused.

secretmem_fault() calls filemap_add_folio(), which should turn this into an LRU page.

So secretmem pages should always be LRU pages ... unless we're batching in the LRU cache and haven't done the lru_add_drain() yet ...

And likely, the munmap() will drain the lru cache and turn the page into an LRU page.

Okay, I'll go check whether that's the case. If so, relying on the page being LRU vs. not LRU in GUP-fast is unreliable and should be dropped.
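
One rough way to observe that from userspace (just a sketch; it needs to run
as root so /proc/self/pagemap exposes real PFNs) is to check the KPF_LRU bit
for the mapped page via /proc/kpageflags:

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/* Sketch: return 1 if the page backing 'addr' currently has KPF_LRU set. */
static int page_is_lru(void *addr)
{
        long page_size = sysconf(_SC_PAGESIZE);
        uint64_t entry, flags, pfn;
        int ret = -1;
        int pagemap = open("/proc/self/pagemap", O_RDONLY);
        int kpageflags = open("/proc/kpageflags", O_RDONLY);

        if (pagemap < 0 || kpageflags < 0)
                goto out;

        /* pagemap: one u64 per virtual page; bit 63 = present, bits 0-54 = PFN. */
        if (pread(pagemap, &entry, sizeof(entry),
                  (uintptr_t)addr / page_size * sizeof(entry)) != sizeof(entry))
                goto out;
        if (!(entry & (1ULL << 63)))
                goto out;
        pfn = entry & ((1ULL << 55) - 1);

        /* kpageflags: one u64 of KPF_* bits per PFN; KPF_LRU is bit 5. */
        if (pread(kpageflags, &flags, sizeof(flags),
                  pfn * sizeof(flags)) != sizeof(flags))
                goto out;
        ret = !!(flags & (1ULL << 5));
out:
        if (pagemap >= 0)
                close(pagemap);
        if (kpageflags >= 0)
                close(kpageflags);
        return ret;
}

Calling that right after the memset() and again after an operation that
triggers an LRU drain should show whether the folio was still sitting in the
per-CPU batch when vmsplice() succeeded.
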

--
Cheers,

David / dhildenb




