On 28.02.24 14:34, Quentin Perret wrote:
On Wednesday 28 Feb 2024 at 14:00:50 (+0100), David Hildenbrand wrote:
To add a layer of paint to the shed, the usage of SIGBUS for
something that is really a permission access problem doesn't feel
SIGBUS stands for "BUS error (bad memory access)."
Which makes sense if you try to access something that can no longer be
accessed: it's now inaccessible, even if only temporarily.
Just like a page with an MCE error. Swapin errors. Etc. You cannot access
it.
It might be a permission problem on the pKVM side, but it's not the
traditional "permission problem" as in mprotect() and friends. You cannot
resolve that permission problem yourself. It's a higher entity that turned
that memory inaccessible.
Well, that's where I'm not sure I agree. Userspace can, in fact, get
back all of that memory by simply killing the protected VM. With the
Right, but that would likely "wipe" the pages so they can be made accessible
again, right?
Yep, indeed.
That's the whole point why we are handing the pages over to the "higher
entity", and allow someone else (the VM) to turn them into a state where we
can no longer read them.
(if you follow the other discussion, it would actually be nice if we could
read them and would get encrypted content back, like s390x does; but that's
a different discussion and I assume pretty much out of scope :) )
Interesting, I'll read up. On a side note, I'm also considering adding a
guest-facing hypervisor interface to let the guest decide to opt out of
the hypervisor wipe as discussed above. That would be useful for a guest
that is shutting itself down (which could be cooperating with the host
Linux) and that knows it has erased its secrets. That is in general
difficult for an OS to do, but a simple approach could be to poison all
its memory (or maybe encrypt it?) before opting out of that wipe.
The hypervisor wipe is done in hypervisor context (obviously), which is
non-preemptible, so avoiding wiping (or encrypting) loads of memory
there is highly desirable. Also pKVM doesn't have a linear map of all
memory for security reasons, so we need to map/unmap the pages one by
one, which sucks as much as it sounds.
But yes, we're digressing, that is all for later :)
:) Sounds like an interesting optimization.
An alternative would be to remember in pKVM that a page needs a wipe
before reaccess. Once re-accessed by anybody (hypervisor or new guest),
it first has to be wiped by pKVM.
... but that also sounds complicated, and it similarly requires pKVM to
map+unmap pages one by one as they are reused. But at least a guest
shutdown would be faster.
approach suggested here, the guestmem pages are entirely accessible to
the host until they are attached to a running protected VM which
triggers the protection. It is very much userspace saying "I promise not
to touch these pages from now on" when it does that, in a way that I
personally find very comparable to the mprotect case. It is not some
other entity that pulls the carpet from under userspace's feet, it is
userspace being inconsistent with itself that causes the issue here, and
that's why SIGBUS feels kinda wrong as it tends to be used to report
external errors of some sort.
I recall that user space can also trigger SIGBUS when doing some
mmap()+truncate() thingies, and probably a bunch more, that could be fixed
up later.
Right, so that probably still falls into the "there is no page" bucket
rather than the "there is a page that is already accounted against the
userspace process, but it doesn't have the permission to access it"
bucket. But yes, that's probably an infinite debate.
Yes, we should rather focus on the bigger idea: have inaccessible memory
that fails at page-fault time instead of at mmap() time.
I don't see a problem with SIGBUS here, but I do understand your view. I
consider the exact signal a minor detail, though.
appropriate. Allocating memory via guestmem and donating that to a
protected guest is a way for userspace to voluntarily relinquish access
permissions to the memory it allocated. So a userspace process violating
that could, IMO, reasonably expect a SEGV instead of SIGBUS. By the
point that signal would be sent, the page would have been accounted
against that userspace process, so not sure the paging examples that
were discussed earlier are exactly comparable. To illustrate that
differently, given that pKVM and Gunyah use MMU-based protection, there
is nothing architecturally that prevents a guest from sharing a page
back with Linux as RO.
Sure, then allow page faults for reads and deliver a signal only on
write faults.
In that scenario it even makes more sense not to constantly require new
mmap()s from user space just to access a now-shared page.
Note that we don't currently support this, so I
don't want to conflate this use case, but that hopefully makes it a
little more obvious that this is a "there is a page, but you don't
currently have the permission to access it" problem rather than "sorry
but we ran out of pages" problem.
We could use other signals, as long as the semantics are clear and
documented. Maybe SIGSEGV would be warranted.
I consider that a minor detail, though.
Requiring mmap()/munmap() dances just to access a page that is now shared
from user space sounds a bit suboptimal. But I don't know all the details of
the user space implementation.
Agreed, if we could save having to mmap() each page that gets shared
back that would be a nice performance optimization.
"mmap() the whole thing once and only access what you are supposed to
access" sounds reasonable to me. If you don't play by the rules, you get a
signal.
"... you get a signal, or maybe you don't". But yes I understand your
point, and as per the above there are real benefits to this approach so
why not.
What do we expect userspace to do when a page goes from shared back to
being guest-private, because e.g. the guest decides to unshare? Use
munmap() on that page? Or perhaps an madvise() call of some sort? Note
that this will be needed when starting a guest as well, as userspace
needs to copy the guest payload in the guestmem file prior to starting
the protected VM.
Let's assume we have the whole guest_memfd mapped exactly once in our
process, a single VMA.
When setting up the VM, we'll write the payload and then fire up the VM.
That will (I assume) trigger some shared -> private conversion.
When we want to convert shared -> private in the kernel, we would first
check if the page is currently mapped. If it is, we could try unmapping that
page using an rmap walk.
I had not considered that. That would most certainly be slow, but a
well-behaved userspace process shouldn't hit it, so that's probably not
a problem...
If there really is only a single VMA that covers the page (or even mmaps
the guest_memfd), it should not be too bad. For example, any
fallocate(FALLOC_FL_PUNCH_HOLE) has to do the same: unmap the page
before discarding it from the pagecache.
But of course, no rmap walk is always better.
Then, we'd make sure that there are really no other references and protect
against concurrent mapping of the page. Now we can convert the page to
private.
Right.
Alternatively, the shared->private conversion happens in the KVM vcpu
run loop, so we'd be in a good position to exit the VCPU_RUN ioctl with a
new exit reason saying "can't donate that page while it's shared" and
have userspace use MADVISE_DONTNEED or munmap, or whatever on the back
of that. But I tend to prefer the rmap option if it's workable as that
avoids adding new KVM userspace ABI.
As discussed in the sub-thread, that might still be required.
One could think about completely forbidding GUP on these mmap'ed
guest-memfds. But there may well be use cases in the future where you
want to use GUP on shared memory inside a guest_memfd.
(the io_uring example I gave might currently not work because
FOLL_PIN|FOLL_LONGTERM|FOLL_WRITE only works on shmem+hugetlb, and
guest_memfd will likely not be detected as shmem; commit 8ac268436e6d
contains some details)
--
Cheers,
David / dhildenb