Re: [RESEND RFC PATCH 0/5] Remote mapping

On 04/09/20 22:34, Andy Lutomirski wrote:
> On Sep 4, 2020, at 1:09 PM, Paolo Bonzini <pbonzini@xxxxxxxxxx> wrote:
>> On 04/09/20 21:39, Andy Lutomirski wrote:
>>> I'm a little concerned
>>> that it's actually too clever and that maybe a more
>>> straightforward solution should be investigated.  I personally 
>>> rather dislike the KVM model in which the guest address space
>>> mirrors the host (QEMU) address space rather than being its own
>>> thing.  In particular, the current model means that
>>> extra-special-strange mappings like SEV-encrypted memory are
>>> required to be present in the QEMU page tables in order for the
>>> guest to see them. (If I had noticed that last bit before it went
>>> upstream, I would have NAKked it.  I would still like to see it
>>> deprecated and ideally eventually removed from the kernel.  We
>>> have absolutely no business creating incoherent mappings like
>>> this.)
>> 
>> NACK first and ask second, right Andy?  I see that nothing has
>> changed since Alan Cox left Linux.
> 
> NACKs are negotiable.  And maybe someone can convince me that the SEV
> mapping scheme is reasonable, but I would be surprised.

So why say NACK?  Any half-decent maintainer would hold off on merging
the patches at least until the discussion is over.  Also, I suppose any
deprecation proposal should come with a description of an alternative.

Anyway, for SEV the problem is DMA.  There is no way to know in advance
which memory the guest will use for I/O; it can change at any time, and
the same host-physical address can even be mapped both as C=0 and C=1 by
the guest.  There's no communication protocol between the guest and the
host to tell the host _which_ memory should be mapped in QEMU.  (One was
added to support migration, but that doesn't even work with SEV-ES
processors, where migration is planned to happen mostly with help from
the guest, either in the firmware or somewhere else.)
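
To illustrate what "mapped both as C=0 and C=1" means: the guest finds
the C-bit position from CPUID leaf 0x8000001f and can set or clear that
bit in its own page tables per mapping.  The snippet below is only an
illustration of the mechanism (guest-side, simplified PTE layout, not
taken from these patches):

#include <cpuid.h>
#include <stdint.h>

/* CPUID 0x8000001f EBX[5:0] reports where the C-bit lives in a PTE. */
static uint64_t sev_c_bit_mask(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(0x8000001f, &eax, &ebx, &ecx, &edx))
        return 0;
    return 1ULL << (ebx & 0x3f);
}

/* Same guest-physical frame, mapped once private (C=1) and once shared
 * for DMA (C=0); the host has no way to know in advance which one the
 * guest will pick.  (Simplified: present + writable bits only.) */
static uint64_t make_guest_pte(uint64_t gpa, int encrypted)
{
    uint64_t pte = gpa | 0x3;

    if (encrypted)
        pte |= sev_c_bit_mask();
    return pte;
}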

But this is a digression.  (If you would like to continue the discussion
please trim the recipient list and change the subject).

> Regardless, you seem to be suggesting that you want to have enclave
> VMs in which the enclave can see some memory that the parent VM can’t
> see. How does this fit into the KVM mapping model?  How does this
> remote mapping mechanism help?  Do you want QEMU to have that memory
> mapped in its own pagetables?

There are three processes:

- the manager, which is the parent of the VMs and uses the pidfd_mem
system call

- the primary VM

- the enclave VM(s)
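
To make the manager's role concrete, here is a rough sketch of how it
could hand out the memory access file descriptors.  The signature, the
syscall number and the fd layout below are illustrative only; the
actual interface is the one defined by the RFC patches.

#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_pidfd_mem
#define __NR_pidfd_mem -1   /* placeholder; use the number from the patches */
#endif

/* Illustrative: assume pidfd_mem() yields a control fd (kept by the
 * manager) and an access fd (handed to the QEMU of a primary or
 * enclave VM).  See the RFC cover letter for the real interface. */
static int get_access_fd(int pidfd, int *control_fd)
{
    int fds[2];

    if (syscall(__NR_pidfd_mem, pidfd, fds, 0) < 0)
        return -1;
    *control_fd = fds[0];   /* assumed: [0] = control, [1] = access */
    return fds[1];
}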

The primary VM and the enclave VM(s) would each get a different memory
access file descriptor.  QEMU would treat them no differently from any
other externally-provided memory backend, say hugetlbfs or memfd, so
yeah they would be mmap-ed to userspace and the host virtual address
passed as usual to KVM.
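
In other words, on the QEMU side the flow would be the usual one for
any memory backend, roughly like this (untested sketch; access_fd is
whatever the manager handed over, slot and GPA are made up for the
example):

#include <linux/kvm.h>
#include <stddef.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

/* mmap the memory-access fd like any other backend (memfd, hugetlbfs)
 * and register the resulting host virtual address with KVM as usual. */
static void *map_and_register(int vm_fd, int access_fd, size_t size)
{
    struct kvm_userspace_memory_region region;
    void *hva;

    hva = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
               access_fd, 0);
    if (hva == MAP_FAILED)
        return NULL;

    region = (struct kvm_userspace_memory_region) {
        .slot            = 0,
        .guest_phys_addr = 0,
        .memory_size     = size,
        .userspace_addr  = (unsigned long)hva,
    };
    if (ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region) < 0)
        return NULL;
    return hva;
}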

Enclave VMs could be used, for example, to store secrets and perform
crypto.  The enclave is measured at boot; any keys or other data it
needs can be provided out-of-band by the manager.

The manager can decide at any time to hide some memory from the parent
VM (in order to give it to an enclave).  This would actually be done at
the request of the parent VM itself, and QEMU would probably be so kind
as to replace the "hole" left in the guest memory with zeroes.  But QEMU
is untrusted, so the manager cannot rely on QEMU behaving well.  Hence
the privilege separation model that was implemented here.
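
(As an aside, one plausible way for QEMU to do the zero-filling
mentioned above, purely as an illustration and not something the
patches require, would be to map fresh anonymous pages over the revoked
range:)

#include <stddef.h>
#include <sys/mman.h>

/* Cover the revoked, page-aligned range with anonymous zero pages so
 * the guest reads zeroes instead of faulting on the missing memory. */
static int zero_fill_hole(void *hole, size_t len)
{
    void *p = mmap(hole, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);

    return p == MAP_FAILED ? -1 : 0;
}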

Actually Amazon has already created something like that, and Andra-Irina
Paraschiv has posted patches for it on the list.  Their implementation
is not open source, but this pidfd-mem concept is something that Andra,
Alexander Graf and I came up with as a way to 1) reimplement the feature
upstream, 2) satisfy Bitdefender's need for memory introspection, and
3) add what seemed like a useful interface anyway, for example to
replace PTRACE_{PEEK,POKE}DATA.  Though (3) would only need pread/pwrite,
not mmap, which adds a lot of the complexity.
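
To give an idea of what (3) buys you: today PTRACE_PEEKDATA moves one
word per ptrace() call, while with an access fd a remote buffer would
be a single pread().  The snippet below assumes, purely for
illustration, that the file offset corresponds to the target's virtual
address.

#include <stdint.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <unistd.h>

/* Today: one word per ptrace() round trip. */
static long peek_word(pid_t pid, void *addr)
{
    return ptrace(PTRACE_PEEKDATA, pid, addr, NULL);
}

/* With a pidfd-mem access fd: one bulk read (offset == remote virtual
 * address is an assumption made for this sketch). */
static ssize_t peek_buf(int access_fd, uint64_t remote_addr,
                        void *buf, size_t len)
{
    return pread(access_fd, buf, len, remote_addr);
}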

> As it stands, the way that KVM memory mappings are created seems to
> be convenient, but it also seems to be resulting in increasing
> bizarre userspace mappings.  At what point is the right solution to
> decouple KVM’s mappings from QEMU’s?

So what you are suggesting is that KVM manages its own address space
instead of using host virtual addresses (and with no relationship to
host virtual addresses, it would be just a "cookie")?  It would then
need a couple of ioctls to mmap/munmap (creating and deleting VMAs)
into that address space, and those cookies would be passed to
KVM_SET_USER_MEMORY_REGION.  QEMU would still need access to these
VMAs; would it mmap a file descriptor provided by KVM?  All in all the
implementation seems quite complex, and I don't understand why it would
avoid incoherent SEV mappings; what am I missing?
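
(To spell out what I understand the proposal to be, in purely
hypothetical terms; none of these ioctls or structs exist:)

#include <stdint.h>

/* Hypothetical: KVM owns the address space, userspace only sees cookies. */
struct kvm_guest_mapping {
    uint64_t size;       /* in: length of the mapping                */
    uint64_t cookie;     /* out: opaque handle, *not* a host address */
};

/* A hypothetical KVM_GUEST_MMAP ioctl would fill in the cookie, a
 * KVM_GUEST_MUNMAP would drop it, and the cookie rather than a
 * userspace_addr would be what KVM_SET_USER_MEMORY_REGION consumes.
 * QEMU would then still need some separate way (an fd returned by
 * KVM?) to mmap the memory for its own device emulation. */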

Paolo



