Hi Jason,

On Wed, Jun 19, 2024 at 12:51 PM Jason Gunthorpe <jgg@xxxxxxxxxx> wrote:
>
> On Wed, Jun 19, 2024 at 10:11:35AM +0100, Fuad Tabba wrote:
>
> > To be honest, personally (speaking only for myself, not necessarily
> > for Elliot and not for anyone else in the pKVM team), I still would
> > prefer to use guest_memfd(). I think that having one solution for
> > confidential computing that rules them all would be best. But we do
> > need to be able to share memory in place, have a plan for supporting
> > huge pages in the near future, and migration in the not-too-distant
> > future.
>
> I think using a FD to control this special lifetime stuff is
> dramatically better than trying to force the MM to do it with struct
> page hacks.
>
> If you can't agree with the guest_memfd people on how to get there
> then maybe you need a guest_memfd2 for this slightly different special
> stuff instead of intruding on the core mm so much. (though that would
> be sad)
>
> We really need to be thinking more about containing these special
> things and not just sprinkling them everywhere.

I agree that we need to agree :) This discussion has been going on
since before LPC last year, and the consensus from the guest_memfd()
folks (if I understood it correctly) is that guest_memfd() is what it
is: designed for a specific type of confidential computing, in the
style of TDX and CCA perhaps, and that it cannot (or will not) perform
the role of being a general solution for all confidential computing.

> > The approach we're taking with this proposal is to instead restrict
> > the pinning of protected memory. If the host kernel can't pin the
> > memory, then a misbehaving process can't trick the host into
> > accessing it.
>
> If the memory can't be accessed by the CPU then it shouldn't be mapped
> into a PTE in the first place. The fact you made userspace faults
> (only) work is nifty but still an ugly hack to get around the fact you
> shouldn't be mapping in the first place.
>
> We already have ZONE_DEVICE/DEVICE_PRIVATE to handle exactly this
> scenario. "memory" that cannot be touched by the CPU but can still be
> specially accessed by enlightened components.
>
> guest_memfd, and more broadly memfd based instead of VMA based, memory
> mapping in KVM is a similar outcome to DEVICE_PRIVATE.
>
> I think you need to stay in the world of not mapping the memory, one
> way or another.

As I mentioned earlier, that's my personal preferred option.

> > > 3) How can we be sure we don't need other long-term pins (IOMMUs?)
> > > in the future?
> >
> > I can't :)
>
> AFAICT in the pKVM model the IOMMU has to be managed by the
> hypervisor..

I realized that I misunderstood this. At least speaking for pKVM, we
don't need other long-term pins as long as the memory is private. The
exclusive pin is dropped when the memory is shared.

> > We are gating it behind a CONFIG flag :)
> >
> > Also, since pin is already overloading the refcount, having the
> > exclusive pin there helps in ensuring atomic accesses and avoiding
> > races.
>
> Yeah, but every time someone does this and then links it to a uAPI it
> becomes utterly baked in concrete for the MM forever.

I agree. But if we can't modify guest_memfd() to fit our needs (pKVM,
Gunyah), then we don't really have that many other options.

Thanks!
/fuad

> Jason
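P.S. For readers who haven't followed the series: the "exclusive pin"
discussed above rides on the same trick GUP's FOLL_PIN already uses,
biasing the folio refcount (GUP_PIN_COUNTING_BIAS), so that taking a
pin and detecting concurrent users are a single atomic operation on
one counter. Below is a minimal userspace sketch of that
claim/release pattern, modelled with C11 atomics; the names here
(struct folio, exclusive_pin, exclusive_unpin, EXCLUSIVE_BIAS) are
made up for illustration and are not the actual API from the series.

/*
 * Sketch of an "exclusive pin" claimed with one atomic CAS on a
 * refcount, modelled in userspace with C11 atomics. All names are
 * hypothetical, not the kernel's or the series' actual API.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative bias value, in the spirit of GUP_PIN_COUNTING_BIAS. */
#define EXCLUSIVE_BIAS	(1 << 20)

struct folio {
	atomic_int refcount;
};

/*
 * Claim the folio exclusively. The CAS succeeds only if the refcount
 * still has the expected value, i.e. nobody pinned or mapped the
 * folio concurrently; otherwise it fails instead of racing.
 */
static bool exclusive_pin(struct folio *f, int expected)
{
	return atomic_compare_exchange_strong(&f->refcount, &expected,
					      expected + EXCLUSIVE_BIAS);
}

static void exclusive_unpin(struct folio *f)
{
	atomic_fetch_sub(&f->refcount, EXCLUSIVE_BIAS);
}

int main(void)
{
	struct folio f = { .refcount = 1 };

	if (exclusive_pin(&f, 1))
		printf("exclusively pinned, refcount now %d\n",
		       atomic_load(&f.refcount));

	/* A second exclusive claim fails while the first is held. */
	if (!exclusive_pin(&f, 1))
		printf("second pin refused: folio already claimed\n");

	exclusive_unpin(&f);
	return 0;
}

The point of the compare-and-swap is that any concurrent pin or new
mapping changes the refcount, so the exclusive claim fails cleanly
rather than racing; that is the "atomic accesses and avoiding races"
property mentioned above.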