On Thu, Jan 19, 2023 at 03:25:08PM +0000,
Sean Christopherson <seanjc@xxxxxxxxxx> wrote:

> On Thu, Jan 19, 2023, Isaku Yamahata wrote:
> > On Sat, Jan 14, 2023 at 12:37:59AM +0000,
> > Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> >
> > > On Fri, Dec 02, 2022, Chao Peng wrote:
> > > > This patch series implements KVM guest private memory for confidential
> > > > computing scenarios like Intel TDX[1]. If a TDX host accesses
> > > > TDX-protected guest memory, a machine check can happen, which can
> > > > further crash the running host system; this is terrible for
> > > > multi-tenant configurations. The host accesses include those from KVM
> > > > userspace like QEMU. This series addresses KVM-userspace-induced
> > > > crashes by introducing new mm and KVM interfaces so KVM userspace can
> > > > still manage guest memory via a fd-based approach, but can never
> > > > access the guest memory content.
> > > >
> > > > The patch series touches both core mm and KVM code. I would appreciate
> > > > it if Andrew/Hugh and Paolo/Sean could review and pick up these
> > > > patches. Any other reviews are always welcome.
> > > >   - 01:    mm change, target for mm tree
> > > >   - 02-09: KVM change, target for KVM tree
> > >
> > > A version with all of my feedback, plus reworked versions of Vishal's
> > > selftest, is available here:
> > >
> > >   git@xxxxxxxxxx:sean-jc/linux.git x86/upm_base_support
> > >
> > > It compiles and passes the selftest, but it's otherwise barely tested.
> > > There are a few todos (2 I think?) and many of the commits need
> > > changelogs, i.e. it's still a WIP.
> > >
> > > As for next steps, can you (handwaving all of the TDX folks) take a
> > > look at what I pushed and see if there's anything horrifically broken,
> > > and that it still works for TDX?
> > >
> > > Fuad (and pKVM folks), same ask for you with respect to pKVM.
> > > Absolutely no rush (and I mean that).
> > >
> > > On my side, the two things on my mind are (a) tests and (b) downstream
> > > dependencies (SEV and TDX). For tests, I want to build a list of tests
> > > that are required for merging so that the criteria for merging are
> > > clear, and so that if the list is large (haven't thought much yet),
> > > the work of writing and running tests can be distributed.
> > >
> > > Regarding downstream dependencies, before this lands, I want to pull
> > > in all the TDX and SNP series and see how everything fits together.
> > > Specifically, I want to make sure that we don't end up with a uAPI
> > > that necessitates ugly code, and that we don't miss an opportunity to
> > > make things simpler. The patches in the SNP series to add "legacy" SEV
> > > support for UPM in particular made me slightly rethink some minor
> > > details. Nothing remotely major, but something that needs attention
> > > since it'll be uAPI.
> >
> > Although I'm still debugging with TDX KVM, I needed the following:
> > kvm_faultin_pfn() is called without mmu_lock held; the race to change
> > private/shared is handled by mmu_seq. Maybe a dedicated function only
> > for kvm_faultin_pfn() is warranted.
>
> Gah, you're not on the other thread where this was discussed[*]. Simply
> deleting the lockdep assertion is safe; for guest types that rely on the
> attributes to define shared vs. private, KVM rechecks the attributes
> under the protection of mmu_seq.
>
> I'll get a fixed version pushed out today.
>
> [*] https://lore.kernel.org/all/Y8gpl+LwSuSgBFks@xxxxxxxxxx
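For anyone following along, the recheck Sean describes is the usual mmu_seq
retry pattern: snapshot the invalidation sequence count before doing work
outside mmu_lock, then recheck under mmu_lock and retry the fault if anything
(including the shared/private attribute) changed in between. A rough sketch
against the v6.2-era x86 MMU helpers (not the actual branch; details may
differ there):

	/* In kvm_faultin_pfn(), before resolving the pfn (no mmu_lock): */
	fault->mmu_seq = vcpu->kvm->mmu_invalidate_seq;
	smp_rmb();

	/* ... resolve fault->pfn, query the memory attributes, etc. ... */

	/* Later, under mmu_lock, before installing the mapping: */
	if (is_page_fault_stale(vcpu, fault))
		return RET_PF_RETRY;	/* attributes may have changed, redo */

where is_page_fault_stale() boils down to
mmu_invalidate_retry_hva(vcpu->kvm, fault->mmu_seq, fault->hva), i.e. any
intervening invalidation (such as a private<->shared conversion) bumps the
sequence and forces the fault to be retried.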
Now I have TDX KVM working. I've uploaded it to the following branches,
rebased to v6.2-rc3:

  git@xxxxxxxxxx:yamahata/linux.git tdx/upm
  git@xxxxxxxxxx:yamahata/qemu.git tdx/upm

kvm_mmu_do_page_fault() needs the following change. kvm_mem_is_private()
queries mem_attr_array, and kvm_faultin_pfn() also uses kvm_mem_is_private(),
so if .is_private is initialized from kvm_mem_is_private() as well, the
shared-private consistency check in kvm_faultin_pfn() is a tautology and
doesn't make sense. This change would belong to the TDX KVM patches, though.

diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 72b0da8e27e0..f45ac438bbf4 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -430,7 +430,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 		.max_level = vcpu->kvm->arch.tdp_max_page_level,
 		.req_level = PG_LEVEL_4K,
 		.goal_level = PG_LEVEL_4K,
-		.is_private = kvm_mem_is_private(vcpu->kvm, cr2_or_gpa >> PAGE_SHIFT),
+		.is_private = kvm_is_private_gpa(vcpu->kvm, cr2_or_gpa),
 	};
 	int r;
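The distinction is that for TDX the private-vs-shared intent is carried in
the GPA itself via the shared bit, independent of the mem_attr_array state.
As a sketch of what kvm_is_private_gpa() looks like in the TDX KVM patches
(kvm_gfn_shared_mask() comes from those patches; the exact definition may
differ):

	static inline bool kvm_is_private_gpa(const struct kvm *kvm, gpa_t gpa)
	{
		/*
		 * TDX repurposes a GPA bit (the "shared bit") to distinguish
		 * shared from private mappings: a GPA with the shared bit
		 * clear is private.  kvm_gfn_shared_mask() is 0 for VM types
		 * without a shared bit, so those never see a private GPA here.
		 */
		gfn_t shared_mask = kvm_gfn_shared_mask(kvm);

		return shared_mask && !(gpa_to_gfn(gpa) & shared_mask);
	}

With that, fault->is_private reflects what the guest actually requested via
the faulting GPA, and the kvm_mem_is_private() lookup in kvm_faultin_pfn()
becomes a meaningful consistency check against the attributes rather than a
comparison of a value with itself.

--
Isaku Yamahata <isaku.yamahata@xxxxxxxxx>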