On Fri, Jun 21, 2024, Michael Roth wrote:
> @@ -3939,6 +3944,67 @@ static int sev_snp_ap_creation(struct vcpu_svm *svm)
>  	return ret;
>  }
>  
> +static int snp_handle_guest_req(struct vcpu_svm *svm, gpa_t req_gpa, gpa_t resp_gpa)
> +{
> +	struct sev_data_snp_guest_request data = {0};
> +	struct kvm *kvm = svm->vcpu.kvm;
> +	kvm_pfn_t req_pfn, resp_pfn;
> +	sev_ret_code fw_err = 0;
> +	int ret;
> +
> +	if (!sev_snp_guest(kvm) || !PAGE_ALIGNED(req_gpa) || !PAGE_ALIGNED(resp_gpa))
> +		return -EINVAL;
> +
> +	req_pfn = gfn_to_pfn(kvm, gpa_to_gfn(req_gpa));

This is going to sound odd, but I think we should use gfn_to_page(), i.e.
require that the page be a refcounted page so that it can be pinned.  Long
story short, one of my learnings from the whole non-refcounted pages saga[*]
is that doing DMA to unpinned pages outside of mmu_lock is wildly unsafe,
and for all intents and purposes the ASP is a device doing DMA.

I'm also pretty sure KVM should actually *pin* pages, as in FOLL_PIN, but
I'm ok tackling that later.  For now, using gfn_to_page() would avoid
creating ABI (DMA to VM_PFNMAP and/or VM_MIXEDMAP memory) that KVM
probably doesn't want to support in the long term.

[*] https://lore.kernel.org/all/20240229025759.1187910-1-stevensd@xxxxxxxxxx

> +	if (is_error_noslot_pfn(req_pfn))
> +		return -EINVAL;
> +
> +	resp_pfn = gfn_to_pfn(kvm, gpa_to_gfn(resp_gpa));
> +	if (is_error_noslot_pfn(resp_pfn)) {
> +		ret = -EINVAL;
> +		goto release_req;
> +	}
> +
> +	if (rmp_make_private(resp_pfn, 0, PG_LEVEL_4K, 0, true)) {
> +		ret = -EINVAL;
> +		kvm_release_pfn_clean(resp_pfn);
> +		goto release_req;
> +	}

I don't see how this is safe.  KVM holds no locks, i.e. can't guarantee
that the resp_pfn stays private for the duration of the operation.  And on
the opposite side, KVM can't guarantee that resp_pfn isn't being actively
used by something in the kernel, e.g. KVM might induce an unexpected
#PF(RMP).

Why can't KVM require that the response/destination page already be private?
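The gfn_to_page() point is really about pairing: every successful lookup
takes a reference that some exit path has to drop, so the handler wants one
unwind path per acquisition.  A throwaway user-space model of that shape
(all the toy_* names here are invented for illustration; none of this is
kernel API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-in for a refcounted struct page; not kernel code. */
struct toy_page {
	int refcount;
	bool valid;
};

/* Models gfn_to_page(): takes a reference on success, fails otherwise. */
static struct toy_page *toy_gfn_to_page(struct toy_page *pages, int npages,
					int gfn)
{
	if (gfn < 0 || gfn >= npages || !pages[gfn].valid)
		return NULL;
	pages[gfn].refcount++;
	return &pages[gfn];
}

/* Models put_page(): every successful lookup pairs with exactly one. */
static void toy_put_page(struct toy_page *page)
{
	assert(page->refcount > 0);
	page->refcount--;
}

/* Handler skeleton: acquire req then resp, unwind in reverse on failure. */
static int toy_handle_guest_req(struct toy_page *pages, int npages,
				int req_gfn, int resp_gfn)
{
	struct toy_page *req, *resp;
	int ret;

	req = toy_gfn_to_page(pages, npages, req_gfn);
	if (!req)
		return -1;		/* nothing acquired yet */

	resp = toy_gfn_to_page(pages, npages, resp_gfn);
	if (!resp) {
		ret = -1;
		goto release_req;	/* req is held, so unwind it */
	}

	ret = 0;			/* issuing the command would go here */
	toy_put_page(resp);
release_req:
	toy_put_page(req);
	return ret;
}
```

The single reverse-order unwind label is also what would keep a later
switch to FOLL_PIN-style pinning localized to the acquire/release helpers.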
I'm also somewhat confused by the reclaim below.  If KVM converts the
response page back to shared, doesn't that clobber the data?  If so, how
does the guest actually get the response?  I feel like I'm missing
something.

Regardless of whether or not I'm missing something, this needs comments,
and the changelog needs to be rewritten with --verbose to explain what's
going on.

> +	data.gctx_paddr = __psp_pa(to_kvm_sev_info(kvm)->snp_context);
> +	data.req_paddr = __sme_set(req_pfn << PAGE_SHIFT);
> +	data.res_paddr = __sme_set(resp_pfn << PAGE_SHIFT);
> +
> +	ret = sev_issue_cmd(kvm, SEV_CMD_SNP_GUEST_REQUEST, &data, &fw_err);
> +	if (ret)
> +		return ret;

This leaks both req_pfn and resp_pfn.

> +
> +	/*
> +	 * If reclaim fails then there's a good chance the guest will no longer
> +	 * be runnable so just let userspace terminate the guest.
> +	 */
> +	if (snp_page_reclaim(kvm, resp_pfn)) {
> +		ret = -EIO;
> +		goto release_req;
> +	}
> +
> +	/*
> +	 * As per GHCB spec, firmware failures should be communicated back to
> +	 * the guest via SW_EXITINFO2 rather than be treated as immediately
> +	 * fatal.
> +	 */
> +	ghcb_set_sw_exit_info_2(svm->sev_es.ghcb,
> +				SNP_GUEST_ERR(ret ? SNP_GUEST_VMM_ERR_GENERIC : 0,
> +					      fw_err));
> +
> +	ret = 1; /* resume guest */
> +	kvm_release_pfn_dirty(resp_pfn);
> +
> +release_req:
> +	kvm_release_pfn_clean(req_pfn);
> +	return ret;
> +}
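To make the leak concrete: the early "return ret" after sev_issue_cmd()
bails out while both references are still held.  A toy user-space model
(invented names, just refcount accounting, not kernel code) of the fixed
flow, where every exit taken after the two lookups funnels through the
release path, including a failed command:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy accounting for held page references; models get/put balance only. */
static int refs_held;

static void toy_acquire(void) { refs_held++; }
static void toy_release(void) { assert(refs_held > 0); refs_held--; }

/*
 * Sketch of the corrected flow: a sev_issue_cmd() failure (modeled by
 * cmd_fails) unwinds via goto instead of returning early, so the two
 * references taken up front are always dropped.
 */
static int toy_guest_req_flow(bool cmd_fails, bool reclaim_fails)
{
	int ret;

	toy_acquire();			/* req page */
	toy_acquire();			/* resp page */

	if (cmd_fails) {
		ret = -1;		/* command failed: unwind, don't return */
		goto release;
	}

	if (reclaim_fails) {
		ret = -5;		/* stand-in for -EIO */
		goto release;
	}

	ret = 1;			/* resume guest */
release:
	toy_release();			/* resp page */
	toy_release();			/* req page */
	return ret;
}
```

The invariant worth asserting is simply that refs_held returns to zero on
every path, success or failure.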