On Thu, Aug 25, 2022 at 7:41 AM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>
> On Wed, Aug 24, 2022, Jim Mattson wrote:
> > On Wed, Aug 24, 2022 at 5:11 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> > >
> > > @google folks, what would it take for us to mark KVM_REQ_GET_NESTED_STATE_PAGES
> > > as deprecated in upstream and stop accepting patches/fixes? IIUC, when we
> > > eventually move to userfaultfd, all this goes away, i.e. we do want to ditch
> > > this at some point.
> >
> > Userfaultfd is a red herring. There were two reasons that we needed
> > this when nested live migration was implemented:
> >
> > 1) Our netlink socket mechanism for funneling remote page requests to
> >    a userspace listener was broken.
> > 2) We were not necessarily prepared to deal with remote page requests
> >    during VM setup.
> >
> > (1) has long since been fixed. Though our preference is to exit from
> > KVM_RUN and have the vCPU thread request the remote page itself, we
> > are now capable of queuing a remote page request with a separate
> > listener thread and blocking in the kernel until the page is received.
> > I believe that mechanism is functionally equivalent to userfaultfd,
> > though not as elegant.
> >
> > I don't know about (2). I'm not sure when the listener thread is set
> > up relative to all of the other setup steps. Eliminating
> > KVM_REQ_GET_NESTED_STATE_PAGES means that userspace must be prepared
> > to fetch a remote page by the first call to KVM_SET_NESTED_STATE. The
> > same is true when using userfaultfd.
> >
> > These new ordering constraints represent a UAPI breakage, but we don't
> > seem to be as concerned about that as we once were. Maybe that's a
> > good thing. Can we get rid of all of the superseded ioctls, like
> > KVM_SET_CPUID, while we're at it?
>
> I view KVM_REQ_GET_NESTED_STATE_PAGES as a special case. We are likely the only
> users, we can (eventually) wean ourselves off the feature, and we can carry
> internal patches (which we are obviously already carrying) until we transition
> away. And unlike KVM_SET_CPUID and other ancient ioctls() that are largely
> forgotten, this feature is likely to be a maintenance burden as long as it exists.

KVM_REQ_GET_NESTED_STATE_PAGES is used uniformly by the KVM_SET_NESTED_STATE
ioctl() on both VMX (including eVMCS) and SVM; it is basically a two-step
mechanism for setting up nested state. We can change that, but doing so may
have side effects, and I think this use case has nothing to do with demand
paging.

I noticed that nested_vmx_enter_non_root_mode() is called from
KVM_SET_NESTED_STATE on VMX, while the SVM implementation simply does
kvm_make_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu). Hmm... so is the
nested_vmx_enter_non_root_mode() call in the VMX KVM_SET_NESTED_STATE ioctl()
still necessary? I ask because the same function is called again from
nested_vmx_run().
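To make the asymmetry concrete, here is a rough sketch of the two paths as I
understand them (paraphrased from memory, not verbatim upstream code):

	/* VMX: vmx_set_nested_state() re-enters non-root mode immediately: */
	ret = nested_vmx_enter_non_root_mode(vcpu, false);

	/*
	 * ... and nested_vmx_enter_non_root_mode(), seeing from_vmentry ==
	 * false, defers the vmcs12 page lookups to the next KVM_RUN:
	 */
	kvm_make_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);

	/* SVM: svm_set_nested_state() just queues the request directly: */
	kvm_make_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);

In both cases, if I am reading the code correctly, the deferred work then runs
from vcpu_enter_guest() when kvm_check_request(KVM_REQ_GET_NESTED_STATE_PAGES,
vcpu) dispatches to nested_ops->get_nested_state_pages(), which is what makes
this a two-step setup.

Thanks.

-Mingwei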