I'm finally back in the office. Sorry for not getting back to you sooner.

I don't think it would be easy to send the synchronization changes first.
The reason they seem so small is that they're all handled by the iterator.
If we tried to put the synchronization changes in without the iterator,
we'd have to 1) deal with the struct kvm_mmu_page structures, 2) deal
with the rmap, and 3) change a huge amount of code to fit the
synchronization changes into the existing framework. The changes wouldn't
be mechanical or easy to insert, either, since a lot of bookkeeping is
currently done before PTEs are updated, with no facility for rolling back
that bookkeeping if a PTE cmpxchg fails (a toy sketch of that ordering
constraint follows below the quoted mail). We could start with the
iterator changes and then do the synchronization changes, but the other
way around would be very difficult.

On Wed, Nov 27, 2019 at 11:09 AM Sean Christopherson
<sean.j.christopherson@xxxxxxxxx> wrote:
>
> On Thu, Sep 26, 2019 at 04:17:56PM -0700, Ben Gardon wrote:
> > The goal of this RFC is to demonstrate and gather feedback on the
> > iterator pattern, the memory savings it enables for the "direct case"
> > and the changes to the synchronization model. Though they are interwoven
> > in this series, I will separate the iterator from the synchronization
> > changes in a future series. I recognize that some feature work will be
> > needed to make this patch set ready for merging. That work is detailed
> > at the end of this cover letter.
>
> How difficult would it be to send the synchronization changes as a separate
> series in the not-too-distant future? At a brief glance, those changes
> appear to be tiny relative to the direct iterator changes. From a stability
> perspective, it would be nice if the locking changes can get upstreamed and
> tested in the wild for a few kernel versions before the iterator code is
> introduced.
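
To make that ordering constraint concrete, here is a toy userspace sketch.
This is not KVM code: every name in it (try_set_pte, nr_present_ptes, the
PTE_PRESENT bit) is made up for illustration. It shows the pattern the
iterator enables: do the PTE cmpxchg first, and perform the bookkeeping
only once the update is known to have stuck, so losing a race means
"re-read and retry" rather than "roll back the bookkeeping". It builds
with gcc using the __atomic builtins.

/*
 * Toy userspace sketch, not KVM code; all names are hypothetical.
 * Demonstrates: cmpxchg the PTE first, account for the change only
 * after the cmpxchg succeeds, so nothing ever needs to be undone.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t pte_t;

#define PTE_PRESENT	((pte_t)1)	/* toy "present" bit */

static long nr_present_ptes;		/* stand-in for per-VM accounting */

static bool try_set_pte(pte_t *ptep, pte_t old, pte_t new)
{
	/*
	 * Atomic compare-and-swap; fails if another thread changed the
	 * PTE since 'old' was read. The caller re-reads and retries.
	 */
	if (!__atomic_compare_exchange_n(ptep, &old, new, false,
					 __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE))
		return false;

	/* Bookkeeping happens only after the PTE change is committed. */
	if ((new & PTE_PRESENT) && !(old & PTE_PRESENT))
		nr_present_ptes++;
	return true;
}

int main(void)
{
	pte_t pte = 0;

	/* Install a toy "present" PTE; a lost race just retries. */
	while (!try_set_pte(&pte, pte, 0x1000 | PTE_PRESENT))
		;

	printf("pte=%#llx, present ptes=%ld\n",
	       (unsigned long long)pte, nr_present_ptes);
	return 0;
}

The existing framework does the equivalent accounting before the PTE is
written, which is exactly why bolting cmpxchg-based updates onto it, with
no way to unwind the accounting on failure, would be so invasive.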