On Fri, Jan 31, 2020 at 05:16:37PM -0500, Peter Xu wrote:
> On Fri, Jan 31, 2020 at 01:29:28PM -0800, Sean Christopherson wrote:
> > On Fri, Jan 31, 2020 at 03:55:50PM -0500, Peter Xu wrote:
> > > On Fri, Jan 31, 2020 at 12:36:22PM -0800, Sean Christopherson wrote:
> > > > On Fri, Jan 31, 2020 at 03:28:24PM -0500, Peter Xu wrote:
> > > > > On Fri, Jan 31, 2020 at 11:33:01AM -0800, Sean Christopherson wrote:
> > > > > > For the same reason we don't take mmap_sem: it gains us nothing,
> > > > > > i.e. KVM still has to use copy_{to,from}_user().
> > > > > >
> > > > > > In the proposed __x86_set_memory_region() refactor,
> > > > > > vmx_set_tss_addr() would be provided the hva of the memory region.
> > > > > > Since slots_lock and SRCU only protect gfn->hva, why would KVM take
> > > > > > slots_lock when it already has the hva?
> > > > >
> > > > > OK, so you're suggesting to release the lock earlier so that it does
> > > > > not cover init_rmode_tss(), rather than dropping the lock entirely...
> > > > > Yes, that looks good to me.  I think that's the major confusion I had.
> > > >
> > > > Ya.  And I missed where the -EEXIST was coming from.  I think we're on
> > > > the same page.
> > >
> > > Good to know.  Btw, I would still prefer to keep the lock held until
> > > after the __copy_to_user()s, because "the HVA is valid without the
> > > lock" is only true for these private memslots.
> >
> > No.  From KVM's perspective, the HVA is *never* valid.  Even if you
> > rewrote this statement to say "the gfn->hva translation is valid without
> > the lock", it would still be incorrect.
> >
> > KVM is *always* using HVAs without holding the lock, e.g. every time it
> > enters the guest it is dereferencing a memslot, because the translations
> > stored in the TLB are effectively gfn->hva->hpa.  Obviously KVM ensures
> > that it won't dereference a memslot that has been deleted/moved, but
> > it's a lot more subtle than simply holding a lock.
> >
> > > After all, this is a super slow path, so I wouldn't mind holding the
> > > lock a bit longer.
> >
> > Holding the lock doesn't affect this super slow vmx_set_tss_addr(), it
> > affects everything else that wants slots_lock.  Now, admittedly it's
> > extremely unlikely userspace is going to do KVM_SET_USER_MEMORY_REGION
> > in parallel, but that's not the point and it's not why I'm objecting to
> > holding the lock.
> >
> > Holding the lock implies protection that is *not* provided.  You and I
> > know it's not needed for copy_{to,from}_user(), but look how long it's
> > taken us to get on the same page.  A future KVM developer comes along,
> > sees this code, and thinks "oh, I need to hold slots_lock to dereference
> > a gfn", and propagates the unnecessary locking to some other code.
>
> At least for a user memory slot, we "need to hold slots_lock to
> dereference a gfn" (or srcu), right?

Gah, that was supposed to be "dereference a hva".  Yes, a gfn->hva lookup
requires slots_lock or the SRCU read lock.

> You know I'm suffering from jet lag today; I thought I was still fine,
> but now I'm starting to doubt it. :-)

Unintentional gaslighting.  Or was it? :-D
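
P.S. Just so we're picturing the same end state, something roughly along
these lines is what's being discussed.  This is an untested sketch, not the
actual patch; it assumes the refactor changes __x86_set_memory_region() to
return the hva of the private memslot (exact signatures and error handling
TBD):

static int init_rmode_tss(struct kvm *kvm, void __user *ua)
{
	const void *zero_page = (const void *)__va(page_to_phys(ZERO_PAGE(0)));
	u16 data;
	int i;

	/* No slots_lock/SRCU needed here, we already have the hva. */
	for (i = 0; i < 3; i++) {
		if (__copy_to_user(ua + PAGE_SIZE * i, zero_page, PAGE_SIZE))
			return -EFAULT;
	}

	data = TSS_BASE_SIZE + TSS_REDIRECTION_SIZE;
	if (__copy_to_user(ua + TSS_IOPB_BASE_OFFSET, &data, sizeof(u16)))
		return -EFAULT;

	data = ~0;
	if (__copy_to_user(ua + RMODE_TSS_SIZE - 1, &data, sizeof(u8)))
		return -EFAULT;

	return 0;
}

static int vmx_set_tss_addr(struct kvm *kvm, unsigned int addr)
{
	void __user *ua;

	if (enable_unrestricted_guest)
		return 0;

	/* slots_lock covers only the gfn->hva setup, not the copies. */
	mutex_lock(&kvm->slots_lock);
	ua = __x86_set_memory_region(kvm, TSS_PRIVATE_MEMSLOT, addr,
				     PAGE_SIZE * 3);
	mutex_unlock(&kvm->slots_lock);

	if (IS_ERR(ua))
		return PTR_ERR(ua);

	to_kvm_vmx(kvm)->tss_addr = addr;

	return init_rmode_tss(kvm, ua);
}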