Re: [PATCH 3/3] KVM: MMU: Consolidate common code in mmu_free_roots()

On Fri, May 10, 2013 at 09:43:50AM +0800, Xiao Guangrong wrote:
> On 05/10/2013 09:05 AM, Takuya Yoshikawa wrote:
> > On Thu, 09 May 2013 18:11:31 +0800
> > Xiao Guangrong <xiaoguangrong@xxxxxxxxxxxxxxxxxx> wrote:
> > 
> >> On 05/09/2013 02:46 PM, Takuya Yoshikawa wrote:
> >>> By making the last three statements common to both if/else cases, the
> >>> symmetry between the locking and unlocking becomes clearer. One note
> >>> here is that VCPU's root_hpa does not need to be protected by mmu_lock.
> >>>
> >>> Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@xxxxxxxxxxxxx>
> >>> ---
> >>>  arch/x86/kvm/mmu.c |   39 +++++++++++++++++++--------------------
> >>>  1 files changed, 19 insertions(+), 20 deletions(-)
> >>
> >> I DO NOT think it makes anything better.
> >>
> > 
> > Why do we need to do the same thing differently in two paths?
> 
> They are different cases: one is long mode, the other is PAE mode.
> Returning from one case and then handling the other is a commonly used
> pattern; it reduces the indentation and makes the code easier to follow.
> 
I agree that this is mostly a code style issue, and that with Takuya's patch
the indentation gets deeper. Also, the structure of mmu_free_roots() currently
resembles mmu_alloc_shadow_roots(); the latter has the same if () { return }
pattern. I also agree with Takuya that the locking symmetry can be improved.
What about something like this (untested):

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 40d7b2d..f8ca2f3 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2869,22 +2869,25 @@ static void mmu_free_roots(struct kvm_vcpu *vcpu)
 
 	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
 		return;
-	spin_lock(&vcpu->kvm->mmu_lock);
+
 	if (vcpu->arch.mmu.shadow_root_level == PT64_ROOT_LEVEL &&
 	    (vcpu->arch.mmu.root_level == PT64_ROOT_LEVEL ||
 	     vcpu->arch.mmu.direct_map)) {
 		hpa_t root = vcpu->arch.mmu.root_hpa;
 
+		spin_lock(&vcpu->kvm->mmu_lock);
 		sp = page_header(root);
 		--sp->root_count;
 		if (!sp->root_count && sp->role.invalid) {
 			kvm_mmu_prepare_zap_page(vcpu->kvm, sp, &invalid_list);
 			kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
 		}
-		vcpu->arch.mmu.root_hpa = INVALID_PAGE;
 		spin_unlock(&vcpu->kvm->mmu_lock);
+		vcpu->arch.mmu.root_hpa = INVALID_PAGE;
 		return;
 	}
+
+	spin_lock(&vcpu->kvm->mmu_lock);
 	for (i = 0; i < 4; ++i) {
 		hpa_t root = vcpu->arch.mmu.pae_root[i];
 
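For what it's worth, here is a toy, userspace-only sketch of the two shapes
being debated; toy_lock and the two helpers below are made-up names for
illustration, not KVM code:

/*
 * Toy illustration only -- not kernel code.  "toy_lock" and both helper
 * names are invented for this example.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t toy_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * Shape A (roughly what the diff above does): the lock/unlock pair sits
 * inside each branch, with an early return for the long-mode case.
 */
static void free_roots_early_return(bool long_mode)
{
	if (long_mode) {
		pthread_mutex_lock(&toy_lock);
		/* ... drop the single long-mode root ... */
		pthread_mutex_unlock(&toy_lock);
		/* per-vcpu state that needs no lock is reset here */
		return;
	}

	pthread_mutex_lock(&toy_lock);
	/* ... drop the four PAE roots ... */
	pthread_mutex_unlock(&toy_lock);
	/* ... and reset the per-vcpu state here as well */
}

/*
 * Shape B (roughly what Takuya's patch does, as I read the changelog):
 * one lock/unlock pair, and the two branches share the tail.
 */
static void free_roots_shared_tail(bool long_mode)
{
	pthread_mutex_lock(&toy_lock);
	if (long_mode) {
		/* ... drop the single long-mode root ... */
	} else {
		/* ... drop the four PAE roots ... */
	}
	pthread_mutex_unlock(&toy_lock);
	/* per-vcpu state that needs no lock is reset once, after unlock */
}

int main(void)
{
	free_roots_early_return(true);
	free_roots_shared_tail(false);
	printf("done\n");
	return 0;
}

Either shape keeps the lock/unlock pairing easy to see; the disagreement is
really about whether the early return or the shared tail reads better.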

--
			Gleb.