Re: [PATCH 29/38] KVM: PPC: Book3S: PR: Rework irq disabling

On 17.08.2012, at 23:47, Benjamin Herrenschmidt wrote:

>> 
>> +/* Please call after prepare_to_enter. This function puts the lazy ee state
>> +   back to normal mode, without actually enabling interrupts. */
>> +static inline void kvmppc_lazy_ee_enable(void)
>> +{
>> +#ifdef CONFIG_PPC64
>> +	/* Only need to enable IRQs by hard enabling them after this */
>> +	local_paca->irq_happened = 0;
>> +	local_paca->soft_enabled = 1;
>> +#endif
>> +}
> 
> Smells like the above is the right spot for trace_hardirqs_on().

Hrm. Ok :).
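
Something like this, then? Just a sketch on top of the helper above, untested:

static inline void kvmppc_lazy_ee_enable(void)
{
#ifdef CONFIG_PPC64
	/* Only need to enable IRQs by hard enabling them after this */
	local_paca->irq_happened = 0;
	local_paca->soft_enabled = 1;
	/* tell the irq tracer / lockdep that we're soft-enabled from here on */
	trace_hardirqs_on();
#endif
}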

> 
>> -		__hard_irq_disable();
>> +		local_irq_disable();
>> 		if (kvmppc_prepare_to_enter(vcpu)) {
>> -			/* local_irq_enable(); */
>> +			local_irq_enable();
>> 			run->exit_reason = KVM_EXIT_INTR;
>> 			r = -EINTR;
>> 		} else {
>> 			/* Going back to guest */
>> 			kvm_guest_enter();
>> +			kvmppc_lazy_ee_enable();
>> 		}
>> 	}
> 
> You should probably do kvmppc_lazy_ee_enable() before guest enter,
> so the CPU doesn't appear to the rest of the world to have
> interrupts disabled while it's in the guest.

I don't think I understand. The patch adds kvmppc_lazy_ee_enable() right before guest entry here, no?
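
Or do you mean the ordering relative to kvm_guest_enter()? I.e. swap the two calls, roughly like this (just sketching how I read your comment):

		} else {
			/* Going back to guest */
			kvmppc_lazy_ee_enable();
			kvm_guest_enter();
		}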

> 
>> @@ -1066,8 +1062,6 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
>> #endif
>> 	ulong ext_msr;
>> 
>> -	preempt_disable();
>> -
>> 	/* Check if we can run the vcpu at all */
>> 	if (!vcpu->arch.sane) {
>> 		kvm_run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
>> @@ -1081,9 +1075,9 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
>> 	 * really did time things so badly, then we just exit again due to
>> 	 * a host external interrupt.
>> 	 */
>> -	__hard_irq_disable();
>> +	local_irq_disable();
>> 	if (kvmppc_prepare_to_enter(vcpu)) {
>> -		__hard_irq_enable();
>> +		local_irq_enable();
>> 		kvm_run->exit_reason = KVM_EXIT_INTR;
>> 		ret = -EINTR;
>> 		goto out;
>> @@ -1122,7 +1116,7 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
>> 	if (vcpu->arch.shared->msr & MSR_FP)
>> 		kvmppc_handle_ext(vcpu, BOOK3S_INTERRUPT_FP_UNAVAIL, MSR_FP);
>> 
>> -	kvm_guest_enter();
>> +	kvmppc_lazy_ee_enable();
> 
> Same. BTW, why do you have two enter paths? Smells like a recipe for
> disaster :-)

Because this way we can keep r14-r31 in registers ;). But here too we call kvmppc_lazy_ee_enable() right before going into guest context, no?
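
For reference, with this patch the path here boils down to the following (condensed from the hunks above):

	local_irq_disable();
	if (kvmppc_prepare_to_enter(vcpu)) {
		local_irq_enable();
		kvm_run->exit_reason = KVM_EXIT_INTR;
		ret = -EINTR;
		goto out;
	}
	[...]
	kvmppc_lazy_ee_enable();	/* lazy state says enabled, EE still hard off */
	ret = __kvmppc_vcpu_run(kvm_run, vcpu);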

I can't shake off the feeling I don't fully understand your comments :)


Alex

> 
>> 	ret = __kvmppc_vcpu_run(kvm_run, vcpu);
>> 
>> @@ -1157,7 +1151,6 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
>> 
>> out:
>> 	vcpu->mode = OUTSIDE_GUEST_MODE;
>> -	preempt_enable();
>> 	return ret;
>> }
>> 
>> diff --git a/arch/powerpc/kvm/book3s_rmhandlers.S b/arch/powerpc/kvm/book3s_rmhandlers.S
>> index 9ecf6e3..b2f8258 100644
>> --- a/arch/powerpc/kvm/book3s_rmhandlers.S
>> +++ b/arch/powerpc/kvm/book3s_rmhandlers.S
>> @@ -170,20 +170,21 @@ kvmppc_handler_skip_ins:
>>  * Call kvmppc_handler_trampoline_enter in real mode
>>  *
>>  * On entry, r4 contains the guest shadow MSR
>> + * MSR.EE has to be 0 when calling this function
>>  */
>> _GLOBAL(kvmppc_entry_trampoline)
>> 	mfmsr	r5
>> 	LOAD_REG_ADDR(r7, kvmppc_handler_trampoline_enter)
>> 	toreal(r7)
>> 
>> -	li	r9, MSR_RI
>> -	ori	r9, r9, MSR_EE
>> -	andc	r9, r5, r9	/* Clear EE and RI in MSR value */
>> 	li	r6, MSR_IR | MSR_DR
>> -	ori	r6, r6, MSR_EE
>> -	andc	r6, r5, r6	/* Clear EE, DR and IR in MSR value */
>> -	MTMSR_EERI(r9)		/* Clear EE and RI in MSR */
>> -	mtsrr0	r7		/* before we set srr0/1 */
>> +	andc	r6, r5, r6	/* Clear DR and IR in MSR value */
>> +	/*
>> +	 * Set EE in HOST_MSR so that it's enabled when we get into our
>> +	 * C exit handler function
>> +	 */
>> +	ori	r5, r5, MSR_EE
>> +	mtsrr0	r7
>> 	mtsrr1	r6
>> 	RFI
>> 
>> diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
>> index aae535f..2bd190c 100644
>> --- a/arch/powerpc/kvm/booke.c
>> +++ b/arch/powerpc/kvm/booke.c
>> @@ -486,6 +486,7 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
>> 		ret = -EINTR;
>> 		goto out;
>> 	}
>> +	kvmppc_lazy_ee_enable();
>> 
>> 	kvm_guest_enter();
> 
> Same.
> 
>> @@ -955,6 +956,7 @@ int kvmppc_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu,
>> 		} else {
>> 			/* Going back to guest */
>> 			kvm_guest_enter();
>> +			kvmppc_lazy_ee_enable();
>> 		}
>> 	}
> 
> Same.
> 
>> 
>> diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
>> index 053bfef..545c183 100644
>> --- a/arch/powerpc/kvm/powerpc.c
>> +++ b/arch/powerpc/kvm/powerpc.c
>> @@ -30,6 +30,7 @@
>> #include <asm/kvm_ppc.h>
>> #include <asm/tlbflush.h>
>> #include <asm/cputhreads.h>
>> +#include <asm/irqflags.h>
>> #include "timing.h"
>> #include "../mm/mmu_decl.h"
>> 
>> @@ -93,6 +94,19 @@ int kvmppc_prepare_to_enter(struct kvm_vcpu *vcpu)
>> 			break;
>> 		}
>> 
>> +#ifdef CONFIG_PPC64
>> +		/* lazy EE magic */
>> +		hard_irq_disable();
>> +		if (lazy_irq_pending()) {
>> +			/* Got an interrupt in between, try again */
>> +			local_irq_enable();
>> +			local_irq_disable();
>> +			continue;
>> +		}
>> +
>> +		trace_hardirqs_on();
>> +#endif
> 
> And move the trace out as I mentioned.
> 
> Cheers,
> Ben.
> 
>> 		/* Going into guest context! Yay! */
>> 		vcpu->mode = IN_GUEST_MODE;
>> 		smp_wmb();
> 
> 
