Re: [PATCH 6/8] KVM: PPC: E500: Implement MMU notifiers

On 07.08.2012, at 15:30, Avi Kivity <avi@xxxxxxxxxx> wrote:

> On 08/07/2012 01:57 PM, Alexander Graf wrote:
>> The e500 target has lived without mmu notifiers ever since it got
>> introduced, but fails the user space check for them when hugetlbfs is in use.
>> 
>> So in order to get that one working, implement mmu notifiers in a
>> reasonably dumb fashion and be happy. On embedded hardware, we almost
>> never end up with mmu notifier calls, since most people don't overcommit.
>> 
>> 
>> +static void kvmppc_check_requests(struct kvm_vcpu *vcpu)
>> +{
>> +#if defined(CONFIG_KVM_E500V2) || defined(CONFIG_KVM_E500MC)
>> +    if (vcpu->requests)
>> +        if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu))
>> +            kvmppc_core_flush_tlb(vcpu);
>> +#endif
>> +}
>> +
>> /*
>>  * Common checks before entering the guest world.  Call with interrupts
>>  * disabled.
>> @@ -485,12 +494,24 @@ static int kvmppc_prepare_to_enter(struct kvm_vcpu *vcpu)
>>            break;
>>        }
>> 
>> +        smp_mb();
>> +        kvmppc_check_requests(vcpu);
>> +
> 
> On x86 we do the requests processing while in normal preemptible
> context, then do an additional check for requests != 0 during guest
> entry.  This allows us to do sleepy things in request processing, and
> reduces the amount of work we do with interrupts disabled.

Hrm. We could do the same I guess. Let me give it a try.
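
Roughly something along these lines, I suppose (completely untested sketch; kvmppc_core_prepare_to_enter() and the signal/exit handling are left out to keep it short, and unlike today's code the caller would enter with interrupts still enabled):

static int kvmppc_prepare_to_enter(struct kvm_vcpu *vcpu)
{
	int r = 1;

	while (true) {
		/*
		 * Process requests while we may still sleep, so request
		 * handlers are free to do expensive work.
		 */
		if (vcpu->requests)
			kvmppc_check_requests(vcpu);

		local_irq_disable();

		/*
		 * Recheck with interrupts off: if anything new came in
		 * while we were preemptible, go around once more.
		 */
		if (need_resched() || signal_pending(current) ||
		    vcpu->requests) {
			local_irq_enable();
			continue;
		}

		/* Going into guest context! Yay! */
		vcpu->mode = IN_GUEST_MODE;
		smp_wmb();
		break;
	}

	return r;
}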

> 
>>        if (kvmppc_core_prepare_to_enter(vcpu)) {
>>            /* interrupts got enabled in between, so we
>>               are back at square 1 */
>>            continue;
>>        }
>> 
>> +        if (vcpu->mode == EXITING_GUEST_MODE) {
>> +            r = 1;
>> +            break;
>> +        }
>> +
>> +        /* Going into guest context! Yay! */
>> +        vcpu->mode = IN_GUEST_MODE;
>> +        smp_wmb();
>> +
>>        break;
>>    }
>> 
>> @@ -560,6 +581,8 @@ int kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
>> #endif
>> 
>>    kvm_guest_exit();
>> +    vcpu->mode = OUTSIDE_GUEST_MODE;
>> +    smp_wmb();
>> 
>> +/************* MMU Notifiers *************/
>> +
>> +int kvm_unmap_hva(struct kvm *kvm, unsigned long hva)
>> +{
>> +    /* Is this a guest page? */
>> +    if (!hva_to_memslot(kvm, hva))
>> +        return 0;
>> +
>> +    /*
>> +     * Flush all shadow tlb entries everywhere. This is slow, but
>> +     * we are 100% sure that we catch the page that is to be unmapped
>> +     */
>> +    kvm_flush_remote_tlbs(kvm);
> 
> Wow.

Yeah, cool, eh? It sounds worse than it is. Usually when we need to page out, we're under memory pressure, so we'd get called multiple times to unmap different pages. If we just drop all shadow tlb entries, we also free a lot of memory that can then be paged out without further callbacks.

> 
>> +
>> +    return 0;
>> +}
>> +
> 
> Where do you drop the reference count when installing a page in a shadow
> tlb entry?

Which reference count? Essentially, the remote tlb flush calls kvmppc_e500_prov_release() on all currently mapped shadow tlb entries. Are we missing something beyond that?
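
For illustration, the pairing I mean looks roughly like this (the helper names and the struct are made up for the example; the real per-entry bookkeeping lives in the e500 tlb code):

#include <linux/kvm_host.h>

/* Stand-in for the per-shadow-entry bookkeeping the e500 code keeps. */
struct example_shadow_ref {
	pfn_t pfn;
	bool valid;
};

static void example_map_shadow_entry(struct kvm *kvm, gfn_t gfn,
				     struct example_shadow_ref *ref)
{
	/* gfn_to_pfn() takes a reference on the backing page. */
	ref->pfn = gfn_to_pfn(kvm, gfn);
	ref->valid = true;
	/* ... write the shadow TLB entry pointing at ref->pfn ... */
}

static void example_release_shadow_entry(struct example_shadow_ref *ref)
{
	if (ref->valid) {
		/* Drops the reference taken by gfn_to_pfn() at map time. */
		kvm_release_pfn_clean(ref->pfn);
		ref->valid = false;
	}
}

So the flush triggered from kvm_unmap_hva() walks all currently valid shadow entries and calls the release side, which is where the reference taken at map time gets dropped.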


Alex

> 
> 
> -- 
> error compiling committee.c: too many arguments to function

