Re: [PATCH] KVM: PPC: Add generic hpte management functions

Avi Kivity wrote:
> On 06/28/2010 12:55 PM, Alexander Graf wrote:
>> Avi Kivity wrote:
>>   
>>> On 06/28/2010 12:27 PM, Alexander Graf wrote:
>>>     
>>>>> Am I looking at old code?
>>>>>          
>>>>
>>>> Apparently. Check book3s_mmu_*.c
>>>>        
>>> I don't have that pattern.
>>>      
>> It's in this patch.
>>    
>
> Yes.  Silly me.
>
>>> +static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache
>>> *pte)
>>> +{
>>> +    dprintk_mmu("KVM: Flushing SPT: 0x%lx (0x%llx) ->  0x%llx\n",
>>> +            pte->pte.eaddr, pte->pte.vpage, pte->host_va);
>>> +
>>> +    /* Different for 32 and 64 bit */
>>> +    kvmppc_mmu_invalidate_pte(vcpu, pte);
>>> +
>>> +    if (pte->pte.may_write)
>>> +        kvm_release_pfn_dirty(pte->pfn);
>>> +    else
>>> +        kvm_release_pfn_clean(pte->pfn);
>>> +
>>> +    list_del(&pte->list_pte);
>>> +    list_del(&pte->list_vpte);
>>> +    list_del(&pte->list_vpte_long);
>>> +    list_del(&pte->list_all);
>>> +
>>> +    kmem_cache_free(vcpu->arch.hpte_cache, pte);
>>> +}
>>> +
>>>      
>
> (that's the old one with list_all - better check what's going on here)

Yeah, I just searched my inbox for the first patch. Obviously it was the
old version :(.

>
>
>>>>> (another difference is using struct hlist_head instead of list_head,
>>>>> which I recommend since it saves space)
>>>>>          
>>>> Hrm. I thought about this quite a bit before too, but that makes
>>>> invalidation more complicated, no? We always need to remember the
>>>> previous entry in a list.
>>>>        
>>> hlist_for_each_entry_safe() does that.
>>>      
>> Oh - very nice. So all I need to do is pass the previous list entry to
>> invalidate_pte too and I'm good. I guess I'll give it a shot.
>>    
>
> No, just the for_each cursor.
>
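[For reference, a minimal sketch of the hlist-based walk being discussed,
keeping the invalidate_pte()/list_pte names from the quoted patch; the
per-vcpu chain head vcpu->arch.hpte_hash_pte[i] is an assumed name, and
invalidate_pte() would then use hlist_del() instead of list_del():

	struct hpte_cache *pte;
	struct hlist_node *node, *tmp;

	/* "tmp" caches the next node, so invalidate_pte() may unhash the
	 * current entry without corrupting the walk. */
	hlist_for_each_entry_safe(pte, node, tmp,
				  &vcpu->arch.hpte_hash_pte[i], list_pte)
		invalidate_pte(vcpu, pte);
]
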
>>> Less and simpler code, better reporting through slabtop, less wastage
>>> of partially allocated slab pages.
>>>      
>> But it also means that one VM can spill the global slab cache and kill
>> another VM's mm performance, no?
>>    
>
> What do you mean by spill?
>
> btw, in the midst of the nit-picking frenzy I forgot to ask how the
> individual hash chain lengths as well as the per-vm allocation were
> limited.
>
> On x86 we have a per-vm limit and we allow the mm shrinker to reduce
> shadow mmu data structures dynamically.
>

Very simple. I keep an int with the number of allocated entries around,
and if that hits a #define'd threshold, I flush all shadow pages.
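
[A rough sketch of that scheme; the count field, threshold name and flush
helper below are illustrative placeholders, not necessarily what the patch
uses:

	/* On inserting a new shadow PTE: if the per-vcpu cache has hit its
	 * limit, drop all shadow pages rather than evicting selectively. */
	static void kvmppc_mmu_hpte_cache_map(struct kvm_vcpu *vcpu,
					      struct hpte_cache *pte)
	{
		if (vcpu->arch.hpte_cache_count >= HPTEG_CACHE_NUM)
			kvmppc_mmu_pte_flush_all(vcpu);

		vcpu->arch.hpte_cache_count++;
		/* ... add pte to the various hash chains ... */
	}
]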


Alex
