On 06/21/2010 04:44 PM, Alexander Graf wrote:
Currently the shadow paging code keeps an array of entries it knows about. Whenever the guest invalidates an entry, we loop through that array, trying to invalidate matching parts. While this is a really simple implementation, it is probably the most inefficient one possible. So instead, let's keep an array of lists around that are indexed by a hash. This way each PTE can be added by 4 list_add and removed by 4 list_del invocations, and the search only needs to loop through entries that share the same hash. This patch implements said lookup and exports generic functions that both the 32-bit and 64-bit backend can use.
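For readers new to the pattern the description outlines: each shadow PTE gets linked into several hash-indexed lists, so a flush only walks the one short chain whose hash matches, instead of scanning the whole array. Below is a minimal, self-contained sketch of that idea with a single list; all names here (entry, hash_eaddr, NUM_BUCKETS) are made up for illustration and are not the patch's.

#include <stdio.h>
#include <stdlib.h>

#define HASH_BITS	4
#define NUM_BUCKETS	(1 << HASH_BITS)

struct entry {
	unsigned long eaddr;
	struct entry *next;
};

static struct entry *buckets[NUM_BUCKETS];

/* Toy stand-in for hash_64(): bucket chosen from the page number. */
static unsigned int hash_eaddr(unsigned long eaddr)
{
	return (eaddr >> 12) & (NUM_BUCKETS - 1);
}

static void add_entry(unsigned long eaddr)
{
	unsigned int h = hash_eaddr(eaddr);
	struct entry *e = malloc(sizeof(*e));

	e->eaddr = eaddr;
	e->next = buckets[h];
	buckets[h] = e;
}

/* Invalidation only visits entries that share the hash. */
static void invalidate(unsigned long eaddr)
{
	struct entry **pp = &buckets[hash_eaddr(eaddr)];

	while (*pp) {
		if ((*pp)->eaddr == eaddr) {
			struct entry *dead = *pp;
			*pp = dead->next;
			free(dead);
		} else {
			pp = &(*pp)->next;
		}
	}
}

int main(void)
{
	add_entry(0x1000);
	add_entry(0x11000);	/* lands in the same bucket with this toy hash */
	invalidate(0x1000);	/* walks one bucket, not every entry */
	printf("done\n");
	return 0;
}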
Mind explaining the 'all' list in there?
+
+static inline u64 kvmppc_mmu_hash_pte(u64 eaddr) {
+	return hash_64(eaddr >> PTE_SIZE, HPTEG_HASH_BITS);
+}
+
+static inline u64 kvmppc_mmu_hash_vpte(u64 vpage) {
+	return hash_64(vpage & 0xfffffffffULL, HPTEG_HASH_BITS);
+}
+
+static inline u64 kvmppc_mmu_hash_vpte_long(u64 vpage) {
+	return hash_64((vpage & 0xffffff000ULL) >> 12, HPTEG_HASH_BITS);
+}
Please use ordinary formatting for the functions above.
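Presumably what's wanted is the usual kernel style, with the opening brace of a function definition on its own line; the first helper would then read (same body, only reformatted):

static inline u64 kvmppc_mmu_hash_pte(u64 eaddr)
{
	return hash_64(eaddr >> PTE_SIZE, HPTEG_HASH_BITS);
}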
+/* Flush with mask 0xffffff000 */
+static void kvmppc_mmu_pte_vflush_long(struct kvm_vcpu *vcpu, u64 guest_vp)
+{
+	struct list_head *list;
+	struct hpte_cache *pte, *tmp;
+	u64 vp_mask = 0xffffff000ULL;
+
+	list = &vcpu->arch.hpte_hash_vpte_long[kvmppc_mmu_hash_vpte_long(guest_vp)];
+
+	/* No entries to flush */
+	if (!list)
+		return;
+
+	/* Check the list for matching entries */
+	list_for_each_entry_safe(pte, tmp, list, list_vpte_long)
+		/* Jump over the helper entry */
+		if (&pte->list_vpte_long == list)
+			continue;
+
+		if ((pte->pte.vpage & vp_mask) == guest_vp)
+			invalidate_pte(vcpu, pte);
+}
C wants braces around blocks.
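That is, presumably something like this (same statements, just braced so that both tests actually run for every list entry, rather than only the first if being part of the loop body):

	list_for_each_entry_safe(pte, tmp, list, list_vpte_long) {
		/* Jump over the helper entry */
		if (&pte->list_vpte_long == list)
			continue;

		if ((pte->pte.vpage & vp_mask) == guest_vp)
			invalidate_pte(vcpu, pte);
	}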