On 05/23/2010 03:16 PM, Xiao Guangrong wrote:
Allow more pages to become unsync at shadow page (sp) creation time: if we
need to create a new shadow page for a gfn that does not allow unsync pages
(level > 1), we should first sync all of that gfn's unsync pages.
+/* @gfn should be write-protected at the call site */
+static void kvm_sync_pages(struct kvm_vcpu *vcpu, gfn_t gfn)
+{
+ struct hlist_head *bucket;
+ struct kvm_mmu_page *s;
+ struct hlist_node *node, *n;
+ unsigned index;
+ bool flush = false;
+
+ index = kvm_page_table_hashfn(gfn);
+ bucket = &vcpu->kvm->arch.mmu_page_hash[index];
+ hlist_for_each_entry_safe(s, node, n, bucket, hash_link) {
What about role.direct and role.invalid?
Well, a role.direct page cannot be unsync. But that's not something we want
to rely on.
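For concreteness, here is a sketch of the kind of filtering I mean for the
truncated loop above. This is hypothetical illustration, not the actual
patch; the role-bit checks are the ones queried above, and the
__kvm_sync_page()/kvm_mmu_zap_page() calls (and their signatures) are
assumptions based on the helpers named elsewhere in this thread:

```c
	hlist_for_each_entry_safe(s, node, n, bucket, hash_link) {
		/* skip pages that are not resync candidates */
		if (s->gfn != gfn || !s->unsync ||
		    s->role.direct || s->role.invalid)
			continue;

		/* only last-level pages may be unsync */
		WARN_ON(s->role.level != PT_PAGE_TABLE_LEVEL);
		if (s->role.word != vcpu->arch.mmu.base_role.word ||
		    !__kvm_sync_page(vcpu, s, false)) {
			kvm_mmu_zap_page(vcpu->kvm, s);
			continue;
		}
		flush = true;
	}
	if (flush)
		kvm_mmu_flush_tlb(vcpu);
```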
This patch looks good too.
Some completely unrelated ideas:
- replace mmu_zap_page() calls in __kvm_sync_page() by setting
role.invalid instead. This reduces problems with the hash list being
modified while we manipulate it.
- add a for_each_shadow_page_direct() { ... } and
for_each_shadow_page_indirect() { ... } to replace the
hlist_for_each_entry_safe()s.
- add kvm_tlb_gather() to reduce IPIs from kvm_mmu_zap_page()
- clear spte.accessed on speculative sptes (for example from invlpg) so
the swapper won't keep them in ram unnecessarily
Again, completely unrelated to this patch set; just writing them down so I
don't forget them and to get your opinion.
--
error compiling committee.c: too many arguments to function