Izik Eidus wrote:
change the dirty page tracking to work with the dirty bit instead of page faults.
Right now dirty page tracking works with the help of page faults: when we
want to track a page for dirtiness, we write-protect it and mark it dirty
when we get a write page fault. This patch moves the code to looking at the
dirty bit of the spte.
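The scheme the patch proposes can be sketched in plain userspace C: instead of write-protecting pages and recording dirtiness in the fault handler, the scan walks a table of sptes, tests the hardware-set dirty bit, records the hit, and clears the bit for the next round. The bit position and all names here are illustrative, not taken from the patch.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative spte layout: bit 6 is the hardware dirty bit (as on x86). */
#define SPTE_DIRTY (1ULL << 6)

/* Dirty-bit tracking: scan sptes, record which entries were written since
 * the last scan, and clear the bit so the next scan starts fresh. No write
 * protection is involved, so no faults are taken on the write path. */
static size_t scan_dirty(uint64_t *sptes, size_t n, size_t *dirty_idx)
{
    size_t count = 0;
    for (size_t i = 0; i < n; i++) {
        if (sptes[i] & SPTE_DIRTY) {
            dirty_idx[count++] = i;
            sptes[i] &= ~SPTE_DIRTY;
        }
    }
    return count;
}
```

The trade-off the thread discusses follows directly from this loop: writes become cheap, but every scan touches every spte whether it is dirty or not.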
I'm concerned about performance during the later stages of live
migration. Even if only 1000 pages are dirty, you still have to look at
2,000,000 or more ptes (for an 8GB guest). That's a lot of overhead.
I think we need to use the page table hierarchy: write-protect the upper
page table entries so we know which lower-level page tables we need to look at.
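The suggestion can be sketched with a toy two-level structure: an upper-level entry gates a whole leaf table, so a scan only descends into leaf tables whose upper entry shows a write happened underneath, skipping clean subtrees entirely. This is a minimal sketch of the idea, with invented names and a toy dirty bit, not KVM code.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define TOY_ENTRIES 4
#define TOY_DIRTY (1ULL << 6)

/* Toy two-level table: upper[i] gates leaf[i][]. A write through upper
 * entry i would set its dirty bit, so the scan can skip TOY_ENTRIES leaf
 * ptes at once when the upper entry is clean. */
struct toy_mmu {
    uint64_t upper[TOY_ENTRIES];
    uint64_t leaf[TOY_ENTRIES][TOY_ENTRIES];
};

static size_t scan_hier(struct toy_mmu *m)
{
    size_t dirty_pages = 0;
    for (size_t i = 0; i < TOY_ENTRIES; i++) {
        if (!(m->upper[i] & TOY_DIRTY))
            continue;                   /* whole leaf table untouched */
        m->upper[i] &= ~TOY_DIRTY;
        for (size_t j = 0; j < TOY_ENTRIES; j++) {
            if (m->leaf[i][j] & TOY_DIRTY) {
                m->leaf[i][j] &= ~TOY_DIRTY;
                dirty_pages++;
            }
        }
    }
    return dirty_pages;
}
```

With 1000 dirty pages out of 2,000,000 ptes, this kind of pruning means the scan cost tracks the number of dirty subtrees rather than the size of the guest.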
+int is_dirty_and_clean_rmapp(struct kvm *kvm, unsigned long *rmapp)
+{
+ u64 *spte;
+ int dirty = 0;
+
+ if (!shadow_dirty_mask)
+ return 0;
+
+ spte = rmap_next(kvm, rmapp, NULL);
+ while (spte) {
+ if (*spte & PT_DIRTY_MASK) {
+ set_shadow_pte(spte, (*spte &= ~PT_DIRTY_MASK) |
+ SPTE_DONT_DIRTY);
Keep using shadow_dirty_mask here for consistency.
kvm_flush_remote_tlbs(kvm);
+ for (i = 0; i < PT64_ENT_PER_PAGE; ++i) {
+ if (sp->spt[i] & PT_DIRTY_MASK)
+ mark_page_dirty(kvm, sp->gfns[i]);
+ }
Same comment here: shadow_dirty_mask, not PT_DIRTY_MASK.
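The point behind both comments can be sketched in userspace C: test against a runtime mask set up for the current paging mode rather than a hardcoded constant, so the same scan is automatically a no-op when the mode provides no dirty bit (mask == 0). The mask variable and helper below are illustrative stand-ins for KVM's shadow_dirty_mask and mark_page_dirty(), not the patch's code.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Set at setup time depending on the paging mode; 0 means the hardware
 * provides no dirty bit in shadow ptes (stand-in for shadow_dirty_mask). */
static uint64_t shadow_dirty_mask;

static size_t mark_dirty_pages(uint64_t *spt, size_t n, int *bitmap)
{
    size_t count = 0;

    if (!shadow_dirty_mask)
        return 0;                       /* no dirty bit: nothing to scan */
    for (size_t i = 0; i < n; i++) {
        if (spt[i] & shadow_dirty_mask) {
            spt[i] &= ~shadow_dirty_mask;
            bitmap[i] = 1;              /* stand-in for mark_page_dirty() */
            count++;
        }
    }
    return count;
}
```

Keying everything off one mask also keeps the early-out at the top of is_dirty_and_clean_rmapp() and the scan loop agreeing on which bit they mean.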
@@ -2785,6 +2790,8 @@ static struct kvm_x86_ops svm_x86_ops = {
.set_tss_addr = svm_set_tss_addr,
.get_tdp_level = get_npt_level,
.get_mt_mask = svm_get_mt_mask,
+
+ .dirty_bit_support = svm_dirty_bit_support,
};
Just use shadow_dirty_mask != 0.
+static int vmx_dirty_bit_support(void)
+{
+ return false;
+}
It's false only when ept is enabled.
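Putting the two review comments together, the per-vendor hooks could collapse to expressions like the following. This is a sketch of what the comments ask for, assuming an `enable_ept` flag with the usual meaning; it is not code from the patch.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative globals standing in for KVM module state. */
static uint64_t shadow_dirty_mask = 1ULL << 6;
static int enable_ept;          /* vmx only: nonzero when EPT is in use */

/* svm: dirty bit is available whenever the shadow mode defines one. */
static int svm_dirty_bit_support(void)
{
    return shadow_dirty_mask != 0;
}

/* vmx: per the review, support is absent only when EPT is enabled
 * (EPT ptes on this hardware generation carry no dirty bit). */
static int vmx_dirty_bit_support(void)
{
    return enable_ept ? 0 : shadow_dirty_mask != 0;
}
```

Either way the generic code only ever asks one question, so keeping both hooks as thin predicates over existing state avoids a second copy of the mode logic.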
--
error compiling committee.c: too many arguments to function