On 15/02/2017 19:28, Cao, Lei wrote:
> +	spin_lock(&kvm->mmu_lock);
> +	kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot, offset, mask);
> +	spin_unlock(&kvm->mmu_lock);
> +
> +	while (mask) {
> +		clear_bit_le(offset + __ffs(mask), memslot->dirty_bitmap);
> +		mask &= mask - 1;
> +	}

These two steps should be done in the opposite order: clear the bits in
the dirty bitmap first, then re-enable dirty logging (write protection)
under mmu_lock. So far nothing I cannot fix on commit though (and this
is going to be material for 4.12 anyway).

Paolo