(2012/02/06 12:40), Xiao Guangrong wrote:
On 02/05/2012 07:42 PM, Takuya Yoshikawa wrote:
From: Takuya Yoshikawa <yoshikawa.takuya@xxxxxxxxxxxxx>
This patch fixes a race introduced by:
commit 95d4c16ce78cb6b7549a09159c409d52ddd18dae
KVM: Optimize dirty logging by rmap_write_protect()
While protecting pages for dirty logging, other threads may also try
to protect a page in mmu_sync_children() or kvm_mmu_get_page().
In such a case, because get_dirty_log releases mmu_lock before flushing
the TLBs, the following race can happen:
          A (get_dirty_log)         B (another thread)

            lock(mmu_lock)
            clear pte.w
            unlock(mmu_lock)
                                    lock(mmu_lock)
                                    pte.w is already cleared
                                    unlock(mmu_lock)
                                    skip TLB flush
                                    return
            ...
            TLB flush
Though thread B assumes the page has already been protected when it
returns, the stale TLB entry still allows the guest to write to the page,
breaking that assumption.
This patch fixes the problem by making get_dirty_log hold the mmu_lock
until it has flushed the TLBs.
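In essence, the fix keeps the TLB flush inside the mmu_lock critical
section.  A minimal sketch of that pattern follows; the iterator and
write-protect helper names are illustrative, not the actual
arch/x86/kvm/mmu.c code:

    /*
     * Sketch only: write-protect the dirty-logged pages and flush
     * remote TLBs while still holding mmu_lock, so any other thread
     * that later finds pte.w already cleared can rely on stale TLB
     * entries being gone.
     */
    spin_lock(&kvm->mmu_lock);

    for_each_dirty_gfn(memslot, gfn)                /* hypothetical iterator */
            write_protect_gfn(kvm, memslot, gfn);   /* hypothetical helper */

    kvm_flush_remote_tlbs(kvm);     /* flush before dropping the lock */

    spin_unlock(&kvm->mmu_lock);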
I do not think this is a problem, since the dirty page is logged when
the writable spte is set, and at the end of get_dirty_log all TLBs are
always flushed.
The victim is not GET_DIRTY_LOG but thread B; it needs to ensure the page
is protected before returning.
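To make that concrete, here is a simplified sketch of what a concurrent
protector such as mmu_sync_children() assumes; is_writable_pte() exists
in KVM's mmu code, while clear_writable() is a hypothetical helper used
only for illustration:

    /*
     * Sketch only: thread B skips the TLB flush when the spte is
     * already read-only.  That is safe only if whoever cleared the
     * writable bit flushed the TLBs before releasing mmu_lock.
     */
    static void protect_spte(struct kvm *kvm, u64 *sptep)
    {
            if (!is_writable_pte(*sptep))
                    return;                 /* already protected: no flush needed */

            clear_writable(sptep);          /* hypothetical helper */
            kvm_flush_remote_tlbs(kvm);
    }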
Thanks,
Takuya