On Wed, Jun 23, 2021, Paolo Bonzini wrote:
> On 22/06/21 19:56, Sean Christopherson wrote:
> > When creating a new upper-level shadow page, zap unsync shadow pages at
> > the same target gfn instead of attempting to sync the pages.  This
> > fixes a bug where an unsync shadow page could be sync'd with an
> > incompatible context, e.g. wrong smm, is_guest, etc... flags.  In
> > practice, the bug is relatively benign as sync_page() is all but
> > guaranteed to fail its check that the guest's desired gfn (for the
> > to-be-sync'd page) matches the current gfn associated with the shadow
> > page.  I.e. kvm_sync_page() would end up zapping the page anyways.
> >
> > Alternatively, __kvm_sync_page() could be modified to explicitly verify
> > the mmu_role of the unsync shadow page is compatible with the current
> > MMU context.  But, except for this specific case, __kvm_sync_page() is
> > called iff the page is compatible, e.g. the transient sync in
> > kvm_mmu_get_page() requires an exact role match, and the call from
> > kvm_mmu_sync_roots() is only synchronizing shadow pages from the
> > current MMU (which better be compatible or KVM has problems).  And as
> > described above, attempting to sync shadow pages when creating an
> > upper-level shadow page is unlikely to succeed, e.g. zero successful
> > syncs were observed when running Linux guests despite over a million
> > attempts.
>
> One issue, this WARN_ON may now trigger:
>
>         WARN_ON(!list_empty(&invalid_list));
>
> due to a kvm_mmu_prepare_zap_page that could have happened on an earlier
> iteration of the for_each_valid_sp.  Before your change, __kvm_sync_page
> would be called always before kvm_sync_pages could add anything to
> invalid_list.

Ah, I should have added a comment.  It took me a few minutes of staring
to remember why it can't fire.

The branch at (2), which adds to invalid_list, is taken if and only if
the new page is not a 4k page.

The branch at (3) is taken if and only if the existing page is a 4k
page, because only 4k pages can become unsync.

Because the shadow page's level is incorporated into its role, if the
level of the new page is >4k, the branch at (1) will be taken for all 4k
shadow pages.

Maybe something like this for a comment?

		/*
		 * Assert that the page was not zapped if the "sync" was
		 * successful.  Note, this cannot collide with the above
		 * zapping of unsync pages, as this point is reached iff
		 * the new page is a 4k page (only 4k pages can become
		 * unsync and the role check ensures identical levels),
		 * and zapping occurs iff the new page is NOT a 4k page.
		 */
		WARN_ON(!list_empty(&invalid_list));


1)	if (sp->role.word != role.word) {
		/*
		 * If the guest is creating an upper-level page, zap
		 * unsync pages for the same gfn.  While it's possible
		 * the guest is using recursive page tables, in all
		 * likelihood the guest has stopped using the unsync
		 * page and is installing a completely unrelated page.
		 * Unsync pages must not be left as is, because the new
		 * upper-level page will be write-protected.
		 */
2)		if (level > PG_LEVEL_4K && sp->unsync)
			kvm_mmu_prepare_zap_page(vcpu->kvm, sp,
						 &invalid_list);
		continue;
	}

	if (direct_mmu)
		goto trace_get_page;

3)	if (sp->unsync) {
		/*
		 * The page is good, but is stale.  "Sync" the page to
		 * get the latest guest state, but don't write-protect
		 * the page and don't mark it synchronized!  KVM needs
		 * to ensure the mapping is valid, but doesn't need to
		 * fully sync (write-protect) the page until the guest
		 * invalidates the TLB mapping.  This allows multiple
		 * SPs for a single gfn to be unsync.
		 *
		 * If the sync fails, the page is zapped.  If so, break
		 * in order to rebuild it.
		 */
		if (!kvm_sync_page(vcpu, sp, &invalid_list))
			break;

		WARN_ON(!list_empty(&invalid_list));
		kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
	}
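
To make the mutual exclusivity concrete, here is a minimal userspace
sketch (a stand-in, not kernel code: the union below mirrors only a
token subset of KVM's union kvm_mmu_page_role, and main() plays the
part of the two branches).  Because "level" is one of the bitfields
packed into role.word, the word comparison at (1) necessarily filters
out every existing page whose level differs from the new page's:

	#include <assert.h>
	#include <stdio.h>

	/* Simplified stand-in for a subset of union kvm_mmu_page_role. */
	union role {
		unsigned int word;
		struct {
			unsigned int level:4;	/* 1 == PG_LEVEL_4K */
			unsigned int direct:1;
			unsigned int smm:1;
			/* other role bits elided */
		};
	};

	int main(void)
	{
		union role new_role = { .word = 0 };
		union role sp_role = { .word = 0 };

		new_role.level = 2;	/* new upper-level (>4k) page */
		sp_role.level = 1;	/* existing 4k page, the only kind
					 * that can become unsync */

		/*
		 * The words differ because the levels differ, so a >4k new
		 * page takes branch (1) for every 4k page and never reaches
		 * branch (3), i.e. never reaches the WARN_ON after branch
		 * (2) has added to invalid_list.
		 */
		assert(new_role.word != sp_role.word);

		/*
		 * Conversely, reaching branch (3) means the words matched,
		 * and matching words imply matching levels, so the new page
		 * is also 4k and branch (2) cannot have fired.
		 */
		sp_role.level = new_role.level;
		assert(new_role.word == sp_role.word &&
		       new_role.level == sp_role.level);

		printf("role.word equality implies level equality\n");
		return 0;
	}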