Hi Paolo,
On 08/04/2017 09:05 AM, Paolo Bonzini wrote:
On 04/08/2017 02:30, Brijesh Singh wrote:
On 8/2/17 5:42 AM, Paolo Bonzini wrote:
On 01/08/2017 15:36, Brijesh Singh wrote:
The flow is:

  hardware walks page table; L2 page table points to read only memory
  -> pf_interception (code =
  -> kvm_handle_page_fault (need_unprotect = false)
  -> kvm_mmu_page_fault
  -> paging64_page_fault (for example)
     -> try_async_pf
        map_writable set to false
     -> paging64_fetch(write_fault = true, map_writable = false,
                       prefault = false)
        -> mmu_set_spte(speculative = false, host_writable = false,
                        write_fault = true)
           -> set_spte
              mmu_need_write_protect returns true
              return true
           write_fault == true -> set emulate = true
           return true
        return true
     return true
  emulate
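
Just to make that decision point concrete, a compact, self-contained C
model of the tail of the flow might look like the sketch below. The
names mirror the trace; the bodies and signatures are simplified
stand-ins, not the kernel's actual code. It only shows how a write
fault against a write-protected gfn propagates emulate = true back up:

  #include <stdbool.h>
  #include <stdio.h>

  static bool mmu_need_write_protect(void)
  {
      /* the L2 page table maps read-only memory, so the spte
       * must stay write-protected */
      return true;
  }

  static bool set_spte(void)
  {
      return mmu_need_write_protect();   /* "return true" in the trace */
  }

  static bool mmu_set_spte(bool write_fault)
  {
      bool emulate = false;

      if (set_spte() && write_fault)
          emulate = true;   /* write_fault == true -> set emulate = true */
      return emulate;
  }

  int main(void)
  {
      /* paging64_fetch(write_fault = true, map_writable = false, ...) */
      printf("emulate = %d\n", mmu_set_spte(true));
      return 0;
  }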
Without this patch, emulation would have called

  ..._gva_to_gpa_nested
  -> translate_nested_gpa
  -> paging64_gva_to_gpa
  -> paging64_walk_addr
  -> paging64_walk_addr_generic
     set fault (nested_page_fault=true)

and then:

  kvm_propagate_fault
  -> nested_svm_inject_npf_exit
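
For completeness, a similarly hedged, self-contained C sketch of that
emulation-time path: the nested walk records nested_page_fault = true,
and kvm_propagate_fault then reflects the fault to L1 as a #NPF exit.
Only the field name nested_page_fault comes from the kernel's
x86_exception; the struct is trimmed down and the function bodies are
illustrative:

  #include <stdbool.h>
  #include <stdio.h>

  struct x86_exception {
      bool nested_page_fault;   /* real field; struct trimmed down */
  };

  static void walk_addr_generic(struct x86_exception *fault)
  {
      fault->nested_page_fault = true;   /* "set fault (nested_page_fault=true)" */
  }

  static void kvm_propagate_fault(const struct x86_exception *fault)
  {
      if (fault->nested_page_fault)
          printf("nested_svm_inject_npf_exit: reflect #NPF to L1\n");
  }

  int main(void)
  {
      struct x86_exception fault = { .nested_page_fault = false };

      walk_addr_generic(&fault);   /* via ..._gva_to_gpa_nested */
      kvm_propagate_fault(&fault);
      return 0;
  }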
Maybe the safer thing would be to qualify the new error_code check with
!mmu_is_nested(vcpu) or something like that, so that it runs for the L1
guest and not the L2 guest. I believe that restriction would avoid
hitting this case. Are you okay with this change?
Or check "vcpu->arch.mmu.direct_map"? That would be true when not using
shadow pages.
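
To illustrate both proposed guards in one place, here is a minimal
sketch with stub types; only mmu_is_nested and the mmu.direct_map field
mirror kernel names, everything else is a stand-in for illustration:

  #include <stdbool.h>

  struct mmu_stub { bool direct_map; };
  struct vcpu_stub {
      struct mmu_stub mmu;
      bool walk_mmu_is_nested;
  };

  static bool mmu_is_nested(const struct vcpu_stub *vcpu)
  {
      return vcpu->walk_mmu_is_nested;
  }

  /* Option 1: skip the new error_code check when the MMU is nested. */
  static bool new_check_allowed_v1(const struct vcpu_stub *vcpu)
  {
      return !mmu_is_nested(vcpu);
  }

  /* Option 2 (the direct_map suggestion): require direct_map, i.e. no
   * shadow pages, which also excludes the shadow-walk trace above. */
  static bool new_check_allowed_v2(const struct vcpu_stub *vcpu)
  {
      return vcpu->mmu.direct_map;
  }

  int main(void)
  {
      struct vcpu_stub vcpu = { .mmu = { .direct_map = true },
                                .walk_mmu_is_nested = false };
      return !(new_check_allowed_v1(&vcpu) && new_check_allowed_v2(&vcpu));
  }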
Yes, that can be used.
Are you going to send a patch for this?
Yes. I should be posting it by Monday or Tuesday - I need some time to verify it.
-Brijesh