On Mon, May 15, 2023, Zhi A Wang wrote:
> On 5/13/2023 8:35 AM, Sean Christopherson wrote:
> > Move the check that a vGPU is attached from is_2MB_gtt_possible() to its
> > sole caller, ppgtt_populate_shadow_entry(). All of the paths in
> > ppgtt_populate_shadow_entry() eventually check for attachment by way of
> > intel_gvt_dma_map_guest_page(), but explicitly checking can avoid
> > unnecessary work and will make it more obvious that a future cleanup of
> > is_2MB_gtt_possible() isn't introducing a bug.
> > 
> 
> It might be better to move this check to shadow_ppgtt_mm(), which is used
> in both the shadow page table creation and pinning paths, so that the code
> can bail out even earlier when a shadow page table is created but the vGPU
> has not yet been attached to KVM.

Ah, yes, that'll work. I traced through all of the paths that lead to
ppgtt_populate_shadow_entry(), and shadow_ppgtt_mm() is the only caller that
isn't already gated by INTEL_VGPU_STATUS_ATTACHED or INTEL_VGPU_STATUS_ACTIVE
(ACTIVE is set iff ATTACHED is set).

I'll move the check up to shadow_ppgtt_mm() in the next version. Thanks!

workload_thread() <= pick_next_workload() => INTEL_VGPU_STATUS_ACTIVE
|
-> dispatch_workload()
   |
   |-> prepare_workload()
       |
       -> intel_vgpu_sync_oos_pages()
       |  |
       |  |-> ppgtt_set_guest_page_sync()
       |      |
       |      |-> sync_oos_page()
       |          |
       |          |-> ppgtt_populate_shadow_entry()
       |
       |-> intel_vgpu_flush_post_shadow()
           |
1:         |-> ppgtt_handle_guest_write_page_table()
               |
               |-> ppgtt_handle_guest_entry_add()
                   |
2:                 | -> ppgtt_populate_spt_by_guest_entry()
                   |    |
                   |    |-> ppgtt_populate_spt()
                   |        |
                   |        |-> ppgtt_populate_shadow_entry()
                   |            |
                   |            |-> ppgtt_populate_spt_by_guest_entry() [see 2]
                   |
                   |-> ppgtt_populate_shadow_entry()

kvmgt_page_track_write() <= KVM callback => INTEL_VGPU_STATUS_ATTACHED
|
|-> intel_vgpu_page_track_handler()
    |
    |-> ppgtt_write_protection_handler()
        |
        |-> ppgtt_handle_guest_write_page_table_bytes()
            |
            |-> ppgtt_handle_guest_write_page_table() [see 1]
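
Roughly what I have in mind, i.e. reuse the INTEL_VGPU_STATUS_ATTACHED check
that the other paths already rely on.  Untested sketch; the exact placement
and the -EINVAL return value are guesses on my part:

  static int shadow_ppgtt_mm(struct intel_vgpu_mm *mm)
  {
          struct intel_vgpu *vgpu = mm->vgpu;

          /* Sketch: bail early if the vGPU hasn't been attached to KVM yet. */
          if (!test_bit(INTEL_VGPU_STATUS_ATTACHED, vgpu->status))
                  return -EINVAL;

          ...
  }

That way both the shadow page table creation and pinning paths bail out
before doing any shadowing work.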