On 6/26/2024 10:09 AM, Binbin Wu wrote:
On 6/26/2024 5:09 AM, Edgecombe, Rick P wrote:
On Tue, 2024-06-25 at 14:54 +0800, Binbin Wu wrote:
+ gpa = vcpu->mmio_fragments[0].gpa;
+ size = vcpu->mmio_fragments[0].len;
Since MMIO accesses crossing a page boundary are not allowed by the input
checks on the TDVMCALL, the mmio_fragments[] array is not needed here.
Could we just use vcpu->run->mmio.phys_addr and vcpu->run->mmio.len?
Can we add a comment or something to that check explaining why KVM
doesn't handle it?
Is it documented somewhere in the TDX ABI that it is not expected to be
supported?
The TDX GHCI doesn't impose such a restriction.
According to Isaku's reply in the link below, I think the current
restriction is a software-implementation choice made for simplicity.
https://lore.kernel.org/kvm/20240419173423.GD3596705@xxxxxxxxxxxxxxxxxxxxx/
+ /* Disallow MMIO crossing page boundary for simplicity. */
+ if (((gpa + size - 1) ^ gpa) & PAGE_MASK)
goto error;
According to
https://lore.kernel.org/all/165550567214.4207.3700499203810719676.tip-bot2@tip-bot2/,
Linux as a TDX guest rejects EPT violation #VEs that split pages, on
the grounds that "MMIO accesses are supposed to be naturally aligned
and therefore never cross page boundaries", in order to handle the
load_unaligned_zeropad() case.
I am not sure that "MMIO accesses are supposed to be naturally
aligned" holds for every other OS running as a TDX guest, though.
Any suggestion?
Had some discussion with Chao Gao.
For the TDX PV MMIO hypercall, the guest provides the GPA directly.
I.e., we don't need to worry about the case of an access that is
contiguous in virtual memory but not contiguous in physical memory.
Also, the size of a PV MMIO access is limited to 1, 2, 4, or 8 bytes,
so the access never needs to be split.
So, for TDX, there is no need to use vcpu->mmio_fragments[] even if
the MMIO access crosses a page boundary.
The check for "Disallow MMIO crossing page boundary" can be removed,
and no extra fragment handling is needed.