Re: [RFC 45/48] RISC-V: ioremap: Implement for arch specific ioremap hooks

On 4/25/23 01:00, Atish Kumar Patra wrote:
> On Mon, Apr 24, 2023 at 7:18 PM Dave Hansen <dave.hansen@xxxxxxxxx> wrote:
>> On 4/21/23 12:24, Atish Kumar Patra wrote:
>> I'm not _quite_ sure what "guest initiated" means.  But SEV and TDX
>> don't require an ioremap hook like this.  So, even if they *are* "guest
>> initiated", the question still remains how they work without this patch,
>> or what they are missing without it.
> 
> Maybe I misunderstood your question earlier. Are you concerned about
> guests invoking MMIO-region-specific calls in the ioremap path, or
> about passing that information to the host?

My concern is that I don't know why this patch is here.  There should be
a very simple answer to the question: Why does RISC-V need this patch
but x86 does not?

> Earlier, I assumed the former, but it seems you are concerned about the
> latter as well. Sorry for the confusion in that case.
> The guest initiation is necessary, while the host notification can be
> made optional.
> "Guest initiated" means the guest tells the TSM (the equivalent of the
> TDX module in RISC-V) the MMIO region details.
> The TSM keeps track of this, and any page faults that happen in that
> region are forwarded to the host by the TSM after instruction decoding.
> That way the TSM can make sure that only ioremapped regions are treated
> as MMIO regions. Otherwise, all memory outside the guest physical
> region would be considered MMIO.

Ahh, OK, that's a familiar problem.  I see the connection to device
filtering now.

Is this functionality in the current set?  I went looking for it and all
I found was the host notification side.
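
Just so we are picturing the same thing, I would expect the guest side to
boil down to an ioremap hook that makes a guest->TSM call, roughly like the
sketch below.  The SBI extension and function names here are made up for
illustration; I am not claiming they match what the series uses.

/*
 * Illustrative sketch only -- SBI_EXT_COVE_GUEST, COVG_ADD_MMIO_REGION
 * and is_cove_guest() are hypothetical names, not the actual CoVE API.
 */
static int cove_ioremap_hook(phys_addr_t addr, size_t size)
{
	struct sbiret ret;

	if (!is_cove_guest())
		return 0;

	/*
	 * Tell the TSM that [addr, addr + size) is MMIO, so that faults
	 * in that range get decoded and forwarded to the host instead of
	 * being treated as accesses to private guest memory.
	 */
	ret = sbi_ecall(SBI_EXT_COVE_GUEST, COVG_ADD_MMIO_REGION,
			addr, size, 0, 0, 0, 0);

	return ret.error ? -EIO : 0;
}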

Is this the only mechanism by which the guest tells the TSM which parts
of the guest physical address space can be exposed to the host?

For TDX and SEV, that information is inferred from a bit in the page
tables.  Essentially, there are dedicated guest physical addresses that
tell the hardware how to treat the mappings: should the secure page
tables or the host's EPT/NPT be consulted?

If that mechanism is different for RISC-V, it would go a long way to
explaining why RISC-V needs this patch.

> In the current CoVE implementation, that MMIO region information is also
> passed to the host to provide additional flexibility. The host may
> choose to do an additional sanity check and bail, without going to
> userspace, if the fault address does not belong to the requested MMIO
> regions. This is purely an optimization and may not be mandatory.

Makes sense, thanks for the explanation.
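
So on the host side, if I am reading it right, the optional check amounts
to roughly the following.  The helper names are invented for illustration,
not taken from the series.

/*
 * Sketch of the optional host-side sanity check as I understand it.
 * cove_gpa_is_registered_mmio() and kvm_cove_forward_mmio() are
 * invented names, not functions from the series.
 */
static int cove_handle_mmio_fault(struct kvm_vcpu *vcpu, gpa_t gpa)
{
	/*
	 * Only GPAs the guest previously registered as MMIO are eligible
	 * for emulation; anything else is rejected without ever going to
	 * userspace.
	 */
	if (!cove_gpa_is_registered_mmio(vcpu->kvm, gpa))
		return -EFAULT;

	return kvm_cove_forward_mmio(vcpu, gpa);
}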

>>> It can be a subset of the regions in the host-provided layout. The
>>> guest device filtering solution is based on this idea as well [1].
>>>
>>> [1] https://lore.kernel.org/all/20210930010511.3387967-1-sathyanarayanan.kuppuswamy@xxxxxxxxxxxxxxx/
>>
>> I don't really see the connection.  Even if that series was going
>> forward (I'm not sure it is) there is no ioremap hook there.  There's
>> also no guest->host communication in that series.  The guest doesn't
>> _tell_ the host where the MMIO is, it just declines to run code for
>> devices that it didn't expect to see.
> 
> This is a recent version of the above series from the TDX github. It is
> a WIP as well and has not been posted to the mailing list, so it may
> still be undergoing revisions.
> As per my understanding, the above ioremap changes for TDX mark the
> ioremapped pages as shared.
> The guest->host communication happens in the #VE exception handler,
> where the guest converts the fault into a hypercall by invoking
> TDG.VP.VMCALL with an EPT-violation exit reason. The host emulates the
> MMIO access when it gets such a VMCALL.
> Please correct me if I am wrong.

Yeah, TDX does:

1. Guest MMIO access
2. Guest #VE handler (if the access faults)
3. Guest hypercall->host
4. Host fixes the fault
5. Hypercall returns, guest returns from #VE via IRET
6. Guest retries MMIO instruction

From what you said, RISC-V appears to do:

1. Guest MMIO access
2. Host MMIO handler
3. Host handles the fault, returns
4. Guest retries MMIO instruction

In other words, this mechanism does the same thing but short-circuits
the trip through #VE and the hypercall.
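
For reference, steps 2 and 3 on the TDX guest side (the part RISC-V
short-circuits) amount to roughly this.  This is paraphrased and heavily
simplified, not the literal arch/x86/coco/tdx code; tdx_mmio_hypercall()
is a stand-in name for the real TDG.VP.VMCALL plumbing.

/*
 * Rough paraphrase of the TDX guest #VE MMIO path -- simplified, not
 * the literal kernel code; tdx_mmio_hypercall() is a made-up stand-in.
 */
static int handle_mmio_ve(struct pt_regs *regs, struct ve_info *ve)
{
	/*
	 * Decode the faulting instruction inside the guest, then ask the
	 * host to emulate it: TDG.VP.VMCALL with an EPT-violation exit
	 * reason, passing the GPA, access size and read/write direction.
	 * On return, skip past the MMIO instruction and IRET out of the
	 * #VE handler.
	 */
	return tdx_mmio_hypercall(ve->gpa, regs);
}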

What happens if this ioremap() hook is not in place?  Does the hardware
(or TSM) generate an exception like TDX gets?  If so, it's probably
possible to move this "notify the TSM" code to that exception handler
instead of needing an ioremap() hook.

I'm not saying that it's _better_ to do that, but it would allow you to
get rid of this patch for now and get me to shut up. :)

> As I said above, the objective here is to notify the TSM where the
> MMIO is. Notifying the host is just an optimization that we chose to
> add. In fact, in this series the KVM code doesn't do anything with
> that information. The commit text can probably be improved to clarify
> that.

Just to close the loop here, please go take a look at
pgprot_decrypted().  That's where the x86 guest page table bit gets to
tell the hardware that the mapping might cause a #VE and is under the
control of the host.  That's the extent of what x86 does at ioremap() time.
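
For comparison, the whole x86 ioremap()-time story is roughly the
following, simplified from memory -- not the exact __ioremap_caller()
code, but it shows the shape of it.

/*
 * Simplified sketch, not the exact x86 __ioremap_caller() logic.  The
 * point is that ioremap() only picks a "decrypted"/shared protection;
 * there is no call out to the TDX module or to the host.
 */
pgprot_t prot = PAGE_KERNEL_IO;

if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
	prot = pgprot_decrypted(prot);	/* clear the private/encrypted bit */

/*
 * The mapping is then created with 'prot'; any later fault on it goes
 * through the #VE handler, not back through ioremap().
 */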

So, to summarize, we have:

x86:
1. Guest page table bit to mark shared (host) vs. private (guest)
   control
2. #VE if there is a fault on a shared mapping to call into the host

RISC-V:
1. Guest->TSM call to mark MMIO vs. private
2. Faults in the MMIO area are then transparent to the guest

That design difference would, indeed, help explain why this patch is
here.  I'm still not 100% convinced that the patch is *required*, but I
at least understand how we arrived here.



