Re: [PATCH Part2 v5 00/45] Add AMD Secure Nested Paging (SEV-SNP) Hypervisor Support

On Mon, Nov 15, 2021, at 7:07 PM, Marc Orr wrote:
> On Mon, Nov 15, 2021 at 11:15 AM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>>
>> +arm64 KVM folks
>>
>> On Mon, Nov 15, 2021, Marc Orr wrote:
>> > On Mon, Nov 15, 2021 at 10:26 AM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>> > >
>> > > On Mon, Nov 15, 2021, Dr. David Alan Gilbert wrote:
>> > > > * Sean Christopherson (seanjc@xxxxxxxxxx) wrote:
>> > > > > On Fri, Nov 12, 2021, Borislav Petkov wrote:
>> > > > > > On Fri, Nov 12, 2021 at 09:59:46AM -0800, Dave Hansen wrote:
>> > > > > > > Or, is there some mechanism that prevent guest-private memory from being
>> > > > > > > accessed in random host kernel code?
>> > > > >
>> > > > > Or random host userspace code...
>> > > > >
>> > > > > > So I'm currently under the impression that random host->guest accesses
>> > > > > > should not happen if not previously agreed upon by both.
>> > > > >
>> > > > > Key word "should".
>> > > > >
>> > > > > > Because, as explained on IRC, if host touches a private guest page,
>> > > > > > whatever the host does to that page, the next time the guest runs, it'll
>> > > > > > get a #VC where it will see that that page doesn't belong to it anymore
>> > > > > > and then, out of paranoia, it will simply terminate to protect itself.
>> > > > > >
>> > > > > > So cloud providers should have an interest to prevent such random stray
>> > > > > > accesses if they wanna have guests. :)
>> > > > >
>> > > > > Yes, but IMO inducing a fault in the guest because of a _host_ bug is wrong.
>> > > >
>> > > > Would it necessarily have been a host bug?  A guest telling the host a
>> > > > bad GPA to DMA into would trigger this wouldn't it?
>> > >
>> > > No, because as Andy pointed out, host userspace must already guard against a bad
>> > > GPA, i.e. this is just a variant of the guest telling the host to DMA to a GPA
>> > > that is completely bogus.  The shared vs. private behavior just means that when
>> > > host userspace is doing a GPA=>HVA lookup, it needs to incorporate the "shared"
>> > > state of the GPA.  If the host goes and DMAs into the completely wrong HVA=>PFN,
>> > > then that is a host bug; that the bug happened to be exploited by a buggy/malicious
>> > > guest doesn't change the fact that the host messed up.
>> >
>> > "If the host goes and DMAs into the completely wrong HVA=>PFN, then
>> > that is a host bug; that the bug happened to be exploited by a
>> > buggy/malicious guest doesn't change the fact that the host messed
>> > up."
>> > ^^^
>> > Again, I'm flabbergasted that you are arguing that it's OK for a guest
>> > to exploit a host bug to take down host-side processes or the host
>> > itself, either of which could bring down all other VMs on the machine.
>> >
>> > I'm going to repeat -- this is not OK! Period.
>>
>> Huh?  At which point did I suggest it's ok to ship software with bugs?  Of course
>> it's not ok to introduce host bugs that let the guest crash the host (or host
>> processes).  But _if_ someone does ship buggy host software, it's not like we can
>> wave a magic wand and stop the guest from exploiting the bug.  That's why they're
>> such a big deal.
>>
>> Yes, in this case a very specific flavor of host userspace bug could be morphed
>> into a guest exception, but as mentioned ad nauseum, _if_ host userspace has bug
>> where it does not properly validate a GPA=>HVA, then any such bug exists and is
>> exploitable today irrespective of SNP.
>
> If I'm understanding you correctly, you're saying that we will never
> get into the host's page fault handler due to an RMP violation if we
> implement the unmapping guest private memory proposal (without bugs).
>
> However, bugs do happen. And the host-side page fault handler will
> have code to react to an RMP violation (even if it's supposedly
> impossible to hit). I'm saying that the host-side page fault handler
> should NOT handle an RMP violation by killing host-side processes or
> the kernel itself. This is detrimental to host reliability.
>
> There are two ways to handle this. (1) Convert the private page
> causing the RMP violation to shared, (2) Kill the guest.

It’s time to put on my maintainer hat. This is solidly in my territory, and NAK.  A kernel privilege fault, from who-knows-what context (interrupts off? NMI? locks held?), that gets an RMP violation with no exception handler is *not* going to blindly write the RMP and retry.  It’s not going to send flush IPIs or call into KVM to “fix” things.  The locking issues alone are probably showstopping, even ignoring the security and sanity issues.

You are welcome to submit patches to make the panic_on_oops=0 case as robust as practical, and you are welcome to keep running other VMs after we die() and taint the kernel, but that is strictly best-effort and is a bad idea in a highly security-sensitive environment.

And that goes for everyone else here too. If you all have a brilliant idea to lazily fix RMP faults (and the actual semantics are reasonable) and you want my NAK to go away, I want to see a crystal clear explanation of what you plan to put in fault.c, how the locking is supposed to work, and how you bound retries such that the system will reliably OOPS if something goes wrong instead of retrying forever or deadlocking.

Otherwise, can we please get on with designing a reasonable model for guest-private memory?
