Re: [PATCH 3/3] arm, arm64: KVM: handle potential incoherency of readonly memslots

On 11/20/2014 12:08 AM, Laszlo Ersek wrote:
> On 11/20/14 00:32, Mario Smarduch wrote:
>> Hi Laszlo,
>>
>> couple observations.
>>
>>      I'm wondering if access from qemu and guest won't
>> result in mixed memory attributes and if that's acceptable
>> to the CPU.
> 
> Normally this would be a problem I think (Jon raised the topic of live
> migration). However, for flash programming specifically, I think the
> guest's access pattern ensures that we'll see things OK.
> 
> When the guest issues the first write access, the memslot is deleted,
> and everything is forwarded to qemu, both reads and writes. In response
> qemu modifies the array that *otherwise* backs the flash. These
> modifications by qemu end up in the dcache mostly. When the guest is
> done "programming", it writes a special command (read array mode) at
> which point the memslot is recreated (as read-only) and flushed / set up
> for flushing during demand paging.
> 
> So from the emulated flash POV, the memslot either doesn't exist at all
> (and then qemu serves all accesses just fine), or it exists r/o, at
> which point qemu (host userspace) will have stopped writing to it, and
> will have set it up for flushing before and during guest read accesses.
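
For reference, the teardown/recreation you describe maps onto userspace
issuing KVM_SET_USER_MEMORY_REGION twice -- a zero memory_size deletes
the slot while the flash is being programmed, and KVM_MEM_READONLY
brings it back once the guest switches to read array mode. A rough
sketch of that (hypothetical helper, not the actual QEMU code, which
goes through its MemoryRegion/romd machinery):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Illustration of the memslot lifecycle around flash programming. */
static int flash_set_romd(int vm_fd, unsigned int slot, __u64 gpa,
                          __u64 size, void *backing, int romd)
{
        struct kvm_userspace_memory_region mem;

        memset(&mem, 0, sizeof(mem));
        mem.slot = slot;
        mem.guest_phys_addr = gpa;
        mem.userspace_addr = (__u64)(unsigned long)backing;

        if (romd) {
                /* read array mode: recreate the slot read-only, so
                 * guest reads are served from the backing RAM again */
                mem.memory_size = size;
                mem.flags = KVM_MEM_READONLY;
        } else {
                /* programming mode: zero size deletes the slot, so
                 * every guest access exits to qemu and is emulated */
                mem.memory_size = 0;
        }

        return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &mem);
}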

I think beyond consistency, there should be no double mappings with
conflicting attributes at any time, or the CPU state is undefined. At
least that's what I recall for cases where the identity mapping was
cacheable and user mmap'ed regions were uncacheable. Side effects ranged
from a CPU hardstop to a victim invalidate of a dirty cache line. With
the virtualization extensions maybe the behavior is different. I guess
if you're not seeing lockups or crashes then it appears to work :)
More senior folks in the ARM community are probably in a better
position to address this, but I thought I'd raise a flag.

> 
>> Also, if you update memory from qemu you may break
>> dirty page logging/migration.
> 
> Very probably. Jon said the same thing.
> 
>> Unless there is some other way
>> you keep track. Of course it may not be applicable in your
>> case (i.e. flash unused after boot).
> 
> The flash *is* used after boot, because the UEFI runtime variable
> services *are* exercised by the guest kernel. However those use the same
> access pattern (it's the same set of UEFI services just called by a
> different "client").
> 
> *Uncoordinated* access from guest and host in parallel will be a big
> problem; but we're not that far yet, and we need to get the flash
> problem sorted, so that we can at least boot and work on the basic
> stuff. The flash programming dance happens to provide coordination; the
> flash mode changes (which are equivalent to the teardown and the
> recreation of the memslot) can be considered barriers.
> 
> I hope this is acceptable for the time being...

Yeah, I understand you have a more immediate requirement to support;
the migration issue is more of an FYI. Thanks for the details, they
help to understand the context.

- Mario
> 
> Thanks
> Laszlo
> 
>>
>> - Mario
>>
>> On 11/17/2014 07:49 AM, Laszlo Ersek wrote:
>>> On 11/17/14 16:29, Paolo Bonzini wrote:
>>>>
>>>>
>>>> On 17/11/2014 15:58, Ard Biesheuvel wrote:
>>>>> Readonly memslots are often used to implement emulation of ROMs and
>>>>> NOR flashes, in which case the guest may legally map these regions as
>>>>> uncached.
>>>>> To deal with the incoherency associated with uncached guest mappings,
>>>>> treat all readonly memslots as incoherent, and ensure that pages that
>>>>> belong to regions tagged as such are flushed to DRAM before being passed
>>>>> to the guest.
>>>>
>>>> On x86, the processor combines the cacheability values from the two
>>>> levels of page tables.  Is there no way to do the same on ARM?
>>>
>>> Combining occurs on ARMv8 too. The Stage1 (guest) mapping is very strict
>>> (Device non-Gathering, non-Reordering, no Early Write Acknowledgement --
>>> for EFI_MEMORY_UC), which basically "overrides" the Stage2 (very lax
>>> host) memory attributes.
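
Side note: the combining can be thought of, loosely, as "the more
restrictive attribute wins". A simplified model (not the exact ARM ARM
rules, names made up for illustration):

/* Loose model of stage-1/stage-2 attribute combining: a Device stage-1
 * mapping "wins" over a Normal Write-Back stage-2 mapping, so the data
 * cache is bypassed. */
enum attr { DEV_nGnRnE, DEV_nGnRE, NORMAL_NC, NORMAL_WT, NORMAL_WB };

static enum attr combine(enum attr s1, enum attr s2)
{
        return s1 < s2 ? s1 : s2;  /* lower value == more restrictive */
}

/* combine(DEV_nGnRnE, NORMAL_WB) == DEV_nGnRnE: reads go to DRAM. */
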
>>>
>>> When qemu writes, as part of emulating the flash programming commands,
>>> to the RAMBlock that *otherwise* backs the flash range (as a r/o
>>> memslot), those writes (from host userspace) tend to end up in dcache.
>>>
>>> But, when the guest flips back the flash to romd mode, and tries to read
>>> back the values from the flash as plain ROM, the dcache is completely
>>> bypassed due to the strict stage1 mapping, and the guest goes directly
>>> to DRAM.
>>>
>>> Where qemu's earlier writes are not yet / necessarily visible.
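
That is, the remedy is to clean the data cache to the point of
coherency before the page is handed back to the guest. Very roughly (a
sketch using the arm64 helpers of that era, not the literal patch):

/* Sketch only: before a page backing a readonly/"incoherent" memslot
 * is mapped for the guest, clean+invalidate it to the PoC so that the
 * guest's cache-bypassing reads see what qemu wrote via the cache. */
static void flush_incoherent_page(struct page *page, unsigned long size)
{
        void *va = page_address(page);

        __flush_dcache_area(va, size);  /* arm64: clean+invalidate to PoC */
}
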
>>>
>>> Please see my original patch (which was incomplete) in the attachment,
>>> it has a very verbose commit message.
>>>
>>> Anyway, I'll let others explain; they can word it better than I can :)
>>>
>>> FWIW,
>>>
>>> Series
>>> Reviewed-by: Laszlo Ersek <lersek@xxxxxxxxxx>
>>>
>>> I ported this series to a 3.17.0+ based kernel, and tested it. It works
>>> fine. The ROM-like view of the NOR flash now reflects the previously
>>> programmed contents.
>>>
>>> Series
>>> Tested-by: Laszlo Ersek <lersek@xxxxxxxxxx>
>>>
>>> Thanks!
>>> Laszlo
>>>
>>
> 

_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm



