Re: [RFC 1/2] vfio/pci: keep the prefetchable attribute of a BAR region in VMA

Hi Vikram,

On Fri, 30 Apr 2021 17:57:14 +0100,
Vikram Sethi <vsethi@xxxxxxxxxx> wrote:
> 
> Hi Marc, 
> 
> > -----Original Message-----
> > From: Marc Zyngier <maz@xxxxxxxxxx>
> > Sent: Friday, April 30, 2021 10:31 AM
> > On Fri, 30 Apr 2021 15:58:14 +0100,
> > Shanker R Donthineni <sdonthineni@xxxxxxxxxx> wrote:
> > >
> > > Hi Marc,
> > >
> > > On 4/30/21 6:47 AM, Marc Zyngier wrote:
> > > >
> > > >>>> We've two concerns here:
> > > >>>>    - Performance impacts for pass-through devices.
> > > >>>>    - The definition of the ioremap_wc() function doesn't match the
> > > >>>> host kernel on ARM64
> > > >>> Performance I can understand, but I think you're also using it to
> > > >>> mask a driver bug which should be resolved first.  Thanks,
> > > >> We’ve already instrumented the driver code and found the code path
> > > >> for the unaligned accesses. We’ll fix this issue if it’s not
> > > >> following WC semantics.
> > > >>
> > > >> Fixing the performance concern will be under KVM stage-2 page-table
> > > >> control. We're looking for guidance or a solution for updating the
> > > >> stage-2 PTE based on the PCI BAR attribute.
> > > > Before we start discussing the *how*, I'd like to clearly understand
> > > > what *arm64* memory attributes you are relying on. We have already
> > > > established that the unaligned access was a bug, which was the
> > > > biggest argument in favour of NORMAL_NC. What are the other
> > > > requirements?
> > > Sorry, my earlier response was not complete...
> > >
> > > The ARMv8 architecture has two features, Gathering and Reordering of
> > > transactions, which are very important from a performance point of
> > > view. Small inline packets for NIC cards and accesses to a GPU's frame
> > > buffer are CPU-bound operations. We want to take advantage of the GRE
> > > features to achieve higher performance.
> > >
> > > Both these features are disabled for prefetchable BARs in the VM
> > > because the memory type MT_DEVICE_nGnRE is enforced at stage-2.
> > 
> > Right, so Normal_NC is a red herring, and it is Device_GRE that
> > you really are after, right?
> > 
> I think Device GRE has some practical problems.
> 1. A lot of userspace code which is used to getting write-combined
> mappings to GPU memory from kernel drivers does memcpy/memset on it,
> which can emit ldp/stp instructions that crash on the Device memory
> type. From a quick search I didn't find a memcpy_io or memset_io in
> glibc. Perhaps there are some other functions available, but a lot
> of userspace applications that work on x86 and ARM bare metal won't
> work on ARM VMs without such changes. Changes to all of userspace
> may not always be practical, especially if linking to binaries

This seems to go against what Alex was hinting at earlier, which is
that unaligned accesses were not expected on prefetchable regions, with
Shanker later confirming that it was an actual bug. Where do we stand
here?
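
To make the failure mode concrete, here is a hypothetical userspace
sketch (mine, not code from this thread). glibc's memcpy() is free to
use unaligned accesses (including unaligned ldp/stp), which the arm64
Device memory types forbid, whereas Normal_NC tolerates them:

/*
 * Hypothetical illustration: why a plain memcpy() into a BAR mapped
 * with a Device attribute can fault on arm64, while an explicitly
 * aligned copy does not.
 */
#include <stdint.h>
#include <string.h>

/* May take an alignment fault if 'bar' is mapped as Device memory. */
static void copy_to_bar_naive(void *bar, const void *src, size_t len)
{
	memcpy(bar, src, len);	/* may use unaligned ldp/stp */
}

/*
 * Safe on Device memory: one naturally aligned 64-bit store per
 * element. The volatile access also stops the compiler from turning
 * the loop back into a memcpy() call or widening the stores.
 */
static void copy_to_bar_aligned(volatile uint64_t *bar,
				const uint64_t *src, size_t qwords)
{
	size_t i;

	for (i = 0; i < qwords; i++)
		bar[i] = src[i];
}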

> 
> 2. Sometimes, even if the application is not using memset/memcpy
> directly, gcc may insert a builtin memcpy/memset.
> 
> 3. Recompiling all applications with gcc -mstrict-align has
> performance issues. In our experiments it resulted in an increase
> in code size, and also a reliable 3-5% performance decrease. Also, it
> is not always practical to recompile all of userspace, depending on
> who owns the code/linked binaries etc.
> 
> From a KVM-ARM point of view, what is it about Normal NC at stage 2
> for a prefetchable BAR (however KVM gets the hint, whether from
> userspace or the VMA) that is undesirable vs Device GRE? I couldn't
> think of a difference visible to the device, whether the combining or
> prefetching or reordering happened because of one or the other.

The problem I see is that we have the VM and userspace being written in
terms of Write-Combine, which is:

- loosely defined even on x86

- subject to interpretations in the way it maps to PCI

- has no direct equivalent in the ARMv8 collection of memory
  attributes (and Normal_NC comes with speculation capabilities, which
  strike me as extremely undesirable on arbitrary devices); the
  candidates are summarised below
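
A rough summary of that attribute space (my notation; the values are
the architectural MAIR encodings, with G/R/E standing for Gathering,
Reordering and Early write acknowledgement):

/* Summary only, not a patch. */
#define ATTR_DEVICE_nGnRnE	0x00	/* no G, no R, no E: strictest   */
#define ATTR_DEVICE_nGnRE	0x04	/* what KVM enforces at stage-2
					 * for device mappings today     */
#define ATTR_DEVICE_GRE		0x0c	/* G/R/E allowed, but accesses
					 * must stay aligned and reads
					 * are never speculated          */
#define ATTR_NORMAL_NC		0x44	/* non-cacheable Normal: unaligned
					 * accesses are fine, but reads
					 * may be speculated             */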

How do we translate this into something consistent? I'd like to see an
actual description of what we *really* expect from WC on prefetchable
PCI regions, turn that into a documented definition agreed across
architectures, and then we can look at implementing it with one memory
type or another on arm64.
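
For concreteness: the existing Linux translation of "WC" is
pgprot_writecombine(), which on arm64 resolves to Normal-NC, and my
understanding of the shape of the change being discussed (a sketch
keyed off the prefetchable flag, with a made-up helper name; not the
posted patch) is something like:

#include <linux/mm.h>
#include <linux/pci.h>

/*
 * Sketch: propagate a BAR's prefetchable attribute into the VMA that
 * vfio_pci_mmap() hands out, so a consumer such as KVM could later
 * derive a stage-2 memory type from it.
 */
static void vfio_pci_set_bar_pgprot(struct vm_area_struct *vma,
				    struct pci_dev *pdev, int bar)
{
	if (pci_resource_flags(pdev, bar) & IORESOURCE_PREFETCH)
		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
	else
		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
}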

Because once we expose that memory type at S2 for KVM guests, it
becomes ABI and there is no turning back. So I want to get it right
once and for all.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm



