Re: R/W HG memory mappings with kvm?

On Mon, Jul 6, 2009 at 7:38 PM, Avi Kivity <avi@xxxxxxxxxx> wrote:

>> I see virtio_pci uses cpu_physical_memory_map() which provides either
>> read or write mappings and notes "Use only for reads OR writes - not
>> for read-modify-write operations."
>
> Right, these are for unidirectional transient DMA.

Okay, that is what I thought. What I would rather have is a relatively
persistent mapping: multi-use and preferably bidirectional.
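
For concreteness, this is the transient-mapping pattern I understand
cpu_physical_memory_map() to give (just a sketch against the exec.c API
as I read it; the guest address, length and variable names are made up):

    /* transiently map one unidirectional ring for host-side writes; the
     * mapping can be a bounce buffer if the range is not ordinary RAM,
     * which is why it is single-direction and should be unmapped soon */
    target_phys_addr_t ring_gpa = 0x10000000;   /* made-up guest-physical base */
    target_phys_addr_t len = 32 * 1024 * 1024;  /* ring size */
    void *p = cpu_physical_memory_map(ring_gpa, &len, 1 /* is_write */);
    if (p) {
        /* ... fill the ring from the host side ... */
        cpu_physical_memory_unmap(p, len, 1 /* is_write */, len /* access_len */);
    }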

>> Is there an alternative method that allows large (several MB)
>> persistent host-guest memory mappings that are r/w? I would only be
>> using this under kvm, not kqemu or plain qemu.
>
> All of guest memory is permanently mapped in the host.  You can use
> accessors like cpu_physical_memory_rw() or cpu_physical_memory_map() to
> access it.  What exactly do you need that is not provided by these
> accessors?

I have an existing software system that provides high-speed
communication between processes on a single host using shared memory.
I would like to extend the system to provide communication between
processes on the host and guest. Unfortunately the transport is
optimised for speed and is not highly abstracted, so I cannot easily
substitute a virtio ring, for example.

The system uses two memory spaces. One is a control area which is
register-like and contains R/W values at various offsets. The second
area is for data transport and is divided into rings. Each ring is
unidirectional, so I could map these separately with
cpu_physical_memory_map(), but there seems to be no simple solution
for the control area. Target ring performance is perhaps 1-2
gigabytes/second, with rings approximately 32-512 MB in size.
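
For the control area the best I can see is the copying accessor, which
as far as I can tell is safe for read-modify-write because it copies in
and out rather than mapping (rough sketch only; ctrl_gpa, REG_HEAD and
ring_entries are placeholder names of mine):

    /* read-modify-write one register-like word in the control area
     * using the copying accessor instead of a mapping */
    uint32_t head;
    cpu_physical_memory_rw(ctrl_gpa + REG_HEAD, (uint8_t *)&head,
                           sizeof(head), 0 /* read */);
    head = (head + 1) % ring_entries;
    cpu_physical_memory_rw(ctrl_gpa + REG_HEAD, (uint8_t *)&head,
                           sizeof(head), 1 /* write */);

That is probably fine for the register-like values, but a copy per
access on the data path is exactly what I am trying to avoid at those
rates.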

>> Also it appears that PCI IO memory (cpu_register_io_memory) is
>> provided via access functions, like the pci config space?
>
> It can also use ordinary RAM (for example, vga maps its framebuffer as a PCI
> BAR).

So host memory is exported as a PCI BAR to the guest via
cpu_register_physical_memory(). It looks like the code has to
explicitly manage marking pages dirty and synchronising at appropriate
times. Is the coherency problem bidirectional, i.e. do writes from
either the host or the guest to the shared memory need to mark pages
dirty and ensure a sync happens before the other side reads those
areas?
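
To check my understanding of the vga-style approach, I am imagining
roughly the following (a very rough sketch pieced together from
skimming hw/cirrus_vga.c; ShmemState, shmem_bar_map, SHMEM_SIZE and
write_offset are my own placeholder names, and the exact helpers may
differ in the current tree):

    /* device init: allocate host RAM to back the shared region, as vga
     * does for its framebuffer, and keep the host-side pointer */
    s->ram_offset = qemu_ram_alloc(SHMEM_SIZE);
    s->ram_ptr    = qemu_get_ram_ptr(s->ram_offset);

    /* BAR map callback: back the BAR with that RAM so guest accesses
     * go straight to memory instead of exiting on every access */
    static void shmem_bar_map(PCIDevice *d, int region_num,
                              uint32_t addr, uint32_t size, int type)
    {
        ShmemState *s = (ShmemState *)d;
        cpu_register_physical_memory(addr, size,
                                     s->ram_offset | IO_MEM_RAM);
    }

    /* after a host-side write: mark the touched page dirty so the
     * usual sync paths notice it */
    cpu_physical_memory_set_dirty(s->ram_offset + write_offset);

My reading is that once the BAR is backed by RAM, both sides just do
plain memory accesses and the dirty marking only matters for the sync
machinery, but please correct me if that is wrong.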

>> Does this
>> cause a page fault/vm_exit on each read or write, or is it more
>> efficient than that?
>
> It depends on how you configure it.  Look at the vga code (hw/vga.c,
> hw/cirrus_vga.c).  Also Cam (copied) wrote a PCI card that provides shared
> memory across guests, you may want to look at that.

I will look into the vga code and see if I get inspired. Cam's
shared-memory PCI card sounds interesting; is the code in kvm git?

Thanks for the response!

Stephen.
