Re: [PATCH] Add shared memory PCI device that shares a memory object between VMs

subbu kl wrote:
Cam,

Just a wild thought about an alternative approach.

Ideas are always good.

Once a specific address range of one guest is visible to the other guest, it's just a matter of a DMA or a single memcpy to transfer the data across.

My idea is to eliminate unnecessary copying.  This introduces one.

Usually non-transparent PCIe bridges (NTBs) are used for inter-processor data communication: a physical PCIe NTB between two processors just sets up a PCIe data channel with some address translation.

So I was just wondering: if we write this non-transparent bridge as a qemu PCI device with address-translation capability, then the guests can just mmap and start accessing each other's memory :)

I think your concept is similar to what Anthony suggested: using virtio to export and import another VM's memory. However, RAM and shared memory are not the same thing, and having one guest access another's RAM could confuse that guest. With the approach of mapping a BAR, the shared memory is kept separate from guest RAM, but it can still be mapped by guest processes.
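
As a rough illustration (not part of the patch), once the guest sees the device, a process inside the guest could map the shared-memory BAR through the usual sysfs resource files. The sketch below assumes the memory sits in, say, BAR2 and uses a placeholder device address 0000:00:04.0:

/* Sketch only: map the shared-memory BAR of a device at a
 * hypothetical address 0000:00:04.0 into a guest process.  Assumes
 * the shared memory is exposed as a normal PCI memory BAR (here
 * BAR2) that Linux makes available as a sysfs resource file. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *bar = "/sys/bus/pci/devices/0000:00:04.0/resource2";
    int fd = open(bar, O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct stat st;
    if (fstat(fd, &st) < 0) {
        perror("fstat");
        return 1;
    }

    /* MAP_SHARED: stores here are visible to the other guests
     * attached to the same memory object, no copy involved. */
    char *shm = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (shm == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    strcpy(shm, "hello from one guest");

    munmap(shm, st.st_size);
    close(fd);
    return 0;
}

Two guests mapping the same BAR then see each other's writes directly, which is the zero-copy path.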

Cam

~subbu

On Thu, Apr 23, 2009 at 4:11 AM, Cam Macdonell <cam@xxxxxxxxxxxxxx> wrote:

    subbu kl wrote:

        Correct me if I'm wrong: could we do the sharing by writing a
        non-transparent qemu PCI device on the host, so that guests can
        access each other's address space?


    Hi Subbu,

    I'm a bit confused by your question.  Are you asking how this device
    works or suggesting an alternative approach?  I'm not sure what you
    mean by a non-transparent qemu device.

    Cam


        ~subbu


        On Sun, Apr 19, 2009 at 3:56 PM, Avi Kivity <avi@xxxxxxxxxx> wrote:

           Cameron Macdonell wrote:


               Hi Avi and Anthony,

                Sorry for the top-reply, but we haven't discussed this
                aspect here before.

                I've been thinking about how to implement interrupts.
                As far as I can tell, unix domain sockets in Qemu/KVM
                are used point-to-point, with one VM being the server
                by specifying "server" along with the unix: option.
                This works simply for two VMs, but I'm unsure how this
                can extend to multiple VMs.  How would a server VM know
                how many clients to wait for?  How can messages then be
                multicast or broadcast?  Is a separate "interrupt
                server" necessary?



           I don't think unix provides a reliable multicast RPC.  So yes, an
           interrupt server seems necessary.

           You could expand its role and make it a "shared memory PCI
           card server", and have it also be responsible for providing
           the backing file using an SCM_RIGHTS fd.  That would reduce
           setup headaches for users (setting up a file for which all
           VMs have permissions).
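
           The fd-passing side of such a server would look roughly like
           the sketch below; send_shm_fd() is just an illustrative name,
           not code from the patch:

           #include <string.h>
           #include <sys/socket.h>
           #include <sys/uio.h>

           /* Sketch: hand the shared-memory fd to one connected client
            * over a unix domain socket using SCM_RIGHTS ancillary data. */
           static int send_shm_fd(int client_sock, int shm_fd)
           {
               char byte = '*';           /* must carry at least one byte */
               struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
               char ctrl[CMSG_SPACE(sizeof(int))];
               struct msghdr msg;

               memset(&msg, 0, sizeof(msg));
               memset(ctrl, 0, sizeof(ctrl));
               msg.msg_iov = &iov;
               msg.msg_iovlen = 1;
               msg.msg_control = ctrl;
               msg.msg_controllen = sizeof(ctrl);

               struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
               cmsg->cmsg_level = SOL_SOCKET;
               cmsg->cmsg_type = SCM_RIGHTS;
               cmsg->cmsg_len = CMSG_LEN(sizeof(int));
               memcpy(CMSG_DATA(cmsg), &shm_fd, sizeof(int));

               return sendmsg(client_sock, &msg, 0) == 1 ? 0 : -1;
           }

           The same socket could then carry the interrupt/doorbell
           messages between the server and each VM.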

           --
           Do not meddle in the internals of kernels, for they are
           subtle and quick to panic.










