Anthony Liguori wrote:
I'd strongly recommend working these patches through qemu-devel and
lkml. I suspect Avi may disagree with me, but for this to eventually
be merged in either place, you're going to have additional
requirements put on you.
I don't disagree that there will be additional requirements, but I
might disagree with some of those requirements themselves.
It actually works out better than I think you expect it to...
Can you explain why? You haven't addressed my concerns the last time
around.
Because of the qemu_ram_alloc() patches. We no longer have a
contiguous phys_ram_base, so we don't have to deal with
mmap(MAP_FIXED). We can also more practically do memory hot-add, which
is more or less a requirement for this work.
I think you're arguing my side. If the guest specifies the memory to be
shared via an add_buf() sglist allocated from its free memory, you have
to use MAP_FIXED (since the gpa->hva mapping is already fixed for guest
memory). If it's provided as a BAR or equivalent, we can use a variant
of qemu_ram_alloc() which binds to the shared segment instead of allocating.
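For illustration, a minimal host-side sketch of that kind of variant,
with an invented helper name (the real qemu_ram_alloc() interface
differs):

    /* Sketch only: back a RAM region with a shared file instead of
     * anonymous memory. The kernel picks the HVA, so no MAP_FIXED
     * is needed; the gpa->hva mapping is recorded afterwards. */
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static void *ram_alloc_from_file(const char *path, size_t size)
    {
        int fd = open(path, O_RDWR | O_CREAT, 0600);
        if (fd < 0)
            return NULL;
        if (ftruncate(fd, size) < 0) {
            close(fd);
            return NULL;
        }
        void *hva = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        close(fd);                 /* the mapping survives the close */
        return hva == MAP_FAILED ? NULL : hva;
    }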
It also means we could do shared memory through more traditional
means, such as SysV IPC or whatever the native mechanism is on the
underlying platform. That means we could even support Win32 (although
I wouldn't make that an initial requirement).
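As a sketch, the SysV IPC route might look like this on the host side
(the key and helper name are illustrative, not part of any actual
patch):

    /* Sketch only: attach a shared segment via SysV IPC instead of
     * a file plus mmap(). */
    #include <stddef.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    static void *ram_alloc_sysv(key_t key, size_t size)
    {
        int id = shmget(key, size, IPC_CREAT | 0600);
        if (id < 0)
            return NULL;
        void *hva = shmat(id, NULL, 0);   /* kernel picks the address */
        return hva == (void *)-1 ? NULL : hva;
    }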
Not with add_buf() memory...
We can't use mmap() directly. With the new RAM allocation scheme, I
think it's pretty reasonable to now allow portions of RAM to come
from files that get mmap()ed (sort of like -mem-path).
This RAM area could be set up as a BAR.
That's what Cam's patch does, and what you objected to.
I'm flexible. BARs are pretty unattractive because of the size
requirements.
What size requirements? The PCI memory hole? Those requirements are
easily lifted.
The actual transport implementation is the least important part,
though, IMHO. The guest interface and how it's implemented within QEMU
are much more important to get right the first time.
I agree, with much more emphasis on the guest/host interface.
Why is that unimplementable?
Bad choice of words - it's implementable, just not very usable. You
can't share 1GB in a 256MB guest, it will fragment host VMAs, there's
no guarantee the guest can actually allocate all that memory, it
doesn't work with large pages, it's unclear what happens on freeing, etc.
You can share 1GB with a PCI BAR today. You're limited to 32-bit
addresses, which admittedly we could fix.
Any reason to bother with BARs instead of just picking unused physical
addresses? Does Windows do anything special with BAR addresses?
If you use a BAR, you let the guest kernel know what you're doing. No
doubt you could do the same thing yourself (the PCI support functions
call the raw support functions), but if you use a BAR, everything from
the BIOS onwards is already plumbed through.
Sure we could do something independent a la vbus, but my preference has
always been to behave like real hardware.
Oh, and if it's a BAR you can use device assignment. You can't assign a
device that exposes memory the host doesn't know about.
The QEMU bits and the device model bits are actually relatively
simple. The part that I think needs deeper thought is the
guest-visible interface.
A char device is probably not the best interface. I think you want
something like tmpfs/hugetlbfs.
Yes, those are so wonderful to work with.
    qemu -ivshmem file=/dev/shm/ring.shared,name=shared-ring,size=1G,notify=/path/to/socket

/path/to/socket is used to pass an eventfd.
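Presumably the eventfd travels over that UNIX socket as SCM_RIGHTS
ancillary data; a receive-side sketch (the function name is invented
for illustration):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Sketch only: receive a file descriptor (the eventfd) over a
     * connected UNIX socket. */
    static int recv_notify_fd(int sock)
    {
        char b;
        struct iovec iov = { .iov_base = &b, .iov_len = 1 };
        union {
            struct cmsghdr align;
            char buf[CMSG_SPACE(sizeof(int))];
        } u;
        struct msghdr msg = { 0 };
        int fd;

        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = u.buf;
        msg.msg_controllen = sizeof(u.buf);
        if (recvmsg(sock, &msg, 0) <= 0)
            return -1;
        struct cmsghdr *c = CMSG_FIRSTHDR(&msg);
        if (!c || c->cmsg_level != SOL_SOCKET || c->cmsg_type != SCM_RIGHTS)
            return -1;
        memcpy(&fd, CMSG_DATA(c), sizeof(fd));
        return fd;
    }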
Within the guest, you'd have:
    /dev/ivshmemfs/shared-ring
An app would mmap() that file, and then could do something like an
ioctl() to get an eventfd.
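From the guest application's side, that would look roughly like this
(IVSHMEM_GET_EVENTFD and the device path are invented for the sketch):

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>

    /* Hypothetical ioctl number; a real driver would define its own. */
    #define IVSHMEM_GET_EVENTFD 0

    int main(void)
    {
        int fd = open("/dev/ivshmemfs/shared-ring", O_RDWR);
        if (fd < 0)
            return 1;
        void *ring = mmap(NULL, 1UL << 30, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);     /* map the 1G segment */
        int efd = ioctl(fd, IVSHMEM_GET_EVENTFD); /* notification fd */
        /* ... poll efd for peer signals, read/write the ring ... */
        return ring == MAP_FAILED || efd < 0;
    }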
Alternatively, you could have something like:
    /dev/ivshmemfs/mem/shared-ring
    /dev/ivshmemfs/notify/shared-ring
Where notify/shared-ring behaves like an eventfd().
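Under that layout, signalling might be a plain 8-byte counter write,
mirroring eventfd semantics (again just a sketch, with an invented
helper):

    #include <fcntl.h>
    #include <stdint.h>
    #include <unistd.h>

    /* Sketch only: poke the peer through the notify file, the way a
     * write to an eventfd would. */
    static void notify_peer(void)
    {
        uint64_t one = 1;
        int fd = open("/dev/ivshmemfs/notify/shared-ring", O_WRONLY);
        if (fd >= 0) {
            (void)write(fd, &one, sizeof(one));
            close(fd);
        }
    }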
Being the traditionalist that I am, I'd much prefer it to be a char
device and use udev rules to get a meaningful name if needed. That's
how every other real device works.