Re: [RFC PATCH 0/3] generic hypercall support

Avi Kivity wrote:
> Gregory Haskins wrote:
>
>   
>>>> Ack.  I hope when its all said and done I can convince you that the
>>>> framework to code up those virtio backends in the kernel is vbus ;)
>>>>       
>>> If vbus doesn't bring significant performance advantages, I'll prefer
>>> virtio because of existing investment.
>>>     
>>
>> Just to clarify: vbus is just the container/framework for the in-kernel
>> models.  You can implement and deploy virtio devices inside the
>> container (tho I haven't had a chance to sit down and implement one
>> yet).  Note that I did publish a virtio transport in the last few series
>> to demonstrate how that might work, so its just ripe for the picking if
>> someone is so inclined.
>>
>>   
>
> Yeah I keep getting confused over this.
>
>> So really the question is whether you implement the in-kernel virtio
>> backend in vbus, in some other framework, or just do it standalone.
>>   
>
> I prefer the standalone model.  Keep the glue in userspace.

Just to keep the facts straight: "glue in userspace" and "standalone"
are independent variables.  E.g. you can have the glue in userspace for
vbus, too.  It's not written that way today for KVM, but it's moving in
that direction as we work through these subtopics like irqfd, dynhc, etc.

What vbus buys you as a core technology is that you can write one
backend that works "everywhere" (you only need a glue layer for each
environment you want to support).  You might say "I can make my backends
work everywhere too", and to that I would say "by the time you get it to
work, you will have duplicated almost my exact effort on vbus" ;).  Of
course, you may also say "I don't care if it works anywhere else but
KVM", which is a perfectly valid (if unfortunate) position to take.

I think the confusion point is possibly a result of the name "vbus".
The vbus core isn't really a true bus in the traditional sense.  It's
just a host-side, kernel-based container for these device models.  That
is all I am talking about here.  There is, of course, also an LDM "bus"
for rendering vbus devices in the guest as a function of the current
kvm-specific glue layer I've written.  Note that this glue layer could
render them as PCI in the future, TBD.

-Greg

