Updated Elvis Upstreaming Roadmap

Hi,

Thank you all for your comments.
I'm sorry for taking this long to reply; I was away on vacation.

It was a good, long discussion; many issues were raised, and we'd like 
to address them with the following proposed roadmap for the Elvis patches.
In general, we believe it is best to start with patches that are 
as simple as possible, providing the basic Elvis functionality, 
and attend to the more complicated issues in subsequent patches.

Here's the roadmap for the Elvis patches:

1. Shared vhost thread for multiple devices.

We believe the way to go here is to start with a patch in which a shared 
vhost thread serves multiple devices of the SAME vm.
The next step/patch may handle vms belonging to the same cgroup.

Finally, we will extend the functionality so that the shared vhost thread 
serves multiple vms (not necessarily belonging to the same cgroup).

There was a lot of discussion about how to enforce cgroup policies, and 
we will consider the various solutions in a future patch.

2. Creation of vhost threads

We suggested two ways of controlling the creation and removal of vhost
threads: 
- statically: a kernel module parameter determining the maximum number 
of virtio devices per worker 
- dynamically: a sysfs mechanism to add and remove vhost threads 

It seems simplest to take the static approach as a first stage. At a 
second stage (next patch), we'll advance to changing the number of vhost 
threads dynamically, using the static module parameter only as a default 
value. 

Regarding cmwq (concurrency-managed workqueues), it is an interesting 
mechanism, which we need to explore further.
At the moment we prefer not to change the vhost model to use cmwq, as some 
of the issues that were discussed, such as cgroups, are not supported by 
cmwq, and this would add more complexity.
However, we'll look further into it and consider it at a later stage.

3. Adding polling mode to vhost 

It is a good idea to make polling adaptive based on various factors such as 
the I/O rate, the guest kick overhead (which is the tradeoff of polling), 
or the amount of wasted cycles (cycles we kept polling but no new work 
arrived).
However, as an initial polling patch, we would prefer a naive polling 
approach, which can be tuned in later patches.

4. vhost statistics 

The issue raised regarding vhost statistics was whether to use ftrace 
instead of the debugfs mechanism.
However, looking further into the kvm stat mechanism, we learned that 
ftrace did not replace the plain debugfs mechanism, but was used in 
addition to it.
 
We propose to continue using debugfs for statistics, in a manner similar 
to kvm, and at some point in the future ftrace can be added to vhost as 
well.
 
Does this plan look OK?
If there are no further comments, I'll start preparing the patches 
according to what we've agreed on thus far.
Thank you,
Razya




