Re: Elvis upstreaming plan

On Wed, Nov 27, 2013 at 12:18:51PM +0200, Abel Gordon wrote:
> 
> 
> Jason Wang <jasowang@xxxxxxxxxx> wrote on 27/11/2013 04:49:20 AM:
> 
> >
> > On 11/24/2013 05:22 PM, Razya Ladelsky wrote:
> > > Hi all,
> > >
> > > I am Razya Ladelsky. I work in the IBM Haifa virtualization team, which
> > > developed Elvis, presented by Abel Gordon at the last KVM forum:
> > > ELVIS video:  https://www.youtube.com/watch?v=9EyweibHfEs
> > > ELVIS slides: https://drive.google.com/file/d/0BzyAwvVlQckeQmpnOHM5SnB5UVE
> > >
> > >
> > > According to the discussions that took place at the forum, upstreaming
> > > some of the Elvis approaches seems to be a good idea, which we would
> > > like to pursue.
> > >
> > > Our plan for the first patches is the following:
> > >
> > > 1. Shared vhost thread between multiple devices
> > > This patch creates a worker thread and worker queue shared across multiple
> > > virtio devices.
> > > We would like to modify the patch posted in
> > > https://github.com/abelg/virtual_io_acceleration/commit/3dc6a3ce7bcbe87363c2df8a6b6fee0c14615766
> > > to limit a vhost thread to serving multiple devices only if they belong to
> > > the same VM, as Paolo suggested, to avoid isolation or cgroups concerns.
> > >
> > > Another modification is related to the creation and removal of vhost
> > > threads, which will be discussed next.
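
For illustration only (the names below are made up, this is not the actual
patch), the structure boils down to one worker thread per VM draining a work
list that all of that VM's virtio devices feed:

#include <linux/list.h>
#include <linux/sched.h>
#include <linux/spinlock.h>

struct elvis_work {
    struct list_head node;
    void (*fn)(struct elvis_work *work);
};

struct elvis_worker {               /* one per VM, shared by all its virtio devices */
    spinlock_t lock;
    struct list_head work_list;
    struct task_struct *thread;     /* a single kthread draining work_list */
};

/* every device owned by the same VM enqueues here instead of owning a thread */
static void elvis_queue_work(struct elvis_worker *w, struct elvis_work *work)
{
    spin_lock(&w->lock);
    list_add_tail(&work->node, &w->work_list);
    spin_unlock(&w->lock);
    wake_up_process(w->thread);     /* worker wakes up and runs work->fn() */
}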
> > >
> > > 2. Sysfs mechanism to add and remove vhost threads
> > > This patch allows us to add and remove vhost threads dynamically.
> > >
> > > A simpler way to control the creation of vhost threads is to statically
> > > determine the maximum number of virtio devices per worker via a kernel
> > > module parameter (which is the way the previously mentioned patch is
> > > currently implemented).
> >
> > Any chance we can re-use cmwq (concurrency-managed workqueues) instead of
> > inventing another mechanism? Looks like there's a lot of function
> > duplication here. Bandan has an RFC to do this.
> 
> Thanks for the suggestion. We should certainly take a look at Bandan's
> patches which I guess are:
> 
> http://www.mail-archive.com/kvm@xxxxxxxxxxxxxxx/msg96603.html
> 
> My only concern here is that we may not be able to easily implement
> our polling mechanism and heuristics with cmwq.

It's not so hard: to poll, you just requeue the work to make sure it's
re-invoked.
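
E.g. something along these lines, using the standard workqueue API (the
poller struct and the buffer-handling helper are made up):

#include <linux/workqueue.h>

struct vq_poller {
    struct work_struct work;
    bool keep_polling;          /* cleared when we decide to re-enable guest kicks */
};

/* stand-in for the real virtqueue processing */
static void handle_pending_buffers(struct vq_poller *p) { }

static void vq_poll_fn(struct work_struct *work)
{
    struct vq_poller *p = container_of(work, struct vq_poller, work);

    handle_pending_buffers(p);

    if (p->keep_polling)
        queue_work(system_wq, &p->work);    /* requeue ourselves: that is the poll */
}

static void vq_poll_start(struct vq_poller *p)
{
    p->keep_polling = true;
    INIT_WORK(&p->work, vq_poll_fn);
    queue_work(system_wq, &p->work);
}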

> > >
> > > I'd like to ask for advice here about the preferable way to go:
> > > although having the sysfs mechanism provides more flexibility, it may be
> > > a good idea to start with a simple static parameter, and have the first
> > > patches as simple as possible. What do you think?
> > >
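
For reference, the static variant really is tiny; roughly (the parameter
name here is hypothetical):

#include <linux/moduleparam.h>

static unsigned int devs_per_worker = 1;    /* hypothetical name */
module_param(devs_per_worker, uint, 0444);
MODULE_PARM_DESC(devs_per_worker,
         "Maximum number of virtio devices served by one vhost worker thread");

The sysfs route would instead expose attributes that create and destroy
workers at runtime, which is more code but keeps the policy adjustable.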
> > > 3. Add virtqueue polling mode to vhost
> > > Have the vhost thread poll the virtqueues with a high I/O rate for new
> > > buffers, and avoid asking the guest to kick us.
> > > https://github.com/abelg/virtual_io_acceleration/commit/26616133fafb7855cc80fac070b0572fd1aaf5d0
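
For context, the core of the polling mode is: suppress guest notifications
and keep checking the ring from the vhost thread, giving up after some idle
threshold. Roughly (all helper and type names below are placeholders, not
the real vhost API):

static void poll_virtqueue(struct my_vq *vq, u64 poll_stop_idle)
{
    u64 idle = 0;

    guest_notifications_off(vq);        /* ask the guest to stop kicking us */

    while (idle < poll_stop_idle) {
        if (vq_has_new_buffers(vq)) {
            process_buffers(vq);        /* found work: reset the idle counter */
            idle = 0;
        } else {
            idle++;                     /* wasted polling, worth counting for stats */
            cpu_relax();
        }
    }

    guest_notifications_on(vq);         /* idle long enough: fall back to kicks */
}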
> >
> > Maybe we can make poll_stop_idle adaptive, which may help the light load
> > case. Consider that the guest is often slower than vhost: if we just have
> > one or two VMs, polling too much may waste CPU in this case.
> 
> Yes, making polling adaptive based on the amount of wasted cycles (cycles
> we spent polling but didn't find new work) and the I/O rate is a very good idea.
> Note that we already measure and expose these values, but we do not use them
> to adapt the polling mechanism.
> 
> Having said that, note that adaptive polling may be a bit tricky.
> Remember that the cycles we waste polling in the vhost thread actually
> improve the performance of the vcpu threads, because the guest is no longer
> required to kick (pio==exit) the host when vhost does polling. So even if
> we waste cycles in the vhost thread, we are saving cycles in the
> vcpu thread and improving performance.


So my suggestion would be:

- guest runs some kicks
- measures how long it took, e.g. kick = T cycles
- sends this info to host

host polls for at most fraction * T cycles
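
The guest-side measurement could be as simple as the sketch below (just an
illustration: how T gets reported to the host is left open, and a real driver
would not notify without new buffers, this only times the exit):

#include <linux/timex.h>    /* get_cycles() */
#include <linux/virtio.h>

static u64 measure_kick_cycles(struct virtqueue *vq, int samples)
{
    u64 total = 0;
    int i;

    for (i = 0; i < samples; i++) {
        u64 start = get_cycles();
        virtqueue_notify(vq);           /* the kick: exits to the host */
        total += get_cycles() - start;
    }
    return total / samples;             /* T; host then polls at most fraction * T */
}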


> > > 4. vhost statistics
> > > This patch introduces a set of statistics to monitor different performance
> > > metrics of vhost and our polling and I/O scheduling mechanisms. The
> > > statistics are exposed using debugfs and can be easily displayed with a
> > > Python script (vhost_stat, based on the old kvm_stats)
> > > https://github.com/abelg/virtual_io_acceleration/commit/ac14206ea56939ecc3608dc5f978b86fa322e7b0
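
The debugfs side of this is small; per counter it is roughly the following
(counter and directory names are made up for illustration):

#include <linux/debugfs.h>
#include <linux/module.h>

/* bumped from the vhost worker, e.g. from the polling loop */
static u64 poll_wasted_cycles;
static u64 poll_found_work;
static struct dentry *vhost_dbg_dir;

static int __init vhost_stats_init(void)
{
    vhost_dbg_dir = debugfs_create_dir("vhost", NULL);
    debugfs_create_u64("poll_wasted_cycles", 0444, vhost_dbg_dir, &poll_wasted_cycles);
    debugfs_create_u64("poll_found_work", 0444, vhost_dbg_dir, &poll_found_work);
    return 0;
}
module_init(vhost_stats_init);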
> >
> > How about using tracepoints instead? Besides statistics, they can also
> > help with debugging.
> 
> Yep, we just had a discussion with Gleb about this :)
> 
> > >
> > > 5. Add heuristics to improve I/O scheduling
> > > This patch enhances the round-robin mechanism with a set of heuristics to
> > > decide when to leave a virtqueue and proceed to the next.
> > > https://github.com/abelg/virtual_io_acceleration/commit/f6a4f1a5d6b82dc754e8af8af327b8d0f043dc4d
> > >
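
The shape of such a heuristic is simple; for example (an entirely made-up
policy and names, only to illustrate "when to leave a virtqueue"):

/* handle at most VQ_BUDGET requests per queue, leave earlier if it runs dry */
#define VQ_BUDGET 64

static void service_virtqueues(struct my_vq *vqs, int nvqs)
{
    int i;

    for (i = 0; i < nvqs; i++) {        /* plain round-robin over the VM's queues */
        int handled = 0;

        while (handled < VQ_BUDGET && vq_has_new_buffers(&vqs[i])) {
            process_one_buffer(&vqs[i]);
            handled++;
        }
        /* a smarter heuristic could also weigh queue depth, latency or fairness */
    }
}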
> > > This patch improves the handling of the requests by the vhost thread, but
> > > could perhaps be delayed to a later time, and not submitted as one of the
> > > first Elvis patches.
> > > I'd love to hear some comments about whether this patch needs to be part
> > > of the first submission.
> > >
> > > Any other feedback on this plan will be appreciated,
> > > Thank you,
> > > Razya
> > >



