Re: Elvis upstreaming plan

"Michael S. Tsirkin" <mst@xxxxxxxxxx> wrote on 27/11/2013 01:03:25 PM:

>
> On Wed, Nov 27, 2013 at 12:55:07PM +0200, Abel Gordon wrote:
> >
> >
> > "Michael S. Tsirkin" <mst@xxxxxxxxxx> wrote on 27/11/2013 12:29:43 PM:
> >
> > >
> > > On Wed, Nov 27, 2013 at 11:49:03AM +0200, Abel Gordon wrote:
> > > >
> > > >
> > > > "Michael S. Tsirkin" <mst@xxxxxxxxxx> wrote on 27/11/2013 11:21:00
AM:
> > > >
> > > > >
> > > > > On Wed, Nov 27, 2013 at 11:03:57AM +0200, Abel Gordon wrote:
> > > > > >
> > > > > >
> > > > > > "Michael S. Tsirkin" <mst@xxxxxxxxxx> wrote on 26/11/2013
11:11:57
> > PM:
> > > > > >
> > > > > > > On Tue, Nov 26, 2013 at 08:53:47PM +0200, Abel Gordon wrote:
> > > > > > > >
> > > > > > > >
> > > > > > > > Anthony Liguori <anthony@xxxxxxxxxxxxx> wrote on 26/11/2013 08:05:00 PM:
> > > > > > > >
> > > > > > > > >
> > > > > > > > > Razya Ladelsky <RAZYA@xxxxxxxxxx> writes:
> > > > > > > > >
> > > > > > > > > > Hi all,
> > > > > > > > > >
> > > > > > > > > > I am Razya Ladelsky, I work in the IBM Haifa virtualization team, which
> > > > > > > > > > developed Elvis, presented by Abel Gordon at the last KVM forum:
> > > > > > > > > > ELVIS video: https://www.youtube.com/watch?v=9EyweibHfEs
> > > > > > > > > > ELVIS slides: https://drive.google.com/file/d/0BzyAwvVlQckeQmpnOHM5SnB5UVE
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > According to the discussions that took place at the forum, upstreaming
> > > > > > > > > > some of the Elvis approaches seems to be a good idea, which we would
> > > > > > > > > > like to pursue.
> > > > > > > > > >
> > > > > > > > > > Our plan for the first patches is the following:
> > > > > > > > > >
> > > > > > > > > > 1. Shared vhost thread between multiple devices
> > > > > > > > > > This patch creates a worker thread and worker queue shared across multiple
> > > > > > > > > > virtio devices.
> > > > > > > > > > We would like to modify the patch posted in
> > > > > > > > > > https://github.com/abelg/virtual_io_acceleration/commit/3dc6a3ce7bcbe87363c2df8a6b6fee0c14615766
> > > > > > > > > > to limit a vhost thread to serving multiple devices only if they belong to
> > > > > > > > > > the same VM, as Paolo suggested, to avoid isolation or cgroups concerns.
> > > > > > > > > >
> > > > > > > > > > Another modification is related to the creation and removal of vhost
> > > > > > > > > > threads, which will be discussed next.
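
To make the proposed modification concrete, the shared worker would look
roughly like this (a simplified sketch; the struct layout and helpers such
as find_worker_for_mm() and vhost_worker_create() are illustrative, not the
actual patch code):

struct vhost_worker {                    /* shared by several vhost devices */
        spinlock_t              work_lock;
        struct list_head        work_list;   /* work items from all attached devices */
        struct task_struct      *thread;
        struct mm_struct        *owner_mm;   /* the single VM this worker serves */
        int                     num_devices;
};

/* Attach a device to an existing worker only if both belong to the same
 * VM (same owner mm); otherwise give the device its own worker. */
static struct vhost_worker *vhost_worker_get(struct vhost_dev *dev)
{
        struct vhost_worker *w = find_worker_for_mm(dev->mm);  /* illustrative */

        if (w && w->owner_mm == dev->mm) {
                w->num_devices++;
                return w;
        }
        return vhost_worker_create(dev->mm);                   /* illustrative */
}

The worker is keyed by the owner VM, so two VMs can never end up sharing a
thread.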
> > > > > > > > >
> > > > > > > > > I think this is an exceptionally bad idea.
> > > > > > > > >
> > > > > > > > > We shouldn't throw away isolation without exhausting every other
> > > > > > > > > possibility.
> > > > > > > >
> > > > > > > > It seems you have missed the important details here.
> > > > > > > > Anthony, we are aware you are concerned about isolation
> > > > > > > > and you believe we should not share a single vhost thread across
> > > > > > > > multiple VMs.  That's why Razya proposed to change the patch
> > > > > > > > so we will serve multiple virtio devices using a single vhost thread
> > > > > > > > "only if the devices belong to the same VM". This series of patches
> > > > > > > > will not allow two different VMs to share the same vhost thread.
> > > > > > > > So, I don't see why this would be throwing away isolation or why
> > > > > > > > it could be an "exceptionally bad idea".
> > > > > > > >
> > > > > > > > By the way, I remember that during the KVM forum a similar
> > > > > > > > approach of having a single data plane thread for many devices
> > > > > > > > was discussed....
> > > > > > > > > We've seen very positive results from adding threads.  We should also
> > > > > > > > > look at scheduling.
> > > > > > > >
> > > > > > > > ...and we have also seen exceptionally negative results from
> > > > > > > > adding threads, both for vhost and data-plane. If you have a lot of idle
> > > > > > > > time/cores then it makes sense to run multiple threads. But IMHO in many
> > > > > > > > scenarios you don't have a lot of idle time/cores... and if you have them
> > > > > > > > you would probably prefer to run more VMs/VCPUs. Hosting a single SMP VM
> > > > > > > > when you have enough physical cores to run all the VCPU threads and the
> > > > > > > > I/O threads is not a realistic scenario.
> > > > > > > >
> > > > > > > > That's why we are proposing to implement a mechanism that will enable
> > > > > > > > the management stack to configure 1 thread per I/O device (as it is today)
> > > > > > > > or 1 thread for many I/O devices (belonging to the same VM).
> > > > > > > >
> > > > > > > > > Once you are scheduling multiple guests in a single vhost device, you
> > > > > > > > > now create a whole new class of DoS attacks in the best case scenario.
> > > > > > > >
> > > > > > > > Again, we are NOT proposing to schedule multiple guests in a single
> > > > > > > > vhost thread. We are proposing to schedule multiple devices belonging
> > > > > > > > to the same guest in a single (or multiple) vhost thread/s.
> > > > > > > >
> > > > > > >
> > > > > > > I guess a question then becomes why have multiple devices?
> > > > > >
> > > > > > I assume that there are guests that have multiple vhost devices
> > > > > > (net or scsi/tcm).
> > > > >
> > > > > These are kind of uncommon though.  In fact a kernel thread is not a
> > > > > unit of isolation - cgroups supply isolation.
> > > > > If we had use_cgroups kind of like use_mm, we could conceivably
> > > > > do work for multiple VMs on the same thread.
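
Right - today vhost attaches the worker kthread to the owner's cgroups once,
at setup time, which is part of why one thread per VM is the natural
boundary. What you describe would look something like this (the
use_cgroups()/unuse_cgroups() helpers are purely hypothetical and do not
exist today; next_work() and work->owner_task are illustrative too):

/* Today, in vhost_dev_set_owner(): the worker inherits the owner's
 * cgroups once, so it can only be charged to a single VM. */
err = cgroup_attach_task_all(current, worker);

/* Hypothetical per-work variant, in the spirit of use_mm()/unuse_mm():
 * charge each work item to the cgroups of the VM that queued it. */
for (;;) {
        struct vhost_work *work = next_work(worker);  /* illustrative */

        use_cgroups(work->owner_task);                /* does not exist */
        work->fn(work);
        unuse_cgroups();                              /* does not exist */
}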
> > > > >
> > > > >
> > > > > > We can also extend the approach to consider
> > > > > > multiqueue devices, so we can create 1 vhost thread shared for all the
> > > > > > queues, 1 vhost thread for each queue, or a few threads for multiple queues.
> > > > > > We could also share a thread across multiple queues even if they do not
> > > > > > belong to the same device.
> > > > > >
> > > > > > Remember the experiments Shirley Ma did with the split
> > > > > > tx/rx? If we have a control interface we could support both
> > > > > > approaches: different threads or a single thread.
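
To illustrate what such a control interface could boil down to: the mapping
can be as simple as choosing how many workers back a device's queues (a toy
sketch, names invented for illustration):

/* nr_workers == 1         -> one thread shared by all the queues
 * nr_workers == nr_queues -> one thread per queue
 * anything in between     -> a few threads, several queues each */
static struct vhost_worker *queue_to_worker(struct vhost_worker **workers,
                                            unsigned int nr_workers,
                                            unsigned int qidx)
{
        return workers[qidx % nr_workers];
}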
> > > > >
> > > > >
> > > > > I'm a bit concerned about an interface for managing specific
> > > > > threads being so low level.
> > > > > What exactly is it that management knows that makes it
> > > > > efficient to group threads together?
> > > > > That the host is over-committed so we should use less CPU?
> > > > > I'd like the interface to express that knowledge.
> > > > >
> > > >
> > > > We can expose information such as the amount of I/O being
> > > > handled for each queue, the amount of CPU cycles consumed for
> > > > processing the I/O, latency and more.
> > > > If we start with a simple mechanism that just enables the
> > > > feature we can later expose more information to implement a policy
> > > > framework that will be responsible for taking the decisions
> > > > (the orchestration part).
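
For example, the per-virtqueue information exposed to a policy framework
could look like this (field names are made up for illustration, they are
not taken from the patches):

struct vhost_vq_metrics {
        u64 notifications;     /* guest kicks received */
        u64 polled_requests;   /* requests discovered by polling */
        u64 handled_requests;  /* total requests processed */
        u64 busy_cycles;       /* CPU cycles spent handling this queue's I/O */
        u64 queued_time_ns;    /* how long requests waited in the virtqueue */
};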
> > >
> > > What kind of possible policies do you envision?
> > > If we just react to load by balancing the work done,
> > > and when over-committed anyway, localize work so
> > > we get fewer IPIs, then this is not policy, this is the mechanism.
> >
> > (CCing Eyal Moscovici who is actually prototyping with multiple
> > policies and may want to join this thread)
> >
> > Starting with basic policies: we can use a single vhost thread
> > and create new vhost threads if it becomes saturated and there
> > are enough cpu cycles available in the system
> > or if the latency (how long the requests in the virtio queues wait
> > until they are handled) is too high.
> > We can merge threads if the latency is already low or if the threads
> > are not saturated.
> >
> > There is a hidden trade-off here: when you run more vhost threads you
> > may actually be stealing cpu cycles from the vcpu threads and also
> > increasing context switches. So, from the vhost perspective it may
> > improve performance but from the vcpu threads perspective it may
> > degrade performance.
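
As a rough sketch, the basic black-box policy above boils down to something
like the following (thresholds and helper names are invented for
illustration):

static void rebalance_vhost_workers(const struct elvis_load *load)
{
        /* Saturated, latency is hurting and there are spare cycles:
         * add a vhost thread. */
        if (load->utilization > SPLIT_UTIL_THRESHOLD &&
            load->avg_queue_latency_ns > SPLIT_LATENCY_THRESHOLD &&
            idle_cpus_available())
                split_busiest_worker();
        /* Underutilized and latency is already low: merge threads so we
         * stop stealing cycles from the vcpu threads. */
        else if (load->utilization < MERGE_UTIL_THRESHOLD &&
                 load->avg_queue_latency_ns < MERGE_LATENCY_THRESHOLD)
                merge_least_busy_workers();
}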
>
> So this is a very interesting problem to solve but what does
> management know that suggests it can solve it better?

Yep, and Eyal is currently working on this.
What does the management know? That depends on who the management is :)
It could be just I/O activity (black-box: I/O request rate, I/O
handling rate, latency) or application performance (white-box).

>
> > >
> > >
> > > >
> > > > > > >
> > > > > > >
> > > > > > > > >
> > > > > > > > > > 2. Sysfs mechanism to add and remove vhost threads
> > > > > > > > > > This patch allows us to add and remove vhost threads dynamically.
> > > > > > > > > >
> > > > > > > > > > A simpler way to control the creation of vhost threads is statically
> > > > > > > > > > determining the maximum number of virtio devices per worker via a kernel
> > > > > > > > > > module parameter (which is the way the previously mentioned patch is
> > > > > > > > > > currently implemented).
> > > > > > > > > >
> > > > > > > > > > I'd like to ask for advice here about the preferable way to go:
> > > > > > > > > > Although having the sysfs mechanism provides more flexibility, it may be a
> > > > > > > > > > good idea to start with a simple static parameter, and have the first
> > > > > > > > > > patches as simple as possible. What do you think?
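
The static alternative really is a one-liner, which is part of its appeal
for a first submission (the parameter name is illustrative):

#include <linux/module.h>

/* Bound how many virtio devices a single vhost worker may serve. */
static int devs_per_worker = 1;
module_param(devs_per_worker, int, 0444);
MODULE_PARM_DESC(devs_per_worker,
                 "Maximum number of virtio devices served by one vhost thread");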
> > > > > > > > > >
> > > > > > > > > > 3. Add virtqueue polling mode to vhost
> > > > > > > > > > Have the vhost thread poll the virtqueues with high I/O rate for new
> > > > > > > > > > buffers, and avoid asking the guest to kick us.
> > > > > > > > > > https://github.com/abelg/virtual_io_acceleration/commit/26616133fafb7855cc80fac070b0572fd1aaf5d0
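
Stripped to its core, the idea is roughly the following
(vhost_disable_notify()/vhost_enable_notify() are the existing vhost
helpers; the rate check, vq_has_new_buffers() and the other helpers are
paraphrased, not the exact patch code):

if (vq_io_rate(vq) > POLL_THRESHOLD) {          /* busy queue: poll it */
        vhost_disable_notify(dev, vq);          /* suppress guest kicks */
        while (worker_keeps_polling(vq))        /* illustrative */
                if (vq_has_new_buffers(vq))     /* illustrative */
                        handle_vq_requests(vq);
} else {                                        /* idle queue: back to kicks */
        vhost_enable_notify(dev, vq);
}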
> > > > > > > > >
> > > > > > > > > Ack on this.
> > > > > > > >
> > > > > > > > :)
> > > > > > > >
> > > > > > > > Regards,
> > > > > > > > Abel.
> > > > > > > >
> > > > > > > > >
> > > > > > > > > Regards,
> > > > > > > > >
> > > > > > > > > Anthony Liguori
> > > > > > > > >
> > > > > > > > > > 4. vhost statistics
> > > > > > > > > > This patch introduces a set of statistics to monitor different performance
> > > > > > > > > > metrics of vhost and our polling and I/O scheduling mechanisms. The
> > > > > > > > > > statistics are exposed using debugfs and can be easily displayed with a
> > > > > > > > > > Python script (vhost_stat, based on the old kvm_stats)
> > > > > > > > > > https://github.com/abelg/virtual_io_acceleration/commit/ac14206ea56939ecc3608dc5f978b86fa322e7b0
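
Exposing such counters through debugfs so that vhost_stat can read them is
straightforward; a minimal sketch (the directory layout, counter names and
the stats structure are illustrative):

struct dentry *dir = debugfs_create_dir("vhost", NULL);

debugfs_create_u64("handled_requests", 0444, dir, &stats->handled_requests);
debugfs_create_u64("polled_requests", 0444, dir, &stats->polled_requests);
debugfs_create_u64("busy_cycles", 0444, dir, &stats->busy_cycles);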
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > 5. Add heuristics to improve I/O scheduling
> > > > > > > > > > This patch enhances the round-robin mechanism with a set of heuristics to
> > > > > > > > > > decide when to leave a virtqueue and proceed to the next.
> > > > > > > > > > https://github.com/abelg/virtual_io_acceleration/commit/f6a4f1a5d6b82dc754e8af8af327b8d0f043dc4d
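
In simplified form, the kind of heuristic meant here is: keep servicing a
virtqueue while it stays busy, but bound the work per round so the other
queues are not starved (the limits and vq_has_new_buffers() are invented
for illustration):

static bool should_leave_vq(struct vhost_virtqueue *vq,
                            unsigned int handled, u64 elapsed_ns)
{
        if (handled >= MAX_REQS_PER_ROUND)   /* fairness cap per round */
                return true;
        if (elapsed_ns >= MAX_NS_PER_ROUND)  /* bound time spent on one queue */
                return true;
        return !vq_has_new_buffers(vq);      /* nothing left: move on */
}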
> > > > > > > > > >
> > > > > > > > > > This patch improves the handling of the requests by the vhost thread, but
> > > > > > > > > > could perhaps be delayed to a later time, and not submitted as one of the
> > > > > > > > > > first Elvis patches.
> > > > > > > > > > I'd love to hear some comments about whether this patch needs to be part
> > > > > > > > > > of the first submission.
> > > > > > > > > >
> > > > > > > > > > Any other feedback on this plan will be appreciated,
> > > > > > > > > > Thank you,
> > > > > > > > > > Razya
> > > > > > > > >
> > > > > > >
> > > > >
> > >
>

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



