Re: Elvis upstreaming plan

Stefan Hajnoczi <stefanha@xxxxxxxxx> wrote on 27/11/2013 05:00:53 PM:

> On Wed, Nov 27, 2013 at 09:43:33AM +0200, Joel Nider wrote:
> > Hi,
> >
> > Razya is out for a few days, so I will try to answer the questions as
> > well as I can:
> >
> > "Michael S. Tsirkin" <mst@xxxxxxxxxx> wrote on 26/11/2013 11:11:57 PM:
> >
> > > From: "Michael S. Tsirkin" <mst@xxxxxxxxxx>
> > > To: Abel Gordon/Haifa/IBM@IBMIL,
> > > Cc: Anthony Liguori <anthony@xxxxxxxxxxxxx>, abel.gordon@xxxxxxxxx,
> > > asias@xxxxxxxxxx, digitaleric@xxxxxxxxxx, Eran Raichstein/Haifa/
> > > IBM@IBMIL, gleb@xxxxxxxxxx, jasowang@xxxxxxxxxx, Joel Nider/Haifa/
> > > IBM@IBMIL, kvm@xxxxxxxxxxxxxxx, pbonzini@xxxxxxxxxx, Razya Ladelsky/
> > > Haifa/IBM@IBMIL
> > > Date: 27/11/2013 01:08 AM
> > > Subject: Re: Elvis upstreaming plan
> > >
> > > On Tue, Nov 26, 2013 at 08:53:47PM +0200, Abel Gordon wrote:
> > > >
> > > >
> > > > Anthony Liguori <anthony@xxxxxxxxxxxxx> wrote on 26/11/2013 08:05:00 PM:
> > > >
> > > > >
> > > > > Razya Ladelsky <RAZYA@xxxxxxxxxx> writes:
> > > > >
> > <edit>
> > > >
> > > > That's why we are proposing to implement a mechanism that will enable
> > > > the management stack to configure 1 thread per I/O device (as it is
> > > > today) or 1 thread for many I/O devices (belonging to the same VM).
> > > >
> > > > > Once you are scheduling multiple guests in a single vhost device, you
> > > > > now create a whole new class of DoS attacks in the best case
> > > > > scenario.
> > > >
> > > > Again, we are NOT proposing to schedule multiple guests in a single
> > > > vhost thread. We are proposing to schedule multiple devices belonging
> > > > to the same guest in a single (or multiple) vhost thread/s.
> > > >
> > >
> > > I guess a question then becomes why have multiple devices?
> >
> > If you mean "why serve multiple devices from a single thread" the answer is
> > that we cannot rely on the Linux scheduler which has no knowledge of I/O
> > queues to do a decent job of scheduling I/O.  The idea is to take over the
> > I/O scheduling responsibilities from the kernel's thread scheduler with a
> > more efficient I/O scheduler inside each vhost thread.  So by combining all
> > of the I/O devices from the same guest (disks, network cards, etc) in a
> > single I/O thread, it allows us to provide better scheduling by giving us
> > more knowledge of the nature of the work.  So now instead of relying on the
> > linux scheduler to perform context switches between multiple vhost threads,
> > we have a single thread context in which we can do the I/O scheduling more
> > efficiently.  We can closely monitor the performance needs of each queue of
> > each device inside the vhost thread which gives us much more information
> > than relying on the kernel's thread scheduler.
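
For illustration, one round of such an I/O scheduling pass could look roughly
like the sketch below. This is not the ELVIS code; struct guest_vq and
service_vq() are made-up placeholders for the real per-queue state and handler.

#include <stddef.h>
#include <stdbool.h>

struct guest_vq {
	bool has_work;          /* e.g. the avail ring moved since the last pass */
	unsigned long serviced; /* simple activity counter the policy can use */
};

/* hypothetical per-queue handler: process up to 'budget' requests,
 * return how many were actually completed */
extern unsigned int service_vq(struct guest_vq *vq, unsigned int budget);

/*
 * One scheduling pass over every queue of one guest.  Because all of the
 * guest's queues are visible inside a single thread, the policy can weight
 * them by observed activity instead of leaving the decision to the kernel's
 * CPU scheduler.
 */
static void io_sched_round(struct guest_vq *vqs, size_t nvqs,
                           unsigned int budget_per_vq)
{
	for (size_t i = 0; i < nvqs; i++) {
		if (!vqs[i].has_work)
			continue;
		vqs[i].serviced += service_vq(&vqs[i], budget_per_vq);
	}
}

The per-queue budget is what keeps one busy device (say, a network queue under
load) from starving the guest's other queues within the same thread.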
>
> And now there are 2 performance-critical pieces that need to be
> optimized/tuned instead of just 1:
>
> 1. Kernel infrastructure that QEMU and vhost use today but you decided
> to bypass.

We are NOT bypassing existing components. We are just changing the threading
model: instead of having one vhost thread per virtio device, we propose to use
one vhost thread to serve the devices belonging to the same VM. In addition,
we propose to add new features such as polling.
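
To make the polling idea concrete, here is a minimal sketch (not the actual
patch) of a worker polling a queue for a bounded interval before falling back
to guest notifications. vq_has_work(), vq_handle() and vq_enable_notify() are
hypothetical helpers standing in for the real virtqueue accessors.

#include <stdbool.h>
#include <time.h>

extern bool vq_has_work(void *vq);       /* did the guest post new requests? */
extern void vq_handle(void *vq);         /* process what is currently queued */
extern void vq_enable_notify(void *vq);  /* re-arm the guest kick/notification */

static void poll_vq(void *vq, long poll_window_ns)
{
	struct timespec start, now;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (;;) {
		if (vq_has_work(vq)) {
			vq_handle(vq);
			/* work arrived: restart the idle window */
			clock_gettime(CLOCK_MONOTONIC, &start);
			continue;
		}
		clock_gettime(CLOCK_MONOTONIC, &now);
		if ((now.tv_sec - start.tv_sec) * 1000000000L +
		    (now.tv_nsec - start.tv_nsec) > poll_window_ns)
			break;  /* queue stayed idle for the whole window */
	}
	vq_enable_notify(vq);   /* give up polling, go back to notifications */
}

The trade-off is explicit: the thread burns some CPU while the queue is active
in exchange for fewer guest exits and kicks.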

> 2. The new ELVIS code which only affects vhost devices in the same VM.

The existing vhost code (or any other user-space back-end) also has to be
optimized/tuned if you care about performance.

>
> If you split the code paths it results in more effort in the long run
> and the benefit seems quite limited once you acknowledge that isolation
> is important.

Isolation is important, but the question is what isolation means.
I personally don't believe that 2 kernel threads provide more
isolation than 1 kernel thread that changes the mm (use_mm) and
avoids queue starvation.
Anyway, we propose to start with the simple approach (not sharing
threads across VMs), but once we show the value for this case we
can discuss whether it makes sense to extend the approach and share
threads between different VMs.
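
Just to spell out what that single-thread case means mechanically, here is a
rough sketch (again, not the ELVIS patches). use_mm()/unuse_mm() are the same
kernel primitives the existing vhost worker already relies on; struct
shared_vq and poll_queue() are made up for illustration.

#include <linux/kthread.h>
#include <linux/list.h>
#include <linux/mm_types.h>
#include <linux/mmu_context.h>   /* use_mm(), unuse_mm() */
#include <linux/sched.h>

struct shared_vq {                      /* hypothetical: one guest virtqueue */
	struct list_head node;
	struct mm_struct *owner_mm;     /* mm of the QEMU process owning it */
};

/* hypothetical: handle up to 'budget' descriptors of this queue */
extern int poll_queue(struct shared_vq *vq, int budget);

static int shared_worker(void *data)
{
	struct list_head *queues = data;   /* queues owned by different VMs */
	struct shared_vq *vq;

	while (!kthread_should_stop()) {
		list_for_each_entry(vq, queues, node) {
			use_mm(vq->owner_mm);    /* enter that guest's address space */
			poll_queue(vq, 64);      /* bounded budget avoids starvation */
			unuse_mm(vq->owner_mm);  /* leave it before the next queue */
		}
		cond_resched();
	}
	return 0;
}

The use_mm() switch is already what each per-VM vhost worker does for its own
guest today; the debate above is whether doing it from one thread for several
VMs weakens isolation.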


> Isn't the sane thing to do taking lessons from ELVIS improving existing
> pieces instead of bypassing them?  That way both the single VM and
> host-wide performance improves.  And as a bonus non-virtualization use
> cases may also benefit.

The model we are proposing is specific to I/O virtualization... not sure
if it is applicable to bare metal.

>
> Stefan
>
