On Thu, Nov 28, 2013 at 09:31:50AM +0200, Abel Gordon wrote:
> Stefan Hajnoczi <stefanha@xxxxxxxxx> wrote on 27/11/2013 05:00:53 PM:
>
> > On Wed, Nov 27, 2013 at 09:43:33AM +0200, Joel Nider wrote:
> > > Hi,
> > >
> > > Razya is out for a few days, so I will try to answer the questions as
> > > well as I can:
> > >
> > > "Michael S. Tsirkin" <mst@xxxxxxxxxx> wrote on 26/11/2013 11:11:57 PM:
> > >
> > > > From: "Michael S. Tsirkin" <mst@xxxxxxxxxx>
> > > > To: Abel Gordon/Haifa/IBM@IBMIL,
> > > > Cc: Anthony Liguori <anthony@xxxxxxxxxxxxx>, abel.gordon@xxxxxxxxx,
> > > > asias@xxxxxxxxxx, digitaleric@xxxxxxxxxx,
> > > > Eran Raichstein/Haifa/IBM@IBMIL, gleb@xxxxxxxxxx,
> > > > jasowang@xxxxxxxxxx, Joel Nider/Haifa/IBM@IBMIL,
> > > > kvm@xxxxxxxxxxxxxxx, pbonzini@xxxxxxxxxx,
> > > > Razya Ladelsky/Haifa/IBM@IBMIL
> > > > Date: 27/11/2013 01:08 AM
> > > > Subject: Re: Elvis upstreaming plan
> > > >
> > > > On Tue, Nov 26, 2013 at 08:53:47PM +0200, Abel Gordon wrote:
> > > > >
> > > > > Anthony Liguori <anthony@xxxxxxxxxxxxx> wrote on 26/11/2013 08:05:00 PM:
> > > > >
> > > > > > Razya Ladelsky <RAZYA@xxxxxxxxxx> writes:
> > > > >
> > > > > <edit>
> > > > >
> > > > > That's why we are proposing to implement a mechanism that will
> > > > > enable the management stack to configure 1 thread per I/O device
> > > > > (as it is today) or 1 thread for many I/O devices (belonging to
> > > > > the same VM).
> > > > >
> > > > > > Once you are scheduling multiple guests in a single vhost
> > > > > > device, you now create a whole new class of DoS attacks in the
> > > > > > best case scenario.
> > > > >
> > > > > Again, we are NOT proposing to schedule multiple guests in a
> > > > > single vhost thread. We are proposing to schedule multiple
> > > > > devices belonging to the same guest in a single (or multiple)
> > > > > vhost thread/s.
> > > >
> > > > I guess a question then becomes why have multiple devices?
> > > If you mean "why serve multiple devices from a single thread", the
> > > answer is that we cannot rely on the Linux scheduler, which has no
> > > knowledge of I/O queues, to do a decent job of scheduling I/O. The
> > > idea is to take over the I/O scheduling responsibilities from the
> > > kernel's thread scheduler with a more efficient I/O scheduler inside
> > > each vhost thread. By combining all of the I/O devices from the same
> > > guest (disks, network cards, etc.) in a single I/O thread, we can
> > > provide better scheduling because we have more knowledge of the
> > > nature of the work. So now, instead of relying on the Linux
> > > scheduler to perform context switches between multiple vhost
> > > threads, we have a single thread context in which we can do the I/O
> > > scheduling more efficiently. We can closely monitor the performance
> > > needs of each queue of each device inside the vhost thread, which
> > > gives us much more information than relying on the kernel's thread
> > > scheduler.
> >
> > And now there are 2 performance-critical pieces that need to be
> > optimized/tuned instead of just 1:
> >
> > 1. Kernel infrastructure that QEMU and vhost use today but you decided
> >    to bypass.
>
> We are NOT bypassing existing components. We are just changing the
> threading model: instead of having one vhost thread per virtio device,
> we propose to use 1 vhost thread to serve devices belonging to the same
> VM. In addition, we propose to add new features such as polling.

What I meant by "bypassing" is that reducing the scope to single VMs
leaves multi-VM performance unchanged. I know the original aim was to
improve multi-VM performance too, and I hope that will be possible by
extending the current approach.

Stefan
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html