Re: [RFC PATCH 0/4] cgroup aware workqueues

Hi Bandan,

>  Bandan Das <bsd@xxxxxxxxxx> wrote on 03/31/2016 09:45:43 PM:
> >
> >> > > opportunity for optimization, at least for some workloads...
> >> > 
> >> > What sort of optimizations are we talking about?
> >> 
> >> Well, if we take Elvis (1) as the theoretical base, there could be
> >> benefit from doing I/O scheduling inside vhost.
> >
> > Yeah, if that actually is beneficial, take full control of the
> > kworker thread.
> 
> Well, even if it actually is beneficial (which I am sure it is), it seems a
> little impractical to block current improvements based on a future prospect
> that, as far as I know, no one is working on?

I'm not suggesting we block current improvements based on a future 
prospect. But, unfortunately, the results you've posted show a regression 
rather than an improvement.

And, I thought you were working on comparing different approaches to vhost 
threading, like workqueues and a shared vhost thread (1) ;-)
Anyway, I'm working on this in the background, and, frankly, I cannot say 
I have a clear vision of the best route yet.
 
> There have been discussions about this in the past and, IIRC, most people agree
> about not going the byos* route. But I am still all for such a proposal, and if
> it's good/clean enough, I think we can definitely tear down what we have and
> throw it away! The I/O scheduling part is intrusive enough that even the current
> code base has to be changed quite a bit.

The "byos" route seems more promising with respect to possible performance 
gains, but it will definitely add complexity, and I cannot say whether the 
added complexity will be worth those gains.

Meanwhile, I'd suggest we first understand what causes the regression with 
your current patches; maybe then we'll be in a better position to pick the 
right direction. :)
 
> *byos = bring your own scheduling ;)
> 
> > Thanks.

--
Sincerely yours,
Mike.

[1] https://lwn.net/Articles/650857/ 


