Re: Fw: Benchmarking for vhost polling patch

> > 
> > Our suggestion would be to use the maximum (a large enough) value,
> > so that vhost is polling 100% of the time.
> >
> > The polling optimization mainly addresses users who want to maximize
> > their performance, even at the expense of wasting cpu cycles. The
> > maximum value will produce the biggest impact on performance.
> 
> *Everyone* is interested in getting maximum performance from
> their systems.
> 

Maybe so, but not everyone is willing to pay the price.
That is also the reason why this optimization should not be enabled by 
default. 

> > However, using the maximum default value will be valuable even for
> > users who care more about the normalized throughput/cpu criterion.
> > Such users, interested in a finer tuning of the polling timeout, need
> > to look for an optimal timeout value for their system. The maximum
> > value serves as the upper limit of the range that needs to be
> > searched for such an optimal timeout value.
> 
> Number of users who are going to do this kind of tuning
> can be counted on one hand.
> 

If the optimization is not enabled by default, the default value is almost
irrelevant: users who turn the feature on should understand that there is
an associated cost, and that they have to tune their system if they want
to get the maximum benefit (however they define that benefit).
The maximum value is a good starting point that will work in most cases
and can be used to start the tuning.

> > 
> > > There are some cases where networking stack already
> > > exposes low-level hardware detail to userspace, e.g.
> > > tcp polling configuration. If we can't come up with
> > > a way to abstract hardware, maybe we can at least tie
> > > it to these existing controls rather than introducing
> > > new ones?
> > > 
> > 
> > We've spent time thinking about the possible interfaces that
> > could be appropriate for such an optimization (including tcp
> > polling). We think that using an ioctl to "configure" the virtual
> > device/vhost, in the same manner that e.g. SET_NET_BACKEND is
> > configured, makes a lot of sense, and is consistent with the
> > existing mechanism.
> > 
> > Thanks,
> > Razya
> 
> guest is giving up its share of CPU for the benefit of vhost, right?
> So maybe exposing this to guest is appropriate, and then
> add e.g. an ethtool interface for guest admin to set this.
> 

The decision of whether to turn polling on (and at what rate) should be
made by the system administrator, who has a broad view of the system and
its workload, not by the guest administrator.
Polling should be a tunable parameter on the host side; the guest should
not be aware of it.
The guest is not necessarily giving up its time: there may simply be an
extra dedicated core, or free cpu cycles on a different cpu.
We provide a mechanism and an interface that some other program can tune
to implement its policy.
This patch is all about the mechanism, not the policy of how to use it.
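
For concreteness, here is a minimal sketch of what the userspace side of
such an ioctl could look like, modeled on the way existing vhost ioctls
such as VHOST_NET_SET_BACKEND are issued. The VHOST_SET_POLL_TIMEOUT
name, the request number, and the struct are assumptions made for
illustration, not part of the posted patch:

/*
 * Hypothetical userspace side of an ioctl-based polling knob.  The
 * VHOST_SET_POLL_TIMEOUT name, request number, and struct are
 * illustrative only; they just mirror the style of existing vhost
 * ioctls such as VHOST_NET_SET_BACKEND in <linux/vhost.h>.
 */
#include <stdint.h>
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>

struct vhost_poll_timeout {
	uint32_t timeout_us;	/* 0 = no polling; large = poll ~100% */
};

/* 0xAF is the real vhost ioctl magic; 0x70 is a made-up request number. */
#define VHOST_SET_POLL_TIMEOUT _IOW(0xAF, 0x70, struct vhost_poll_timeout)

int main(void)
{
	struct vhost_poll_timeout t = { .timeout_us = UINT32_MAX };
	int fd = open("/dev/vhost-net", O_RDWR);

	if (fd < 0) {
		perror("open /dev/vhost-net");
		return 1;
	}
	if (ioctl(fd, VHOST_SET_POLL_TIMEOUT, &t) < 0)
		perror("VHOST_SET_POLL_TIMEOUT (hypothetical)");
	close(fd);
	return 0;
}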

Thank you,
Razya 



