On 09/02/2012 09:59 PM, Rik van Riel wrote:
> On 09/02/2012 06:12 AM, Gleb Natapov wrote:
>> On Thu, Aug 30, 2012 at 12:51:01AM +0530, Raghavendra K T wrote:
>>> The idea of starting from the next vcpu (source of yield_to + 1) seems
>>> to work well for an overcommitted guest, rather than using the last
>>> boosted vcpu. We can also remove the per-VM variable with this approach.
>>>
>>> Iteration for an eligible candidate after this patch starts from vcpu
>>> source+1 and ends at source-1 (after wrapping).
>>>
>>> Thanks to Nikunj for his quick verification of the patch.
>>> Please let me know if this patch is interesting and makes sense.
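
To make the proposed order concrete, here is a minimal standalone sketch of
the iteration (not the actual KVM patch; NR_VCPUS, vcpu_is_eligible() and
pick_yield_target() are made-up names standing in for the real KVM
structures and eligibility checks):

#include <stdio.h>

#define NR_VCPUS 8

/* Stand-in for the real eligibility checks (runnable, preempted, etc.). */
static int vcpu_is_eligible(int idx)
{
	return idx == 5;	/* pretend vcpu 5 is a good yield target */
}

static int pick_yield_target(int source)
{
	int i;

	/*
	 * Visit source+1, source+2, ... with wraparound, ending at source-1,
	 * so no per-VM "last boosted" state needs to be remembered.
	 */
	for (i = 1; i < NR_VCPUS; i++) {
		int idx = (source + i) % NR_VCPUS;

		if (vcpu_is_eligible(idx))
			return idx;
	}

	return -1;	/* no eligible candidate found */
}

int main(void)
{
	printf("yield target for source 3: %d\n", pick_yield_target(3));
	return 0;
}
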
>> This last_boosted_vcpu thing caused us trouble during an attempt to
>> implement vcpu destruction. It is good to see it removed from this POV.
> I like this implementation. It should achieve pretty much the same as
> my old code, but without the downsides and without having to keep the
> same amount of global state.
I was able to test this on 3.6-rc5 (where I do not see the inconsistency;
maybe it was my mistake to go with rc1), with a 32-vcpu guest in 1x and 2x
overcommit scenarios.

Here is the result on a 16-core PLE machine (32 threads with HT), an x240:

base    = 3.6-rc5 + PLE handler improvement patch
patched = base + vcpuid usage patch
+-----------+-----------+-----------+------------+-----------+
              ebizzy (records/sec, higher is better)
+-----------+-----------+-----------+------------+-----------+
              base       stdev     patched       stdev    %improve
+-----------+-----------+-----------+------------+-----------+
  1x    11293.3750    624.4378  11242.8750    583.1757    -0.44716
  2x     3641.8750    468.9400   4088.8750    290.5470    12.27390
+-----------+-----------+-----------+------------+-----------+
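
(For reference, %improve above is consistent with the usual
(patched - base) / base * 100; e.g. for 2x:
(4088.8750 - 3641.8750) / 3641.8750 * 100 ~= 12.27, matching the last column.)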
Avi, Marcelo.. any comments on this?