On 12/03/2010 12:20 PM, Chris Wright wrote:
* Anthony Liguori (anthony@xxxxxxxxxxxxx) wrote:
On 12/03/2010 11:58 AM, Chris Wright wrote:
* Srivatsa Vaddagiri (vatsa@xxxxxxxxxxxxxxxxxx) wrote:
On Fri, Dec 03, 2010 at 09:29:06AM -0800, Chris Wright wrote:
That's what Marcelo's suggestion does w/out a fill thread.
There's one complication though even with that: how do we compute the
real utilization of the VM (given that it will appear to be burning 100% of its cycles)?
We need to have the scheduler discount the cycles burnt post halt-exit, so more
stuff is needed than those simple 3-4 lines!
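The accounting problem above can be sketched in a few lines of Python (a toy illustration, not KVM or scheduler code; the names run_ns and halt_poll_ns are hypothetical): if a vcpu spins after a halt exit, raw CPU time overstates the work it actually did, so the polled cycles have to be discounted.

```python
# Toy sketch of the accounting problem: cycles burnt polling after a
# halt exit should not count toward the VM's "real" utilization.
# run_ns / halt_poll_ns are illustrative names, not real KVM stats.

def effective_utilization(run_ns: int, halt_poll_ns: int) -> float:
    """Fraction of host CPU time spent on useful guest work."""
    if run_ns == 0:
        return 0.0
    return (run_ns - halt_poll_ns) / run_ns

# A vcpu that looked 100% busy but spent half its time polling:
print(effective_utilization(1_000_000, 500_000))  # -> 0.5
```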
Heh, was just about to say the same thing ;)
My first reaction is that it's not terribly important to account the
non-idle time in the guest because of the use-case for this model.
Depends on the chargeback model. This would put guest vcpu runtime vs
host running guest vcpu time really out of skew. ('course w/out steal
and that time it's already out of skew). But I think most models are
more uptime based rather than actual runtime now.
Right. I'm not familiar with any models that are actually based on
CPU-consumption based accounting. In general, the feedback I've
received is that predictable accounting is pretty critical so I don't
anticipate something as volatile as CPU-consumption ever being something
that's explicitly charged for in a granular fashion.
Eventually, it might be nice to have idle time accounting but I
don't see it as a critical feature here.
Non-idle time simply isn't as meaningful here as it normally would
be. If you have 10 VMs in a normal environment and saw that you had
only 50% CPU utilization, you might be inclined to add more VMs.
Who is "you"? cloud user, or cloud service provider's scheduler?
On the user side, 50% cpu utilization wouldn't trigger me to add new
VMs. On the host side, 50% cpu utilization would have to be measured
solely in terms of guest vcpu count.
But if you're offering deterministic execution, it doesn't matter if
you only have "50%" utilization. If you add another VM, the existing
guests will take exactly the same hit as if they had been using 100%
utilization.
Sorry, didn't follow here?
The question is, why would something care about host CPU utilization?
The answer I can think of is, something wants to measure host CPU
utilization to identify an underutilized node. Once the underutilized
node is identified, more work can be given to it.
Adding more work to an underutilized node doesn't change the amount of
work that can be done. More concretely, one PCPU, four independent
VCPUs. They are consuming 25%, 25%, 25%, and 12% respectively. My
management software says, ah hah, I can stick a fifth VCPU on this box
that's only using 5%. The other VCPUs are unaffected.
However, in a no-yield-on-hlt model, if I have four VCPUs, they each get
25%, 25%, 25%, 25% on the host. Three of the VCPUs are running 100% in
the guest and one is running 50%.
If I add a fifth VCPU, even if it's only using 5%, each VCPU drops to
20%. That means the three VCPUs that were consuming 100% now see a 20%
drop in their performance even though you've added an idle guest.
Basically, the traditional view of density simply doesn't apply in this
model.
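The share arithmetic above can be written out explicitly (a hypothetical Python sketch, not KVM code; fair_share is an illustrative helper): with no yield on hlt, every vcpu is always runnable, so each one's host share is just 1/N, and adding an idle fifth vcpu shrinks everyone's slice.

```python
# Fair-share arithmetic from the example above (no-yield-on-hlt model):
# every vcpu is always runnable, so each gets an equal slice of the PCPU.

def fair_share(n_vcpus: int) -> float:
    """Slice of one PCPU each always-runnable vcpu receives."""
    return 1.0 / n_vcpus

before = fair_share(4)   # four vcpus: 25% each
after = fair_share(5)    # add a fifth (even a mostly idle one): 20% each

# Relative performance drop for a vcpu that was consuming its full share:
rel_drop = (before - after) / before
print(f"{before:.0%} -> {after:.0%}, a {rel_drop:.0%} relative drop")
# -> 25% -> 20%, a 20% relative drop
```

Contrast with the yield-on-hlt case two paragraphs up: there a vcpu's host share tracks its own demand, so the fifth vcpu's 5% comes out of the idle headroom and the others are unaffected.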
Regards,
Anthony Liguori
thanks,
-chris