On 05/30/2011 02:49 PM, Ingo Molnar wrote:
* Avi Kivity <avi@xxxxxxxxxx> wrote:

> On 05/30/2011 02:26 PM, Takuya Yoshikawa wrote:
> >> qemu also allows having more VCPUs than cores.
> >
> > I have to check again, then :) Thank you!
> > I will try both with many VCPUs.
>
> Note, with cpu overcommit the results are going to be bad.

And that is good: if pushed hard enough it will trigger exciting (or obscure) bugs in the guest kernel much faster than if there's no overcommit, so it's rather useful for testing. (We made that surprising experience with -rt.)

Also, such simulation would be very obviously useful if you get bugreports about 1024 or 4096 CPUs, like i do sometimes! :-) [*]
I'll be surprised if 1024 cpus actually boot on a reasonable machine. Without PLE support, any busy wait (like the one in smp_call_function_single()) turns into a delay the length of a scheduler time slice (or of CFS's unfairness measure, I forget what it's called) - 3 or 4 orders of magnitude larger than the wait itself. Even with PLE it's significantly slower, plus there's a 1-2 order of magnitude slowdown from the overcommit itself.
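For illustration, the pattern in question looks something like this (a sketch, not the actual smp_call_function_single() code; x86 assumed):

        /* The sender spins until the target CPU notices the IPI and sets
         * the flag.  On bare metal this takes microseconds.  If the target
         * VCPU is descheduled on the host, the loop burns a full host time
         * slice, unless the repeated "pause" triggers a pause-loop exit
         * and the host runs the target VCPU instead. */
        static void wait_for_completion_flag(volatile int *done)
        {
                while (!*done)
                        asm volatile("pause" ::: "memory");  /* aka rep; nop */
        }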
[*] Would be nice if tools/kvm/ had a debug option to simulate 'lots of RAM' as well somehow - perhaps by not pre-initializing it and somehow catching all-zeroes pages and keeping them all zeroes and shared? It would obviously OOM after some time but would allow me to at least boot a fair deal of userspace. The motivation is that i have recently received a 1 TB RAM bugreport. 1 TB of RAM mapped with 2MB mappings should still be able to boot to shell on a 32 GB RAM testbox of mine, and survive there for some time. We could even do some kernel modifications to make this kind of simulation easier.
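The host side of this largely exists already - anonymous memory that is only ever read stays backed by the kernel's shared zero page. A hypothetical sketch of what guest RAM allocation in tools/kvm could look like (alloc_guest_ram is a made-up name, not actual tools/kvm code):

        #include <stdint.h>
        #include <sys/mman.h>

        /* Back guest RAM with an anonymous MAP_NORESERVE mapping.  Pages
         * the guest never writes stay mapped to the kernel's shared zero
         * page, so a 1 TB guest only costs host RAM for the pages it
         * actually dirties. */
        static void *alloc_guest_ram(uint64_t size)
        {
                return mmap(NULL, size, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE,
                            -1, 0);
        }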
This should work fine - I just booted a 128GB guest on a 4GB host. Just set /proc/sys/vm/overcommit_memory to 1 (and disable transparent hugepages, since THP backs even untouched guest memory with real 2MB pages instead of the shared zero page).
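You can check the mechanism from plain userspace; a quick sketch (assumes a 64-bit host with vm.overcommit_memory=1 and transparent hugepages disabled, as above):

        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>

        int main(void)
        {
                size_t size = 128ULL << 30;     /* a "128GB guest" */
                char *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (p == MAP_FAILED) {
                        perror("mmap"); /* likely fails without overcommit_memory=1 */
                        return 1;
                }
                memset(p, 1, 1 << 20);          /* dirty only 1MB of it */

                /* VmSize reports ~128GB while VmRSS stays tiny. */
                char line[128];
                FILE *f = fopen("/proc/self/status", "r");
                if (!f)
                        return 1;
                while (fgets(line, sizeof(line), f))
                        if (!strncmp(line, "Vm", 2))
                                fputs(line, stdout);
                fclose(f);
                return 0;
        }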
-- 
error compiling committee.c: too many arguments to function