On 16.04.2012, at 14:13, Paul Mackerras wrote:

> On Mon, Apr 16, 2012 at 11:45:44AM +0200, Alexander Graf wrote:
>
>> While trying to trace down why some BookE systems were only able to
>> do as many guest vcpus as there were host cpus available, we
>> stumbled over this one. Is there any limitation on book3s_hv that
>> would limit the available vcpus to configured host vcpus? Or could
>> we just make this a static define like on x86?
>
> There is no limitation. I did it like that so that we would be able
> to have a large number of vcpus on kernels configured for large
> systems, while not using up large amounts of memory if the kernel is
> configured for a small system. The memory consumption is 8 bytes per
> vcore in each struct kvm if book3s_hv is configured.

8 * 256 = 2048. So for a limit similar to the one on x86 we'd waste
2 KB per VM. That doesn't sound too horrible to me.

> We can make it a fixed constant if you like, but then the question is
> how do you choose that constant so as to allow us to have many vcpus
> on large systems but still not waste too much memory on small systems.
> Or it could be max(N, NR_CPUS) for a suitable N (e.g. 16 or 32).

Hrm. Usually we have two kinds of machines:

  1) Small systems with a very low NR_CPUS. These should still be able
     to do at least some overcommit, so I'd go with a static number
     here, like 16 or 64.

  2) Big systems with a very high NR_CPUS. Going beyond NR_CPUS doesn't
     make much sense here, so I'd use NR_CPUS as the limit.

So how about something like

#if NR_CPUS > 64
#define KVM_MAX_VCPUS NR_CPUS
#else
#define KVM_MAX_VCPUS 64
#endif

That way everyone should be happy and we have a reasonable limit.

Alex
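
For reference, a rough sketch of how the proposed define could sit next to
the per-VM bookkeeping; the struct layout, the KVM_MAX_VCORES alias and the
CONFIG symbol below are assumptions for illustration, not the exact upstream
code, but they show where the "8 bytes per vcore in each struct kvm" figure
comes from:

/*
 * Sketch only (assumed layout, not the real arch/powerpc header):
 * size the per-VM vcore pointer array by the proposed KVM_MAX_VCPUS.
 */
#if NR_CPUS > 64
#define KVM_MAX_VCPUS NR_CPUS
#else
#define KVM_MAX_VCPUS 64
#endif

/* Assumed alias for this sketch: one possible vcore per vcpu slot. */
#define KVM_MAX_VCORES KVM_MAX_VCPUS

struct kvm_arch {
	/* ... other per-VM arch state ... */
#ifdef CONFIG_KVM_BOOK3S_64_HV
	/* one pointer per possible vcore: 8 bytes each on 64-bit */
	struct kvmppc_vcore *vcores[KVM_MAX_VCORES];
#endif
};

/*
 * With a 256-vcpu limit that would be 8 * 256 = 2048 bytes (2 KB) of
 * static overhead per VM when book3s_hv is configured; with the
 * NR_CPUS-or-64 scheme above, small systems pay only 8 * 64 = 512 bytes.
 */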