Jes Sorensen wrote:
> Zhang, Xiantao wrote:
>> Hi, Jes
>> Currently, we only support up to 8 vcpus, and to be safe, I set
>> the limit to 4 (defined in include/asm/kvm_host.h). Maybe you can
>> increase the macro KVM_MAX_VCPUS to have a try. In order to support
>> >8 vcpus, we can decrease the size of the vtlb and vhpt, also defined
>> in include/asm/kvm_host.h. Thanks! :)
>
> Hi Xiantao,
> I already increased this value, or it wouldn't have accepted my >4
> number :-)
>
> What is the reason for the max 8 limitation?

In the current implementation, we allocate a 16M data area (mapped by one
pair of TRs in guest mode) for each VM to hold the p2m table, the vcpus'
vtlb, vhpt, vpd, and so on. The 8M p2m table can only hold 1M p2m entries
(8 bytes per entry), so the maximum memory is 64G per guest. Since each
vcpu has to be configured with its own vhpt, vtlb, and vpd, only a limited
number of vcpus can be supported. To break the limit, we have two options:
increase the data area to 64M or larger, or decrease the size of the vtlb
and vhpt. Anyway, the best way is to determine the size of the data area
allocated to a guest dynamically, according to the number of vcpus and the
size of guest memory (a rough sketch of that calculation follows below).
BTW, I have tried booting guests with 16 vcpus before and didn't run into
any stability issues.

> At some point I will
> definitely need to do something about that ..... 256 will be my
> minimum target :-)

That should be an easy thing, because I don't see any hurdles that would
block us from reaching this goal. :)
Xiantao
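
For illustration, here is a minimal C sketch of the dynamic sizing idea
described above. The constants (per-vcpu vhpt/vtlb/vpd sizes, 64K guest
page size) are assumptions chosen so the numbers line up with the
8M-p2m-table / 64G figures in the mail; they are not the actual values
from include/asm/kvm_host.h.

/*
 * Hypothetical sketch only, not the layout from include/asm/kvm_host.h:
 * size the per-VM data area from the vcpu count and the guest memory
 * size instead of assuming a fixed 16M block.
 */
#include <stdio.h>

#define P2M_ENTRY_SIZE     8ULL           /* 8 bytes per p2m entry (from the mail) */
#define GUEST_PAGE_SIZE    (64ULL << 10)  /* assumed 64K pages: 1M entries -> 64G  */
#define PER_VCPU_VHPT_SIZE (1ULL << 20)   /* assumed per-vcpu vhpt size            */
#define PER_VCPU_VTLB_SIZE (1ULL << 20)   /* assumed per-vcpu vtlb size            */
#define PER_VCPU_VPD_SIZE  (64ULL << 10)  /* assumed per-vcpu vpd size             */

static unsigned long long vm_data_area_size(unsigned int nr_vcpus,
                                            unsigned long long guest_mem_bytes)
{
        /* p2m table: one 8-byte entry per guest page */
        unsigned long long p2m = (guest_mem_bytes / GUEST_PAGE_SIZE) *
                                 P2M_ENTRY_SIZE;

        /* each vcpu carries its own vhpt, vtlb and vpd */
        unsigned long long per_vcpu = PER_VCPU_VHPT_SIZE + PER_VCPU_VTLB_SIZE +
                                      PER_VCPU_VPD_SIZE;

        return p2m + (unsigned long long)nr_vcpus * per_vcpu;
}

int main(void)
{
        /* e.g. 16 vcpus and 64G of guest memory */
        printf("data area: %llu MB\n",
               vm_data_area_size(16, 64ULL << 30) >> 20);
        return 0;
}

With these assumed sizes, 16 vcpus and 64G of guest memory come out to
roughly 41M, which shows why a fixed 16M area caps both the vcpu count and
the guest memory, and why sizing it per guest removes that coupling.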