Jes Sorensen wrote:
> Zhang, Xiantao wrote:
>>> What is the reason for the max 8 limitation?
>> In the current implementation, we allocate a 16M data area (mapped by
>> one pair of TRs in guest mode) for each VM to hold the p2m table, the
>> vcpus' vtlbs, vhpts, vpds, and so on. The 8M p2m table can only hold
>> 1M p2m entries (8 bytes per entry), so the maximum memory is 64G for
>> each guest. Since each vcpu must be configured with its own vhpt,
>> vtlb, and vpd, only a limited number of vcpus is supported. To break
>> the limit, we have two options: one is to increase the data area to
>> 64M or larger, and the other is to decrease the vtlb and vhpt sizes.
>> In any case, the best approach is to determine the size of the data
>> area allocated for a guest dynamically, according to the number of
>> vcpus and the size of guest memory. BTW, I have tried to boot guests
>> with 16 vcpus before and didn't hit any stability issues.
>
> Hi Xiantao,
>
> Just to make sure I understand this right - this 16M area is what is
> being allocated into kvm_vmm_base and mapped at KVM_VMM_BASE? From
> what I see, the p2m table is basically the second half of the 16MB
> mapping, is that correct? If so, it should be quite easy to allocate
> a larger chunk if we need more than 64GB?
>
> I am going to look into doing this then.

Hi Jes,

With the attached patch, I can boot 16 vcpus in a guest, and it should
support 384G of guest memory and 64 vcpus, though I haven't tested
that. To support more than 16 vcpus, it seems we also have to hack
userspace - could you have a look?

Thanks
Xiantao
Attachment:
cleanup.patch