Jes Sorensen wrote:
> Zhang, Xiantao wrote:
>>> I have located a 512 cpu / 1 TB system in-house that I might get my
>>> hands on at some point to run tests on, but I need to work on the
>>> qemu startup times first. It took well over 20 minutes for qemu to
>>> get going before anything really happened.
>>
>> Hi, Jes
>>    How long does it take from the EFI shell to Linux's login prompt?
>> Currently, we allocate so much memory for guests that kvm has to pin
>> the corresponding pages in the p2m table rather than allocating them
>> on demand, so it may take a long time to allocate every page from the
>> kernel and fill it into the p2m table. Once we support host swapping
>> later, this issue should disappear. In any case, it shouldn't cause
>> any performance issues after bootup. To support more than 384G of
>> memory, we may allocate contiguous huge pages, such as 16M or 256M
>> pages, for the guest; if so, that would save many p2m entries because
>> one entry can cover 16M or 256M, so larger memory can be supported.
>> Xiantao
>
> Hi Xiantao,
>
> From EFI to Linux's login it wasn't bad, I didn't notice it being much
> slower than on real hardware. The big issue was from QEMU until EFI:
> it took probably 20 minutes or more before I started getting any of
> the debug information from the firmware image on the console :-( I
> think what is happening right now is that something in qemu is really
> slow.
>
> It's end of day here for me, but I hope to look at it tomorrow.

Hi, Jes
I found the reason why qemu only supports 16 vcpus. In
libkvm/kvm-common.h, MAX_CPUS is defined as 16 on the ia64 side. We
should increase this value to 64, or whatever larger value you like
(see the sketch below). Please give it a try on your mainframe. Thanks!
Xiantao
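
For reference, a minimal sketch of the change described above, assuming
the ia64 limit sits in a simple #ifdef block in libkvm/kvm-common.h; the
real header's surrounding definitions may differ, so treat this only as
an illustration of raising the vcpu cap, not the exact patch:

    /* libkvm/kvm-common.h -- illustrative sketch only */
    #if defined(__ia64__)
    /* was: #define MAX_CPUS 16  (caps ia64 guests at 16 vcpus) */
    #define MAX_CPUS 64   /* raise to at least the number of vcpus you plan to boot */
    #endif

qemu/libkvm will need to be rebuilt after the change for the new limit
to take effect.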