Hi Zenghui,

On 8/27/19 9:49 AM, Zenghui Yu wrote:
> Hi Eric,
>
> Thanks for this patch!
>
> On 2019/8/24 1:52, Auger Eric wrote:
>> Hi Zenghui, Marc,
>>
>> On 8/23/19 7:33 PM, Eric Auger wrote:
>>> At the moment we use 2 IO devices per GICv3 redistributor: one
>                                                              ^^^
>>> one for the RD_base frame and one for the SGI_base frame.
>   ^^^
>>>
>>> Instead we can use a single IO device per redistributor (the 2
>>> frames are contiguous). This saves slots on the KVM_MMIO_BUS
>>> which is currently limited to NR_IOBUS_DEVS (1000).
>>>
>>> This change allows to instantiate up to 512 redistributors and may
>>> speed the guest boot with a large number of VCPUs.
>>>
>>> Signed-off-by: Eric Auger <eric.auger@xxxxxxxxxx>
>>
>> I tested this patch with below kernel and QEMU branches:
>> kernel: https://github.com/eauger/linux/tree/256fix-v1
>> (Marc's patch + this patch)
>> https://github.com/eauger/qemu/tree/v4.1.0-256fix-rfc1-rc0
>> (header update + kvm_arm_gic_set_irq modification)
>
> I also tested these three changes on HiSi D05 (with 64 pcpus), and yes,
> I can get a 512U guest to boot properly now.

Many thanks for the testing (and the bug report). I will formally post
the QEMU changes asap.

Thanks

Eric

> Tested-by: Zenghui Yu <yuzenghui@xxxxxxxxxx>
>
>> On a machine with 224 pcpus, I was able to boot a 512 vcpu guest.
>>
>> As expected, qemu outputs warnings:
>>
>> qemu-system-aarch64: warning: Number of SMP cpus requested (512) exceeds
>> the recommended cpus supported by KVM (224)
>> qemu-system-aarch64: warning: Number of hotpluggable cpus requested
>> (512) exceeds the recommended cpus supported by KVM (224)
>>
>> on the guest: getconf _NPROCESSORS_ONLN returns 512
>>
>> Then I have no clue about what can be expected of such overcommit config
>> and I have not further exercised the guest at the moment. But at least
>> it seems to boot properly. I also tested without overcommit and it seems
>> to behave as before (boot, migration).
>>
>> I still need to look at the migration of > 256vcpu guest at qemu level.
>
> Let us know if further tests are needed.
>
>
> Thanks,
> zenghui
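For readers following the thread, below is a minimal C sketch of the idea in
the quoted commit message, under stated assumptions: the struct and function
names (redist_iodev, handle_rd_frame, handle_sgi_frame, redist_mmio_read,
register_redist_iodev) are illustrative, not the kernel's actual symbols, the
handlers are stubs, and only the read path is shown. The point it captures is
that a single KVM IO device is registered to cover both contiguous 64K frames,
dispatching on the offset, so each redistributor consumes one KVM_MMIO_BUS
slot instead of two.

/*
 * Sketch only, not the actual patch: one KVM IO device spanning both
 * contiguous 64K redistributor frames (RD_base followed by SGI_base).
 * All names below except the KVM iodev/bus API are hypothetical.
 */
#include <linux/kvm_host.h>
#include <linux/sizes.h>
#include <linux/string.h>
#include <kvm/iodev.h>

struct redist_iodev {
	struct kvm_io_device dev;
	gpa_t base;			/* GPA of the RD_base frame */
};

/* Stub per-frame handlers; the real code walks per-register
 * descriptor tables instead. */
static int handle_rd_frame(struct kvm_vcpu *vcpu, gpa_t off, int len, void *val)
{
	memset(val, 0, len);		/* stub: would emulate RD_base registers */
	return 0;
}

static int handle_sgi_frame(struct kvm_vcpu *vcpu, gpa_t off, int len, void *val)
{
	memset(val, 0, len);		/* stub: would emulate SGI_base registers */
	return 0;
}

static int redist_mmio_read(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
			    gpa_t addr, int len, void *val)
{
	struct redist_iodev *rd = container_of(dev, struct redist_iodev, dev);
	gpa_t offset = addr - rd->base;

	/* First 64K is the RD_base frame, the second 64K the SGI_base frame. */
	if (offset < SZ_64K)
		return handle_rd_frame(vcpu, offset, len, val);
	return handle_sgi_frame(vcpu, offset - SZ_64K, len, val);
}

static const struct kvm_io_device_ops redist_ops = {
	.read = redist_mmio_read,
	/* .write would dispatch the same way on the offset */
};

/* Caller is assumed to hold kvm->slots_lock, as kvm_io_bus_register_dev()
 * requires. */
static int register_redist_iodev(struct kvm *kvm, struct redist_iodev *rd)
{
	kvm_iodevice_init(&rd->dev, &redist_ops);

	/*
	 * One registration covering 2 * 64K: a single KVM_MMIO_BUS slot
	 * per redistributor instead of two, which is what lets 512
	 * redistributors fit under the NR_IOBUS_DEVS (1000) limit.
	 */
	return kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS, rd->base,
				       2 * SZ_64K, &rd->dev);
}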