>>> With KVM enabled it bails out with:
>>> qemu-system-x86_64: kvm_set_user_memory_region: KVM_SET_USER_MEMORY_REGION failed, slot=1, start=0x100000000, size=0x8ff40000000: Invalid argument
>>>
>>> all of that on a host with 32G of RAM/no swap.
>>>
>>
>> #define KVM_MEM_MAX_NR_PAGES ((1UL << 31) - 1)
>>
>> ~8 TiB (7.999999)
>
> so essentially that's our max for initial RAM
> (ignoring initial RAM slots before 4Gb)
>
> Are you aware of any attempts to make it larger?

Not really, I think for now only s390x had applicable machines where you'd
have that much memory on a single NUMA node.

> But can we use extra pc-dimm devices for additional memory (with the 8 TiB
> limit), as that will use another memslot?

I remember that was the workaround for some extremely large VMs where you'd
want a single NUMA node, or a lot of memory for a single NUMA node.

>> In QEMU, we have
>>
>> static hwaddr kvm_max_slot_size = ~0;
>>
>> And only s390x sets
>>
>> kvm_set_max_memslot_size(KVM_SLOT_MAX_BYTES);
>>
>> with
>>
>> #define KVM_SLOT_MAX_BYTES (4UL * TiB)
>
> in QEMU the default value is:
> static hwaddr kvm_max_slot_size = ~0
> it is the kernel side that's failing

... and kvm_set_max_memslot_size(KVM_SLOT_MAX_BYTES) works around the
kernel limitation for s390x in user space.

I feel like the right thing would be to look into increasing the limit in
the kernel, and bail out if the kernel doesn't support it. That would
require a new kernel for starting gigantic VMs with a single large memory
backend, but then, it's a new use case.

-- 
Thanks,

David / dhildenb
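As a sanity check on the ~8 TiB figure quoted above, the per-memslot ceiling can be worked out from KVM_MEM_MAX_NR_PAGES. This is a sketch assuming the x86 4 KiB base page size; it also checks that the failing region size from the error message (0x8ff40000000, about 9 TiB) indeed exceeds the limit:

```python
# Kernel-side per-memslot cap, counted in guest pages (from arch/x86):
# #define KVM_MEM_MAX_NR_PAGES ((1UL << 31) - 1)
KVM_MEM_MAX_NR_PAGES = (1 << 31) - 1
PAGE_SIZE = 4096            # assumed x86 base page size
TiB = 1 << 40

max_slot_bytes = KVM_MEM_MAX_NR_PAGES * PAGE_SIZE
print(max_slot_bytes / TiB)          # just under 8 TiB

# Region size from the failing KVM_SET_USER_MEMORY_REGION call above:
failing_size = 0x8ff40000000
print(failing_size / TiB)            # roughly 9 TiB
print(failing_size > max_slot_bytes) # True -> kernel returns -EINVAL
```

This is why QEMU's own default (`kvm_max_slot_size = ~0`) never trips for smaller guests: the kernel only rejects a single slot once it crosses that page-count cap.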