Hi,

some custom CPU models are reported by virConnectGetDomainCapabilities as usable='yes' on a physical machine but as usable='no' inside a VM running on the same machine. That's not completely surprising. What does surprise me is that those models are still reported by virConnectCompareCPU as supported (VIR_CPU_COMPARE_SUPERSET) in the nested environment, and VMs can be started happily with them.

For instance, virConnectGetDomainCapabilities reports

  <model usable='no'>Skylake-Client</model>

but when I try to use that model anyway, the VM starts fine with it:

  <cpu mode='custom' match='exact' check='full'>
    <model fallback='forbid'>Skylake-Client</model>
    <topology sockets='16' cores='1' threads='1'/>
    <feature policy='require' name='hypervisor'/>
    <feature policy='disable' name='invpcid'/>
    <numa>
      <cell id='0' cpus='0' memory='524288' unit='KiB'/>
    </numa>
  </cpu>

That's actually good news, but unexpected. Am I missing something?

Thanks,
Milan

_______________________________________________
libvirt-users mailing list
libvirt-users@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvirt-users
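(For anyone wanting to reproduce the comparison described above without writing against the C API: `virsh cpu-compare` is a thin wrapper around virConnectCompareCPU and takes a file containing a `<cpu>` element to compare against the host CPU. A minimal sketch, assuming the model name from the example above and an illustrative file name `cpu.xml`:)

```xml
<!-- cpu.xml: minimal CPU description to test with
     `virsh cpu-compare cpu.xml` (calls virConnectCompareCPU).
     VIR_CPU_COMPARE_SUPERSET is reported by virsh as
     "Host CPU is a superset of CPU described in cpu.xml". -->
<cpu>
  <model>Skylake-Client</model>
</cpu>
```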