On 15-06-2020 4:49, Manuel Wolfshant wrote: [...]
The testing machines are IBM blades, models H21 and H21XM. Initial tests were performed on the H21 with 16 GB RAM; during the last 6-7 weeks I've been using the H21XM with 64 GB. In all cases the guests were fully updated CentOS 7 -- initially 7.6 (the most recent release at the time of the initial tests), and 7.8 for the tests performed during the last 2 months. As host I initially used CentOS 6 with the latest kernel available in the centos virt repo at the time of the tests, and later CentOS 7, also with the latest kernel. As Xen versions I tested 4.8 and 4.12 (xl info included below). The storage for the last tests is a Crucial MX500, but results were similar when using a traditional HDD.

My problem, in short, is that the guests are extremely slow. For instance, in the most recent tests, a "yum install kernel" takes ca. 1 minute on the host and 12-15 (!!!) minutes in the guest, with all the time being spent in dracut regenerating the initramfs images. I've done rough tests with the storage (via dd if=/dev/zero of=a_test_file bs=10M count=1000) and the speed was comparable between the hosts and the guests. The version of the kernel in use inside the guest also did not seem to make any difference. OTOH, sysbench ( https://github.com/akopytov/sysbench/ ) as well as the p7zip benchmark report for the guests a speed that is between 10% and 50% of the host's. Quite obviously, changing the elevator had no influence either.

Here is the info which I think should be relevant for the software versions in use. Feel free to ask for any additional info.
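[Editor's aside, not from the original post: a plain dd from /dev/zero largely measures the page cache, so host and guest can look similar even when real I/O differs. A variant with conv=fdatasync forces the data to disk before dd reports its rate; sizes here are scaled down from the bs=10M count=1000 used above, purely for illustration.]

```shell
# Scaled-down write test: conv=fdatasync makes dd flush to disk before
# reporting, so the page cache cannot hide slow writeback in the guest.
dd if=/dev/zero of=a_test_file bs=1M count=64 conv=fdatasync
rm -f a_test_file
```

Running this in both dom0 and the guest and comparing the reported MB/s should show whether the slowness is really in the storage path or elsewhere (e.g. CPU, as the sysbench numbers suggest).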
[root@t7 ~]# xl info
host                 : t7
release              : 4.9.215-36.el7.x86_64
version              : #1 SMP Mon Mar 2 11:42:52 UTC 2020
machine              : x86_64
nr_cpus              : 8
max_cpu_id           : 7
nr_nodes             : 1
cores_per_socket     : 4
threads_per_core     : 1
cpu_mhz              : 3000.122
hw_caps              : bfebfbff:000ce3bd:20100800:00000001:00000000:00000000:00000000:00000000
virt_caps            : pv hvm
total_memory         : 57343
free_memory          : 53620
sharing_freed_memory : 0
sharing_used_memory  : 0
outstanding_claims   : 0
free_cpus            : 0
xen_major            : 4
xen_minor            : 12
xen_extra            : .2.39.g3536f8dc
xen_version          : 4.12.2.39.g3536f8dc
xen_caps             : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler        : credit2
xen_pagesize         : 4096
platform_params      : virt_start=0xffff800000000000
xen_changeset        :
xen_commandline      : placeholder dom0_mem=1024M,max:1024M cpuinfo com1=115200,8n1 console=com1,tty loglvl=all guest_loglvl=all ucode=-1
cc_compiler          : gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39)
cc_compile_by        : mockbuild
cc_compile_domain    : centos.org
cc_compile_date      : Tue Apr 14 14:22:04 UTC 2020
build_id             : 24148a191438467f26a9e16089205544a428f661
xend_config_format   : 4

[root@t5 ~]# xl info
host                 : t5
release              : 4.9.215-36.el6.x86_64
version              : #1 SMP Mon Mar 2 10:30:40 UTC 2020
machine              : x86_64
nr_cpus              : 8
max_cpu_id           : 7
nr_nodes             : 1
cores_per_socket     : 4
threads_per_core     : 1
cpu_mhz              : 2000
hw_caps              : b7ebfbff:0004e33d:20100800:00000001:00000000:00000000:00000000:00000000
virt_caps            : hvm
total_memory         : 12287
free_memory          : 6955
sharing_freed_memory : 0
sharing_used_memory  : 0
outstanding_claims   : 0
free_cpus            : 0
xen_major            : 4
xen_minor            : 8
xen_extra            : .5.86.g8db85532
xen_version          : 4.8.5.86.g8db85532
xen_caps             : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler        : credit
xen_pagesize         : 4096
platform_params      : virt_start=0xffff800000000000
xen_changeset        :
xen_commandline      : dom0_mem=1024M,max:1024M cpuinfo com1=115200,8n1 console=com1,tty loglvl=all guest_loglvl=all
cc_compiler          : gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-23)
cc_compile_by        : mockbuild
cc_compile_domain    : centos.org
cc_compile_date      : Thu Dec 12 14:34:48 UTC 2019
build_id             : da34ae5b90c82137dcbc466cd66322381bc6fd21
xend_config_format   : 4

_Note:_ with all the other kernels and Xen versions that were published for C6 during the last year, the performance was the same, i.e. slow.

The test VM is exactly the same, copied among servers:

[root@t7 ~]# cat /etc/xen/test7_1
builder = "hvm"
xen_platform_pci=1
name = "Test7"
memory = 2048
maxmem = 4096
vcpus = 2
vif = [ "mac=00:14:5e:d9:df:50,bridge=xenbr0,model=e1000" ]
disk = [ "file:/var/lib/xen/images/test7_1,xvda,w" ]
sdl = 0
vnc = 1
bootloader = 'xenpvnetboot'
#bootloader_args = ['--location', 'http://internal.x.y/mrepo/centos7-x86_64/disc1/']
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'
#boot="nd"
boot="d"
pae=1
acpi=1
apic=1
tsc_mode=0

_Notes:_
- the lines past "boot" in the config do not make any difference either; they were added during last week's tests.
- I've tested with 1, 2, 4 and 8 VCPUs. There is no difference for the real-life apps.
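[Editor's aside, not from the thread: the config above pairs an emulated e1000 NIC with the legacy "file:" loopback disk backend, i.e. fully emulated I/O. Since xen_platform_pci=1 is set, a CentOS 7 HVM guest can attach PV frontends instead; a hedged variant worth comparing, using xl's qdisk syntax (same MAC, bridge, and image path as above):]

```
vif  = [ "mac=00:14:5e:d9:df:50,bridge=xenbr0" ]
disk = [ "format=raw, vdev=xvda, access=rw, target=/var/lib/xen/images/test7_1" ]
```

Dropping model= lets the guest's xen-netfront driver take over once its PV drivers load; inside the guest, `lsmod | grep xen` would confirm whether the blkfront/netfront modules are actually in use.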
Wolfy, to begin with, can you try the kernel-xen package from https://xen.crc.id.au/support/guides/install/ with the CPU vulnerability mitigations turned off for both dom0 and domU?
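[Editor's aside: a hedged sketch of one way to turn the mitigations off; the exact option names depend on the kernel and Xen versions in use and are illustrative, not taken from the linked guide.]

```
# dom0, in /etc/default/grub (regenerate grub.cfg afterwards):
GRUB_CMDLINE_XEN_DEFAULT="... spec-ctrl=no xpti=false"
GRUB_CMDLINE_LINUX="... mitigations=off"

# domU kernel command line: newer kernels accept the umbrella switch
#   mitigations=off
# while older ones need the individual flags, e.g.
#   pti=off spectre_v2=off spec_store_bypass_disable=off
```

On CentOS, `grub2-mkconfig -o /boot/grub2/grub.cfg` followed by a reboot applies the dom0 change.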
--
Adi Pircalabu

_______________________________________________
CentOS-virt mailing list
CentOS-virt@xxxxxxxxxx
https://lists.centos.org/mailman/listinfo/centos-virt