Re: ACPI timeouts when enabling KASAN

On Wed, Apr 17, 2024 at 02:55:44PM +0200, Igor Mammedov wrote:
> On Tue, 16 Apr 2024 14:07:08 +0200
> Andrea Righi <andrea.righi@xxxxxxxxxxxxx> wrote:
> 
> > On Tue, Apr 16, 2024 at 01:36:40PM +0200, Ricardo Ribalda wrote:
> > > Hi Igor
> > > 
> > > On Tue, 16 Apr 2024 at 13:33, Igor Mammedov <imammedo@xxxxxxxxxx> wrote:  
> > > >
> > > > On Mon, 15 Apr 2024 16:18:22 +0200
> > > > Ricardo Ribalda <ribalda@xxxxxxxxxx> wrote:
> > > >  
> > > > > Hi Igor, Hi Rafael
> > > > >
> > > > > Yes, it seems that it is just KASAN being extremely slow.
> > > > > From a completely newbie here... Is there a reason why qemu generates
> > > > > the table vs returning a precomputed one?  
> > > >
> > > > it can be a pre-generated Package
> > > > like we do with ARM (example: acpi_dsdt_add_pci_route_table)
> > > >  
> > > > > This is the config file:
> > > > > https://gitlab.freedesktop.org/linux-media/media-ci/-/blob/main/testdata/virtme/virtme.config?ref_type=heads
> > > > >
> > > > > And this is the qemu cli:
> > > > >
> > > > > /usr/bin/qemu-system-x86_64 -m 4G -fsdev
> > > > > local,id=virtfs3,path=/,security_model=none,readonly=on,multidevs=remap
> > > > > -device virtio-9p-pci,fsdev=virtfs3,mount_tag=/dev/root -device
> > > > > i6300esb,id=watchdog0 -parallel none -net none -smp 2 -vga none
> > > > > -display none -serial chardev:console -chardev
> > > > > file,id=console,path=/proc/self/fd/2 -chardev
> > > > > stdio,id=stdin,signal=on,mux=off -device virtio-serial-pci -device
> > > > > virtserialport,name=virtme.stdin,chardev=stdin -chardev
> > > > > file,id=stdout,path=/proc/self/fd/1 -device virtio-serial-pci -device
> > > > > virtserialport,name=virtme.stdout,chardev=stdout -chardev
> > > > > file,id=stderr,path=/proc/self/fd/2 -device virtio-serial-pci -device
> > > > > virtserialport,name=virtme.stderr,chardev=stderr -chardev
> > > > > file,id=dev_stdout,path=/proc/self/fd/1 -device virtio-serial-pci
> > > > > -device virtserialport,name=virtme.dev_stdout,chardev=dev_stdout
> > > > > -chardev file,id=dev_stderr,path=/proc/self/fd/2 -device
> > > > > virtio-serial-pci -device
> > > > > virtserialport,name=virtme.dev_stderr,chardev=dev_stderr -chardev
> > > > > file,id=ret,path=/tmp/virtme_retefeobj4f -device virtio-serial-pci
> > > > > -device virtserialport,name=virtme.ret,chardev=ret -no-reboot -kernel
> > > > > ./arch/x86/boot/bzImage -append 'nr_open=1048576
> > > > > virtme_link_mods=/builds/linux-media/media-staging/.virtme_mods/lib/modules/0.0.0
> > > > > console=ttyS0 earlyprintk=serial,ttyS0,115200 panic=-1
> > > > > virtme.exec=`c2ggL21lZGlhLWNpL3Rlc3RkYXRhL3ZpcnRtZS90ZXN0LnNoIC9tZWRpYS1jaS90aGlyZF9wYXJ0eS92NGwtdXRpbHMgLTMy`
> > > > > virtme_root_user=1 rootfstype=9p
> > > > > rootflags=version=9p2000.L,trans=virtio,access=any raid=noautodetect
> > > > > ro init=/usr/lib/python3/dist-packages/virtme/guest/virtme-init'  
> > > >
> > > > It boots fine for me on an old Xeon E5-2630v3.
> > > > Perhaps the issue is that your host is too slow;
> > > > is there a reason not to use KVM instead of TCG?
> > > 
> > > I am using an e2 instance that does not support nested virtualization :(
> > >   
> > > >
> > > > Alternatively you can try using the q35 machine type
> > > > instead of the default 'pc'; it doesn't have _PRT in
> > > > a simple configuration like yours.
> > > > But then running things that depend on timing is not
> > > > reliable under TCG, so you might hit a timeout elsewhere.
> > > 
> > > I will give it a try... but you are correct: if this is running this
> > > slow, I expect that nothing from my CI will work reliably.
> > 
> > I'm really interested to see if q35 helps here. If that's the case, maybe
> > we should default to q35 in virtme-ng when KVM isn't available (even though
> > on my box q35 is actually slower than the default pc, so in that case we
> > may need to come up with some logic to pick the right machine type).
> 
> it might be interesting to find out why q35 is slower (it shouldn't be)

Never mind, I was comparing native KVM vs q35 under TCG, so of course it was slower...

> With the above config one can put all devices on the hostbridge as integrated
> endpoints, which will be roughly the same as the PCI topology in the 'pc' machine.
> 
> Another thing that might help is adding '-cpu max' instead of the default
> qemu64 CPU model.

Doing the proper comparison (disabling KVM), adding '-cpu max' to the
equation, and measuring the boot time of multiple virtme-ng runs gives
me the following results (average of 10 runs):

                     machine
              +----------------
              | default     q35
     ---------+----------------
cpu  |default |     13s     11s
     |max     |     15s     14s

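For reference, the table cells correspond roughly to the following qemu
options ('pc' is qemu's default x86 machine type and qemu64 its default
CPU model, as Igor mentioned above):

    machine default  ->  (no -machine option, i.e. 'pc')
    machine q35      ->  -machine q35
    cpu default      ->  (no -cpu option, i.e. qemu64)
    cpu max          ->  -cpu max

FWIW, 'qemu-system-x86_64 -machine help' lists the available machine
types and marks the default one.
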
I've tried a couple of kernel configs and I get similar results.

In the scope of virtme-ng (optimizing boot time) I'd say that it makes
sense to use '-machine q35' and the default cpu settings when KVM is
unavailable.

Ricardo, do you see similar results?

Thanks,
-Andrea
