On Thu, 2022-04-28 at 20:16 +0300, Maxim Levitsky wrote:
> On Thu, 2022-04-28 at 15:32 +0000, Sean Christopherson wrote:
> > On Tue, Apr 26, 2022, Maxim Levitsky wrote:
> > > I can reproduce this in a VM, by running and CTRL+C'ing my ipi_stress test,
> >
> > Can you post your ipi_stress test? I'm curious to see if I can repro, and also
> > very curious as to what might be unique about your test. I haven't been able to
> > repro the syzbot test, nor have I been able to repro by killing VMs/tests.
>
> This is the patch series (mostly an attempt to turn svm into a mini library),
> but I don't know if this is worth it.
> It was done so that ipi_stress could use nesting itself to wait for an IPI
> from within a nested guest. I usually don't use it.
>
> This is more or less how I was running it lately (I have a wrapper script):
>
> ./x86/run x86/ipi_stress.flat \
>     -global kvm-pit.lost_tick_policy=discard \
>     -machine kernel-irqchip=on -name debug-threads=on \
>     \
>     -smp 8 \
>     -cpu host,x2apic=off,svm=off,-hypervisor \
>     -overcommit cpu-pm=on \
>     -m 4g -append "0 10000"

I forgot to mention: this should be run in a loop.

Best regards,
	Maxim Levitsky

> It's not fully finished for upstream; I will get to it soon.
>
> 'cpu-pm=on' won't work for you, as it fails due to a non-atomic memslot
> update bug for which I have a small hack in qemu, and it is on my
> backlog to fix it correctly.
>
> Most likely cpu_pm=off will also reproduce it.
>
> The test was run in a guest; natively this doesn't seem to reproduce.
> tdp mmu was used for both L0 and L1.
>
> Best regards,
> 	Maxim Levitsky
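[The "run it in a loop" step could be sketched roughly as below. The kvm-unit-tests invocation and its flags are taken from the mail above; the `RUNNER` variable, the executable-check guard, and the iteration counting are illustrative additions, not part of the original wrapper script.]

```shell
#!/bin/sh
# Sketch: rerun ipi_stress until a run fails, since the hang/bug only
# reproduces intermittently. RUNNER defaults to the kvm-unit-tests
# runner used in the mail; override it if your tree lives elsewhere.
RUNNER="${RUNNER:-./x86/run}"

runs=0
# Loop while the runner exists and each invocation succeeds; stop on
# the first failing run (or immediately if the runner is missing).
while [ -x "$RUNNER" ] && "$RUNNER" x86/ipi_stress.flat \
        -global kvm-pit.lost_tick_policy=discard \
        -machine kernel-irqchip=on -name debug-threads=on \
        -smp 8 \
        -cpu host,x2apic=off,svm=off,-hypervisor \
        -overcommit cpu-pm=on \
        -m 4g -append "0 10000"
do
    runs=$((runs + 1))
    echo "iteration $runs passed"
done

echo "stopped after $runs successful runs"
```

Run inside the L1 guest (the mail notes the issue did not reproduce natively); leave it going until the loop exits, which marks the failing iteration.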