On Mon, 04 Oct 2021 12:27:33 +0100,
Lukas Jünger <lukas.juenger@xxxxxxxxxxxxxxxxxx> wrote:
>
> On 04.10.21 13:02, Marc Zyngier wrote:
> > On Mon, 04 Oct 2021 11:30:06 +0100,
> > Lukas Jünger <lukas.juenger@xxxxxxxxxxxxxxxxxx> wrote:
> >> On 04.10.21 12:24, Marc Zyngier wrote:
> >>> Hi Lukas,
> >> Hi Marc,
> >>
> >> Thanks for your quick reply.
> >>
> >>> On Mon, 04 Oct 2021 11:07:47 +0100,
> >>> Lukas Jünger <lukas.juenger@xxxxxxxxxxxxxxxxxx> wrote:
> >>>> Hello,
> >>>>
> >>>> I am trying to run an emulator that uses KVM on arm64 to execute
> >>>> code. The emulator contains a userspace model of a GICv2 IRQ
> >>>> controller. The platform that I am running on (n1sdp) has a
> >>> N1-SDP? My condolences...
> >> Is there more to this?
> > How do you like the PCI patches? :D
> Ah, that's what you were alluding to. PCI+ARM seems to be tricky
> somehow. The SynQuacer dev box as well as the ROCKPro64 I was using
> before also had PCI issues.

I have no idea what you are running with, but neither of these two
machines has any issue with PCI here. What is your kernel version?

[...]

> >> The port to N1-SDP is
> >> giving me trouble. I understand why it is tainting the kernel, I was
> >> just wondering if I could somehow tell KVM to set this up correctly,
> >> e.g. by setting the ICC_SRE_ELx.
> > KVM doesn't *set* ICC_SRE_EL1.SRE. It is RAO/WI on this machine, which
> > is perfectly legal. However, KVM traps this access and emulates it
> > (access_gic_sre() returns vcpu->arch.vgic_cpu.vgic_v3.vgic_sre).
> >
> > So if you see ICC_SRE_EL1.SRE==1 in your guest, that's because
> > vgic_sre is set to something that is non-zero.
> > The only way for this
> > bit to be set is in vgic_v3_enable(), which has the following code:
> >
> > <quote>
> > 	if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) {
> > 		vgic_v3->vgic_sre = (ICC_SRE_EL1_DIB |
> > 				     ICC_SRE_EL1_DFB |
> > 				     ICC_SRE_EL1_SRE);
> > 		vcpu->arch.vgic_cpu.pendbaser = INITIAL_PENDBASER_VALUE;
> > 	} else {
> > 		vgic_v3->vgic_sre = 0;
> > 	}
> > </quote>
> >
> > So short of a terrible bug that would dump random values in this
> > structure, you are setting vgic_model to a GICv3 implementation. This
> > can only be done from userspace if you are creating a GICv3 irqchip.
> >
> > Without seeing what your userspace does, I'm afraid I can't help you
> > much further. Can you please provide some traces of what it does? A
> > strace dump would certainly help.
>
> Could it be that this is because I use KVM_ARM_PREFERRED_TARGET and
> init the vcpu from this config?

No, that's completely irrelevant.

> I have attached an strace log file.

I can't see anything useful there:

openat(AT_FDCWD, "/dev/kvm", O_RDWR)    = 7

// create VM
ioctl(7, _IOC(0, 0xae, 0x1, 0), 0)      = 8

// create vcpu
ioctl(8, _IOC(0, 0xae, 0x41, 0), 0)     = 9

// two memslots
ioctl(8, _IOC(_IOC_WRITE, 0xae, 0x46, 0x20), {slot=0, flags=0, guest_phys_addr=0, memory_size=268435456, userspace_addr=0xffff87a00000}) = 0
ioctl(8, _IOC(_IOC_WRITE, 0xae, 0x46, 0x20), {slot=1, flags=0, guest_phys_addr=0xc0000000, memory_size=268435456, userspace_addr=0xffff44e00000}) = 0

// get kvm_run size, map it
ioctl(7, _IOC(0, 0xae, 0x4, 0), 0)      = 8192
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_SHARED, 9, 0) = 0xffff987ad000

// get KVM_ARM_PREFERRED_TARGET
ioctl(8, _IOC(_IOC_READ, 0xae, 0xaf, 0x20), 0xffffe8018b98) = 0

// vcpu init
ioctl(9, _IOC(_IOC_WRITE, 0xae, 0xae, 0x20), 0xffffe8018b98) = 0

// KVM_CAP_SYNC_MMU?
ioctl(8, _IOC(0, 0xae, 0x3, 0), 0x10)   = 1

// KVM_CAP_GUEST_DEBUG_HW_BPS?
ioctl(8, _IOC(0, 0xae, 0x3, 0), 0x77)   = 6

// KVM_SET_GUEST_DEBUG
ioctl(9, _IOC(_IOC_WRITE, 0xae, 0x9b, 0x208), 0xffff4447fbf8) = 0

// RUN
ioctl(9, _IOC(0, 0xae, 0x80, 0), 0)     = -1 EINTR (Interrupted system call)

So either you run something that is pretty old and buggy (and I'd like
to know what), or you have uncovered a bug and I would need you to
trace when vgic_sre gets set.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm