Re: KVM userspace GICv2 IRQ controller on platform with GICv3

On 04.10.21 15:11, Marc Zyngier wrote:
On Mon, 04 Oct 2021 12:27:33 +0100,
Lukas Jünger <lukas.juenger@xxxxxxxxxxxxxxxxxx> wrote:
On 04.10.21 13:02, Marc Zyngier wrote:
On Mon, 04 Oct 2021 11:30:06 +0100,
Lukas Jünger <lukas.juenger@xxxxxxxxxxxxxxxxxx> wrote:
On 04.10.21 12:24, Marc Zyngier wrote:
Hi Lukas,
Hi Marc,

Thanks for your quick reply.

On Mon, 04 Oct 2021 11:07:47 +0100,
Lukas Jünger <lukas.juenger@xxxxxxxxxxxxxxxxxx> wrote:
Hello,

I am trying to run an emulator that uses KVM on arm64 to execute
code. The emulator contains a userspace model of a GICv2 IRQ
controller. The platform that I am running on (n1sdp) has a
N1-SDP? My condolences...
Is there more to this?
How do you like the PCI patches? :D
Ah, that's what you were alluding to. PCI+ARM seems to be tricky
somehow. The SynQuacer dev box as well as the ROCKPro64 I was using
before also had PCI issues.
I have no idea what you are running with, but neither of these two
machines have any issue with PCI here. What is your kernel version?

[...]

Not related to this issue, but the SynQuacer Developer Box has some problems with the GPU that shipped with it.
There are jumper settings for a firmware workaround, etc.
As for the ROCKPro64, I tried using it with an InfiniBand PCIe adapter, but could not get it to boot.
But as I said, that is unrelated to this issue.

The port to the N1-SDP is
giving me trouble. I understand why it is tainting the kernel; I was
just wondering if I could somehow tell KVM to set this up correctly,
e.g. by setting ICC_SRE_ELx.
KVM doesn't *set* ICC_SRE_EL1.SRE. It is RAO/WI on this machine, which
is perfectly legal. However, KVM traps this access and emulates it
(access_gic_sre() returns vcpu->arch.vgic_cpu.vgic_v3.vgic_sre).
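
For reference, the trap handler is roughly this (sketch of
access_gic_sre() from arch/arm64/kvm/sys_regs.c, abbreviated): writes
are ignored, and reads just return the cached per-vcpu value.

<quote>
static bool access_gic_sre(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
			   const struct sys_reg_desc *r)
{
	if (p->is_write)
		return ignore_write(vcpu, p);

	/* Reads return whatever vgic_v3_enable() cached for this vcpu */
	p->regval = vcpu->arch.vgic_cpu.vgic_v3.vgic_sre;
	return true;
}
</quote>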

So if you see ICC_SRE_EL1.SRE==1 in your guest, that's because
vgic_sre is set to something that is non-zero. The only way for this
bit to be set is in vgic_v3_enable(), which has the following code:

<quote>
	if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) {
		vgic_v3->vgic_sre = (ICC_SRE_EL1_DIB |
				     ICC_SRE_EL1_DFB |
				     ICC_SRE_EL1_SRE);
		vcpu->arch.vgic_cpu.pendbaser = INITIAL_PENDBASER_VALUE;
	} else {
		vgic_v3->vgic_sre = 0;
	}
</quote>

So short of a terrible bug that would dump random values in this
structure, you are setting vgic_model to a GICv3 implementation. This
can only be done from userspace if you are creating a GICv3 irqchip.
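
To illustrate, a minimal sketch of how userspace picks the model
(vm_fd is a placeholder for the fd returned by KVM_CREATE_VM; with a
purely userspace GICv2 model you would skip this call entirely):

<quote>
#include <err.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Sketch: the in-kernel GIC model is chosen via KVM_CREATE_DEVICE on
 * the VM fd. Only KVM_DEV_TYPE_ARM_VGIC_V3 leads to the GICv3 branch
 * of vgic_v3_enable() quoted above. */
static void create_vgic(int vm_fd)
{
	struct kvm_create_device gic = {
		.type = KVM_DEV_TYPE_ARM_VGIC_V2,	/* not ..._VGIC_V3 */
	};

	if (ioctl(vm_fd, KVM_CREATE_DEVICE, &gic) < 0)
		err(1, "KVM_CREATE_DEVICE");
	/* on success, gic.fd refers to the new vgic device */
}
</quote>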

Without seeing what your userspace does, I'm afraid I can't help you
much further. Can you please provide some traces of what it does? A
strace dump would certainly help.
Could it be that this is because I use KVM_ARM_PREFERRED_TARGET and
init the vcpu from this config?
No, that's completely irrelevant.
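
KVM_ARM_PREFERRED_TARGET only reports a target CPU type to feed into
KVM_ARM_VCPU_INIT; it says nothing about the GIC. A sketch of the
usual sequence (vm_fd/vcpu_fd are placeholders, matching the two
ioctls visible in the strace below):

<quote>
#include <err.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Sketch of the usual vcpu bring-up: the preferred target just fills
 * in a CPU type; nothing here touches the GIC model. */
static void init_vcpu(int vm_fd, int vcpu_fd)
{
	struct kvm_vcpu_init init;

	if (ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init) < 0)
		err(1, "KVM_ARM_PREFERRED_TARGET");
	if (ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init) < 0)
		err(1, "KVM_ARM_VCPU_INIT");
}
</quote>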

I have attached an strace log file.
I can't see anything useful there:

openat(AT_FDCWD, "/dev/kvm", O_RDWR)    = 7

// create VM
ioctl(7, _IOC(0, 0xae, 0x1, 0), 0)      = 8

// create vcpu
ioctl(8, _IOC(0, 0xae, 0x41, 0), 0)     = 9

// two memslots
ioctl(8, _IOC(_IOC_WRITE, 0xae, 0x46, 0x20), {slot=0, flags=0, guest_phys_addr=0, memory_size=268435456, userspace_addr=0xffff87a00000}) = 0
ioctl(8, _IOC(_IOC_WRITE, 0xae, 0x46, 0x20), {slot=1, flags=0, guest_phys_addr=0xc0000000, memory_size=268435456, userspace_addr=0xffff44e00000}) = 0

// get kvm_run size, map it
ioctl(7, _IOC(0, 0xae, 0x4, 0), 0)      = 8192
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_SHARED, 9, 0) = 0xffff987ad000

// get KVM_ARM_PREFERRED_TARGET
ioctl(8, _IOC(_IOC_READ, 0xae, 0xaf, 0x20), 0xffffe8018b98) = 0

// vcpu init
ioctl(9, _IOC(_IOC_WRITE, 0xae, 0xae, 0x20), 0xffffe8018b98) = 0

// KVM_CAP_SYNC_MMU?
ioctl(8, _IOC(0, 0xae, 0x3, 0), 0x10)   = 1
I think so, at least I use this ioctl.
// KVM_CAP_GUEST_DEBUG_HW_BPS?
ioctl(8, _IOC(0, 0xae, 0x3, 0), 0x77)   = 6
Same.
// KVM_SET_GUEST_DEBUG
ioctl(9, _IOC(_IOC_WRITE, 0xae, 0x9b, 0x208), 0xffff4447fbf8) = 0

// RUN
ioctl(9, _IOC(0, 0xae, 0x80, 0), 0)     = -1 EINTR (Interrupted system call)

So either you run something that is pretty old and buggy (and I'd like
to know what), or you have uncovered a bug and I would need you to
trace when vgic_sre gets set.
Okay. I'm running on N1-SDP with the latest release 2021.05.26.
uname -a gives:

Linux n1sdp 5.10.12+ #1 SMP Fri Oct 1 11:50:05 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux


Is there a way to debug this without a hardware debugger/JTAG?
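I could presumably rebuild the kernel with a temporary WARN() in
vgic_v3_enable() (arch/arm64/kvm/vgic/vgic-v3.c) to get a backtrace in
dmesg showing who selects the GICv3 model; an untested sketch:

<quote>
	if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) {
		/* temporary debug aid: who picked the GICv3 model? */
		WARN(1, "GICv3 model set for vcpu %d, vgic_sre will be non-zero\n",
		     vcpu->vcpu_id);
		vgic_v3->vgic_sre = (ICC_SRE_EL1_DIB |
				     ICC_SRE_EL1_DFB |
				     ICC_SRE_EL1_SRE);
		vcpu->arch.vgic_cpu.pendbaser = INITIAL_PENDBASER_VALUE;
	} else {
		vgic_v3->vgic_sre = 0;
	}
</quote>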

Thanks,

	M.

Thanks again,

Lukas



