On Wed, Sep 01, 2021 at 11:48:12PM +0000, Oliver Upton wrote:
> On Wed, Sep 01, 2021 at 09:14:07PM +0000, Raghavendra Rao Ananta wrote:
> > At times, such as when in the interrupt handler, the guest wants to
> > get the vCPU-id that it's running on. As a result, introduce
> > get_vcpuid() that parses the MPIDR_EL1 and returns the vcpuid to the
> > requested caller.
> >
> > Signed-off-by: Raghavendra Rao Ananta <rananta@xxxxxxxxxx>
> > ---
> >  .../selftests/kvm/include/aarch64/processor.h | 19 +++++++++++++++++++
> >  1 file changed, 19 insertions(+)
> >
> > diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
> > index c35bb7b8e870..8b372cd427da 100644
> > --- a/tools/testing/selftests/kvm/include/aarch64/processor.h
> > +++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
> > @@ -251,4 +251,23 @@ static inline void local_irq_disable(void)
> >  	asm volatile("msr daifset, #3" : : : "memory");
> >  }
> >
> > +#define MPIDR_LEVEL_BITS 8
> > +#define MPIDR_LEVEL_SHIFT(level) (MPIDR_LEVEL_BITS * level)
> > +#define MPIDR_LEVEL_MASK ((1 << MPIDR_LEVEL_BITS) - 1)
> > +#define MPIDR_AFFINITY_LEVEL(mpidr, level) \
> > +	((mpidr >> MPIDR_LEVEL_SHIFT(level)) & MPIDR_LEVEL_MASK)
> > +
> > +static inline uint32_t get_vcpuid(void)
> > +{
> > +	uint32_t vcpuid = 0;
> > +	uint64_t mpidr = read_sysreg(mpidr_el1);
> > +
> > +	/* KVM limits only 16 vCPUs at level 0 */
> > +	vcpuid = mpidr & 0x0f;
> > +	vcpuid |= MPIDR_AFFINITY_LEVEL(mpidr, 1) << 4;
> > +	vcpuid |= MPIDR_AFFINITY_LEVEL(mpidr, 2) << 12;
> > +
> > +	return vcpuid;
> > +}
>
> Are we guaranteed that KVM will always compose vCPU IDs the same way? I
> do not believe this is guaranteed ABI.

I don't believe we are. At least in QEMU we take pains to avoid that
assumption.

> For the base case, you could pass the vCPU ID as an arg to the guest
> function.
>
> I do agree that finding the vCPU ID is a bit more challenging in an
> interrupt context.
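[For reference, the affinity packing the proposed get_vcpuid() performs is easy to model off-target. The sketch below is a hypothetical stand-in for illustration only, not the selftest helper itself, since read_sysreg(mpidr_el1) only exists in guest context; it takes the MPIDR value as a parameter instead.]

```c
#include <assert.h>
#include <stdint.h>

/* Same macros as in the patch, with the arguments parenthesized. */
#define MPIDR_LEVEL_BITS		8
#define MPIDR_LEVEL_SHIFT(level)	(MPIDR_LEVEL_BITS * (level))
#define MPIDR_LEVEL_MASK		((1 << MPIDR_LEVEL_BITS) - 1)
#define MPIDR_AFFINITY_LEVEL(mpidr, level) \
	(((mpidr) >> MPIDR_LEVEL_SHIFT(level)) & MPIDR_LEVEL_MASK)

/*
 * Pure function modeling the packing get_vcpuid() assumes:
 * 4 bits of Aff0, then 8 bits each of Aff1 and Aff2.
 */
static inline uint32_t mpidr_to_vcpuid(uint64_t mpidr)
{
	uint32_t vcpuid;

	vcpuid = mpidr & 0x0f;				/* Aff0, capped at 16 */
	vcpuid |= MPIDR_AFFINITY_LEVEL(mpidr, 1) << 4;	/* Aff1 */
	vcpuid |= MPIDR_AFFINITY_LEVEL(mpidr, 2) << 12;	/* Aff2 */

	return vcpuid;
}
```

[E.g. an MPIDR with Aff2=2, Aff1=1, Aff0=3 (0x020103) packs to 0x2013 under this scheme, which is exactly the layout Oliver is pointing out is not guaranteed ABI.]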
> Maybe use a ucall to ask userspace? But of course,
> every test implements its own run loop, so it's yet another case that
> tests need to handle.
>
> Or, you could allocate an array at runtime of length KVM_CAP_MAX_VCPUS
> (use the KVM_CHECK_EXTENSION ioctl to get the value). Once all vCPUs are
> instantiated, iterate over them from userspace to populate the {MPIDR,
> VCPU_ID} map. You'd need to guarantee that callers initialize the vGIC
> *after* adding vCPUs to the guest.

I agree with this approach. It may even make sense to create a common
function that returns a {cpu_id,vcpu_index} map for other tests to use.

Thanks,
drew

> --
> Thanks,
> Oliver
>
> > #endif /* SELFTEST_KVM_PROCESSOR_H */
> > --
> > 2.33.0.153.gba50c8fa24-goog
> >
> _______________________________________________
> kvmarm mailing list
> kvmarm@xxxxxxxxxxxxxxxxxxxxx
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
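[For illustration, the {MPIDR, VCPU_ID} map approach discussed above might look like the following userspace-side sketch. The names struct vcpu_mpidr_map and mpidr_to_vcpu_index are hypothetical; a real implementation would size the table from KVM_CHECK_EXTENSION(KVM_CAP_MAX_VCPUS) and fill it with KVM_GET_ONE_REG reads of each vCPU's MPIDR_EL1. Only the lookup logic is shown.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Hypothetical table built once in userspace, after all vCPUs have
 * been added (and before the vGIC is initialized): entry i holds the
 * MPIDR_EL1 value of the vCPU with index i.
 */
struct vcpu_mpidr_map {
	const uint64_t *mpidrs;
	size_t nr_vcpus;
};

/* Map an MPIDR value back to a vCPU index; returns -1 if not found. */
static int mpidr_to_vcpu_index(const struct vcpu_mpidr_map *map,
			       uint64_t mpidr)
{
	/*
	 * Compare only the affinity fields (Aff0-Aff2 in bits [23:0],
	 * Aff3 in bits [39:32]); MPIDR_EL1 also carries non-affinity
	 * bits such as M, U and MT that should be ignored.
	 */
	const uint64_t aff_mask = 0xff00ffffffULL;
	size_t i;

	for (i = 0; i < map->nr_vcpus; i++) {
		if ((map->mpidrs[i] & aff_mask) == (mpidr & aff_mask))
			return (int)i;
	}

	return -1;
}
```

[A guest-side get_vcpuid() would then be a ucall or shared-memory lookup into this table rather than a hard-coded decode of the MPIDR layout, so it keeps working even if KVM changes how it composes vCPU IDs.]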