> > -#define this_cpu_ptr_nvhe(sym)	this_cpu_ptr(&kvm_nvhe_sym(sym))
> > -#define per_cpu_ptr_nvhe(sym, cpu)	per_cpu_ptr(&kvm_nvhe_sym(sym), cpu)
> > +/* Array of percpu base addresses. Length of the array is nr_cpu_ids. */
> > +extern unsigned long *kvm_arm_hyp_percpu_base;
> > +
> > +/*
> > + * Compute pointer to a symbol defined in nVHE percpu region.
> > + * Returns NULL if percpu memory has not been allocated yet.
> > + */
> > +#define this_cpu_ptr_nvhe(sym)	per_cpu_ptr_nvhe(sym, smp_processor_id())
>
> Don't you run into similar problems here with the pcpu accessors when
> CONFIG_DEBUG_PREEMPT=y? I'm worried we can end up with an unresolved
> reference to debug_smp_processor_id().

Fortunately not. This now doesn't use the generic macros at all.

> >  /* The VMID used in the VTTBR */
> >  static atomic64_t kvm_vmid_gen = ATOMIC64_INIT(1);
> > @@ -1258,6 +1259,15 @@ long kvm_arch_vm_ioctl(struct file *filp,
> >  	}
> >  }
> >
> > +#define kvm_hyp_percpu_base(cpu) ((unsigned long)per_cpu_ptr_nvhe(__per_cpu_start, cpu))
>
> Having both kvm_arm_hyp_percpu_base and kvm_hyp_percpu_base be so
> completely different is a recipe for disaster! Please can you rename
> one/both of them to make it clearer what they represent?

I am heavily simplifying this code in v4. I got rid of this macro
completely, so hopefully there will be no confusion.

> > -	if (!kvm_pmu_switch_needed(attr))
> > +	if (!ctx || !kvm_pmu_switch_needed(attr))
> >  		return;
> >
> >  	if (!attr->exclude_host)
> > @@ -49,6 +49,9 @@ void kvm_clr_pmu_events(u32 clr)
> >  {
> >  	struct kvm_host_data *ctx = this_cpu_ptr_hyp(kvm_host_data);
> >
> > +	if (!ctx)
> > +		return;
>
> I guess this should only happen if KVM failed to initialise or something,
> right? (e.g. if we were booted at EL1). If so, I'm wondering whether it
> would be better to do something like:
>
> 	if (!is_hyp_mode_available())
> 		return;
>
> 	WARN_ON_ONCE(!ctx);
>
> so that any unexpected NULL pointer there screams loudly, rather than causes
> the register switch to be silently ignored. What do you think?

Unfortunately, this happens on every boot. I don't fully understand how
the boot order is determined, so please correct me if this makes no
sense, but kvm_clr_pmu_events is called as part of
CPUHP_AP_PERF_ARM_STARTING. The first time that happens is before KVM
has initialized (tested by inserting BUG_ON(!ctx)). That was never a
problem before: the per-CPU memory is there and it's all zeroes. It
becomes a problem with this patch because the per-CPU memory is not
there *yet*.

_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm