Colton,

On Thu, 06 Feb 2025 00:17:44 +0000,
Colton Lewis <coltonlewis@xxxxxxxxxx> wrote:
> 
> asm/kvm_host.h includes asm/arm_pmu.h, which includes perf/arm_pmuv3.h,
> which includes asm/arm_pmuv3.h, which includes asm/kvm_host.h. This
> causes confusing compilation problems when trying to use anything
> defined in any of the headers in any of the other headers. Header
> guards are the only reason this cycle didn't create tons of
> redefinition warnings.
> 
> The motivating example was figuring out it was impossible to use the
> hypercall macros kvm_call_hyp* from kvm_host.h in arm_pmuv3.h. The
> compiler will insist they aren't defined even though kvm_host.h is
> included. Many other examples are lurking which could confuse
> developers in the future.

Well, that's because, contrary to what you asserted in v1, not all
include files are legitimate to use willy-nilly. You have no business
directly using asm/kvm_host.h, and linux/kvm_host.h is what you need.
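In other words, code outside of the low-level arch internals should
pull the arch definitions in indirectly, along the lines of (an
illustrative sketch, not something taken from this patch):

	#include <linux/kvm_host.h>	/* already pulls in asm/kvm_host.h */

rather than reaching for the asm header itself.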
> Break the cycle by taking asm/kvm_host.h out of asm/arm_pmuv3.h,
> because asm/kvm_host.h is huge and we only need a few functions from
> it. Move the required declarations to a new header, asm/kvm_pmu.h.
> 
> Signed-off-by: Colton Lewis <coltonlewis@xxxxxxxxxx>
> ---
> 
> Possibly spinning more definitions out of asm/kvm_host.h would be a
> good idea, but I'm not interested in getting bogged down in which
> functions ideally belong where. This is sufficient to break the

Tough luck. I'm not interested in half-baked solutions, and "what
belongs where" *is* the problem that needs solving.

> cyclical dependency and get rid of the compilation issues. Though I
> mention the one example I found, many other similar problems could
> confuse developers in the future.
> 
> v2:
>  * Make a new header instead of moving kvm functions into the
>    dedicated pmuv3 header
> 
> v1:
> https://lore.kernel.org/kvm/20250204195708.1703531-1-coltonlewis@xxxxxxxxxx/
> 
>  arch/arm64/include/asm/arm_pmuv3.h |  3 +--
>  arch/arm64/include/asm/kvm_host.h  | 14 --------------
>  arch/arm64/include/asm/kvm_pmu.h   | 26 ++++++++++++++++++++++++++
>  include/kvm/arm_pmu.h              |  1 -
>  4 files changed, 27 insertions(+), 17 deletions(-)
>  create mode 100644 arch/arm64/include/asm/kvm_pmu.h
> 
> diff --git a/arch/arm64/include/asm/arm_pmuv3.h b/arch/arm64/include/asm/arm_pmuv3.h
> index 8a777dec8d88..54dd27a7a19f 100644
> --- a/arch/arm64/include/asm/arm_pmuv3.h
> +++ b/arch/arm64/include/asm/arm_pmuv3.h
> @@ -6,9 +6,8 @@
>  #ifndef __ASM_PMUV3_H
>  #define __ASM_PMUV3_H
>  
> -#include <asm/kvm_host.h>
> -
>  #include <asm/cpufeature.h>
> +#include <asm/kvm_pmu.h>
>  #include <asm/sysreg.h>
>  
>  #define RETURN_READ_PMEVCNTRN(n) \
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 7cfa024de4e3..6d4a2e7ab310 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -1385,25 +1385,11 @@ void kvm_arch_vcpu_ctxflush_fp(struct kvm_vcpu *vcpu);
>  void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu);
>  void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu);
>  
> -static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr)
> -{
> -	return (!has_vhe() && attr->exclude_host);
> -}
> -
>  #ifdef CONFIG_KVM
> -void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr);
> -void kvm_clr_pmu_events(u64 clr);
> -bool kvm_set_pmuserenr(u64 val);
>  void kvm_enable_trbe(void);
>  void kvm_disable_trbe(void);
>  void kvm_tracing_set_el1_configuration(u64 trfcr_while_in_guest);
>  #else
> -static inline void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr) {}
> -static inline void kvm_clr_pmu_events(u64 clr) {}
> -static inline bool kvm_set_pmuserenr(u64 val)
> -{
> -	return false;
> -}
>  static inline void kvm_enable_trbe(void) {}
>  static inline void kvm_disable_trbe(void) {}
>  static inline void kvm_tracing_set_el1_configuration(u64 trfcr_while_in_guest) {}
> diff --git a/arch/arm64/include/asm/kvm_pmu.h b/arch/arm64/include/asm/kvm_pmu.h
> new file mode 100644
> index 000000000000..3a8f737504d2
> --- /dev/null
> +++ b/arch/arm64/include/asm/kvm_pmu.h
> @@ -0,0 +1,26 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +
> +#ifndef __KVM_PMU_H
> +#define __KVM_PMU_H
> +
> +void kvm_vcpu_pmu_resync_el0(void);
> +
> +#ifdef CONFIG_KVM
> +void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr);
> +void kvm_clr_pmu_events(u64 clr);
> +bool kvm_set_pmuserenr(u64 val);
> +#else
> +static inline void kvm_set_pmu_events(u64 set, struct perf_event_attr *attr) {}
> +static inline void kvm_clr_pmu_events(u64 clr) {}
> +static inline bool kvm_set_pmuserenr(u64 val)
> +{
> +	return false;
> +}
> +#endif
> +
> +static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr)
> +{
> +	return (!has_vhe() && attr->exclude_host);
> +}
> +
> +#endif

How does this solve your problem of using the HYP-calling macros?

> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> index 147bd3ee4f7b..2c78b1b1a9bb 100644
> --- a/include/kvm/arm_pmu.h
> +++ b/include/kvm/arm_pmu.h
> @@ -74,7 +74,6 @@ int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu);
>  struct kvm_pmu_events *kvm_get_pmu_events(void);
>  void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu);
>  void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu);
> -void kvm_vcpu_pmu_resync_el0(void);
>  
>  #define kvm_vcpu_has_pmu(vcpu)	\
>  	(vcpu_has_feature(vcpu, KVM_ARM_VCPU_PMU_V3))
> 
> base-commit: 2014c95afecee3e76ca4a56956a936e23283f05b

I'm absolutely not keen on *two* PMU-related include files. They both
describe internal APIs, and I don't see a good reason for this
arbitrary split other than "it works better for me and I don't want
to do more than strictly necessary".

For example, include/kvm was only introduced to be able to share
files between architectures, and with 32bit KVM/arm being long dead,
this serves no purpose anymore. Moving these things out of the way
would be a good start and would provide a better base for further
change.

So please present a rationale for what needs to go where and why,
based on usage patterns rather than personal convenience, and then
we'll look at a possible patch. But not the other way around.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.