On Thu, Oct 27, 2022 at 10:17:45PM +0000, Oliver Upton wrote:
> The use of RCU is necessary to safely change the stage-2 page tables in
> parallel. Acquire and release the RCU read lock when traversing the page
> tables.
> 
> Use the _raw() flavor of rcu_dereference when changes to the page tables
> are otherwise protected from parallel software walkers (e.g. holding the
> write lock).
> 
> Signed-off-by: Oliver Upton <oliver.upton@xxxxxxxxx>
> ---
>  arch/arm64/include/asm/kvm_pgtable.h | 41 ++++++++++++++++++++++++++++
>  arch/arm64/kvm/hyp/pgtable.c         | 10 ++++++-
>  2 files changed, 50 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
> index e70cf57b719e..d1859e8550df 100644
> --- a/arch/arm64/include/asm/kvm_pgtable.h
> +++ b/arch/arm64/include/asm/kvm_pgtable.h
> @@ -37,6 +37,13 @@ static inline u64 kvm_get_parange(u64 mmfr0)
>  
>  typedef u64 kvm_pte_t;
>  
> +/*
> + * RCU cannot be used in a non-kernel context such as the hyp. As such, page
> + * table walkers used in hyp do not call into RCU and instead use other
> + * synchronization mechanisms (such as a spinlock).
> + */
> +#if defined(__KVM_NVHE_HYPERVISOR__) || defined(__KVM_VHE_HYPERVISOR__)
> +
>  typedef kvm_pte_t *kvm_pteref_t;
>  
>  static inline kvm_pte_t *kvm_dereference_pteref(kvm_pteref_t pteref, bool shared)
> @@ -44,6 +51,40 @@ static inline kvm_pte_t *kvm_dereference_pteref(kvm_pteref_t pteref, bool shared
>  	return pteref;
>  }
>  
> +static inline void kvm_pgtable_walk_begin(void) {}
> +static inline void kvm_pgtable_walk_end(void) {}
> +
> +static inline bool kvm_pgtable_walk_lock_held(void)
> +{
> +	return true;
> +}
> +
> +#else
> +
> +typedef kvm_pte_t __rcu *kvm_pteref_t;
> +
> +static inline kvm_pte_t *kvm_dereference_pteref(kvm_pteref_t pteref, bool shared)
> +{
> +	return rcu_dereference_check(pteref, shared);

I accidentally squashed the fix for !shared into 9/15, not this patch.
Fix ready for v4.

-- 
Thanks,
Oliver
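
For reference, a sketch of what the !shared fix looks like against this
version of the helper (the exact shape it takes in v4 may differ):

static inline kvm_pte_t *kvm_dereference_pteref(kvm_pteref_t pteref, bool shared)
{
	/*
	 * Exclusive walkers hold the write lock, so the dereference is safe
	 * without the RCU read lock; shared (parallel) walkers must hold it.
	 */
	return rcu_dereference_check(pteref, !shared);
}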