Re: [PATCH 1/2] powerpc/pseries/svm: Don't access some SPRs

Sukadev Bhattiprolu <sukadev@xxxxxxxxxxxxx> writes:
> Ultravisor disables some CPU features like EBB and BHRB in the HFSCR
> for secure virtual machines (SVMs). If an SVM attempts to access the
> related registers, it will get a Program Interrupt.
>
> Use macros/wrappers to skip accessing EBB and BHRB registers in secure
> VMs.

I continue to dislike this approach.

The result is code that at a glance looks like it's doing one thing,
i.e. reading or writing an SPR, but in a secure VM is in fact doing
nothing.

It's confusing for readers.

It also slows down these already slow register accesses even further by
inserting an mfmsr() in front of every single one.


> diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
> index b3cbb1136bce..026eb20f6d13 100644
> --- a/arch/powerpc/include/asm/reg.h
> +++ b/arch/powerpc/include/asm/reg.h
> @@ -1379,6 +1379,41 @@ static inline void msr_check_and_clear(unsigned long bits)
>  		__msr_check_and_clear(bits);
>  }
>  
> +#ifdef CONFIG_PPC_SVM
> +/*
> + * Move from some "restricted" SPRs.
> + * Secure VMs should not access some registers, as the related features
> + * are disabled in the CPU. If an SVM attempts to read from the given
> + * SPR, return 0. Otherwise behave like a normal mfspr.
> + */
> +#define mfspr_r(rn)						\
> +({								\
> +	unsigned long rval = 0ULL;				\
> +								\
> +	if (!(mfmsr() & MSR_S))					\
> +		asm volatile("mfspr %0," __stringify(rn)	\
> +				: "=r" (rval));			\
> +	rval;							\
> +})
> +
> +/*
> + * Move to some "restricted" SPRs.
> + * Secure VMs should not access some registers, as the related features
> + * are disabled in the CPU. If an SVM attempts to write to the given
> + * SPR, ignore the write. Otherwise behave like a normal mtspr.
> + */
> +#define mtspr_r(rn, v)						\
> +({								\
> +	if (!(mfmsr() & MSR_S))					\
> +		asm volatile("mtspr " __stringify(rn) ",%0" :	\
> +				     : "r" ((unsigned long)(v)) \
> +				     : "memory");		\
> +})
> +#else
> +#define mfspr_r		mfspr
> +#define mtspr_r		mtspr
> +#endif
> +
>  #ifdef __powerpc64__
>  #if defined(CONFIG_PPC_CELL) || defined(CONFIG_PPC_FSL_BOOK3E)
>  #define mftb()		({unsigned long rval;				\
> diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
> index 639ceae7da9d..9a691452ea3b 100644
> --- a/arch/powerpc/kernel/process.c
> +++ b/arch/powerpc/kernel/process.c
> @@ -1059,9 +1059,9 @@ static inline void save_sprs(struct thread_struct *t)
>  		t->dscr = mfspr(SPRN_DSCR);
>  
>  	if (cpu_has_feature(CPU_FTR_ARCH_207S)) {
> -		t->bescr = mfspr(SPRN_BESCR);
> -		t->ebbhr = mfspr(SPRN_EBBHR);
> -		t->ebbrr = mfspr(SPRN_EBBRR);
> +		t->bescr = mfspr_r(SPRN_BESCR);
> +		t->ebbhr = mfspr_r(SPRN_EBBHR);
> +		t->ebbrr = mfspr_r(SPRN_EBBRR);

e.g. here.

This is the fast path of context switch.

That expands to (simplified):

	if (!(mfmsr() & MSR_S))
		asm volatile("mfspr %0, SPRN_BESCR" : "=r" (rval));
	if (!(mfmsr() & MSR_S))
		asm volatile("mfspr %0, SPRN_EBBHR" : "=r" (rval));
	if (!(mfmsr() & MSR_S))
		asm volatile("mfspr %0, SPRN_EBBRR" : "=r" (rval));


If the Ultravisor is going to disable EBB and BHRB then we need new
CPU_FTR bits for those, and the code that accesses those registers
needs to be put behind cpu_has_feature(EBB) etc.
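
Something along these lines (a sketch only -- the CPU_FTR_EBB name, the
bit value, and where the bit gets cleared are all illustrative, not a
worked-out patch):

	/* cputable.h: new feature bit, value illustrative only */
	#define CPU_FTR_EBB	LONG_ASM_CONST(0x0000100000000000)

	/* early boot: clear the bit when the Ultravisor disables EBB */
	if (is_secure_guest())
		cur_cpu_spec->cpu_features &= ~CPU_FTR_EBB;

	/* process.c:save_sprs() and the other EBB users */
	if (cpu_has_feature(CPU_FTR_ARCH_207S) &&
	    cpu_has_feature(CPU_FTR_EBB)) {
		t->bescr = mfspr(SPRN_BESCR);
		t->ebbhr = mfspr(SPRN_EBBHR);
		t->ebbrr = mfspr(SPRN_EBBRR);
	}

cpu_has_feature() is backed by a static key, so on non-secure systems
the check is patched out at boot rather than costing an mfmsr() on
every access.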

cheers
