Re: [PATCH] s390x/spinlock: Provide vcpu_is_preempted globally

On Thu, 29 Sep 2016 13:54:16 +0200
Christian Borntraeger <borntraeger@xxxxxxxxxx> wrote:

> this implements the s390 backend for commit
> "kernel/sched: introduce vcpu preempted check interface"
> by simply reusing the existing cpu_is_preempted function.
> 
> Signed-off-by: Christian Borntraeger <borntraeger@xxxxxxxxxx>
> ---
> Martin, Heiko,
> 
> this patch is a minimal change by not touching all existing users of
> cpu_is_preempted in spinlock.c. If you want it differently, let me
> know.
> 
> 
>  arch/s390/include/asm/spinlock.h | 7 +++++++
>  arch/s390/lib/spinlock.c         | 3 ++-
>  2 files changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/s390/include/asm/spinlock.h b/arch/s390/include/asm/spinlock.h
> index 63ebf37..6e82986 100644
> --- a/arch/s390/include/asm/spinlock.h
> +++ b/arch/s390/include/asm/spinlock.h
> @@ -21,6 +21,13 @@ _raw_compare_and_swap(unsigned int *lock, unsigned int old, unsigned int new)
>  	return __sync_bool_compare_and_swap(lock, old, new);
>  }
> 
> +int arch_vcpu_is_preempted(int cpu);
> +#define vcpu_is_preempted cpu_is_preempted
> +static inline bool cpu_is_preempted(int cpu)
> +{
> +	return arch_vcpu_is_preempted(cpu);
> +}
> +
>  /*
>   * Simple spin lock operations.  There are two variants, one clears IRQ's
>   * on the local processor, one does not.
> diff --git a/arch/s390/lib/spinlock.c b/arch/s390/lib/spinlock.c
> index e5f50a7..9f473c8 100644
> --- a/arch/s390/lib/spinlock.c
> +++ b/arch/s390/lib/spinlock.c
> @@ -37,7 +37,7 @@ static inline void _raw_compare_and_delay(unsigned int *lock, unsigned int old)
>  	asm(".insn rsy,0xeb0000000022,%0,0,%1" : : "d" (old), "Q" (*lock));
>  }
> 
> -static inline int cpu_is_preempted(int cpu)
> +int arch_vcpu_is_preempted(int cpu)
>  {
>  	if (test_cpu_flag_of(CIF_ENABLED_WAIT, cpu))
>  		return 0;
> @@ -45,6 +45,7 @@ static inline int cpu_is_preempted(int cpu)
>  		return 0;
>  	return 1;
>  }
> +EXPORT_SYMBOL(arch_vcpu_is_preempted);
> 
>  void arch_spin_lock_wait(arch_spinlock_t *lp)
>  {

Hmm, looking at the code, we now have an additional function call in
the spinlock loops: they call arch_vcpu_is_preempted(), which tests
CIF_ENABLED_WAIT and then calls smp_vcpu_scheduled(). That test used
to be inlined.

A better solution would be to move the CIF_ENABLED_WAIT test into the
smp_vcpu_scheduled() function, rename it to arch_vcpu_is_preempted(),
and export that function. Then cpu_is_preempted() can simply be
replaced by arch_vcpu_is_preempted(), which does make a lot of sense,
no?
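
Something like the following, as an untested sketch in
arch/s390/kernel/smp.c (pcpu_running() and pcpu_devices are the
existing helpers there; note that the return sense inverts, since
"preempted" is the opposite of "scheduled"):

	bool arch_vcpu_is_preempted(int cpu)
	{
		/*
		 * A cpu in enabled wait gave up the cpu voluntarily,
		 * do not count that as preemption.
		 */
		if (test_cpu_flag_of(CIF_ENABLED_WAIT, cpu))
			return false;
		/*
		 * Preempted means the vcpu is not backed by a host
		 * cpu at the moment.
		 */
		if (pcpu_running(pcpu_devices + cpu))
			return false;
		return true;
	}
	EXPORT_SYMBOL(arch_vcpu_is_preempted);

The callers in arch/s390/lib/spinlock.c would then call
arch_vcpu_is_preempted() directly and the local cpu_is_preempted()
helper goes away.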

-- 
blue skies,
   Martin.

"Reality continues to ruin my life." - Calvin.
