Re: [PATCH kvm-unit-tests] arm: fix crash when caches are off

----- Original Message -----
> Il 16/09/2014 04:06, Andrew Jones ha scritto:
> > We shouldn't try Load-Exclusive instructions unless we've enabled
> > memory management, as these instructions depend on the data cache
> > unit's coherency monitor. This patch adds a new setup boolean,
> > initialized to false, that is used to guard Load-Exclusive
> > instructions. Eventually we'll add more setup code that sets it true.
> > 
> > Note: This problem became visible on boards with Cortex-A7 processors.
> > Testing with Cortex-A15 didn't expose it, as those may have an
> > external coherency monitor that still allows the instruction to
> > execute (on A7s we got data aborts). Although even on A15s it's not
> > clear from the spec whether the instructions will behave as expected
> > while caches are off, so we no longer allow Load-Exclusive
> > instructions on those processors without caches enabled either.
> > 
> > Signed-off-by: Andrew Jones <drjones@xxxxxxxxxx>
> > ---
> >  lib/arm/asm/setup.h |  2 ++
> >  lib/arm/setup.c     |  1 +
> >  lib/arm/spinlock.c  | 10 ++++++++++
> >  3 files changed, 13 insertions(+)
> > 
> > diff --git a/lib/arm/asm/setup.h b/lib/arm/asm/setup.h
> > index 21445ef2085fc..9c54c184e2866 100644
> > --- a/lib/arm/asm/setup.h
> > +++ b/lib/arm/asm/setup.h
> > @@ -20,6 +20,8 @@ extern phys_addr_t __phys_offset, __phys_end;
> >  #define PHYS_SIZE		(1ULL << PHYS_SHIFT)
> >  #define PHYS_MASK		(PHYS_SIZE - 1ULL)
> >  
> > +extern bool mem_caches_enabled;
> > +
> >  #define L1_CACHE_SHIFT		6
> >  #define L1_CACHE_BYTES		(1 << L1_CACHE_SHIFT)
> >  #define SMP_CACHE_BYTES		L1_CACHE_BYTES
> > diff --git a/lib/arm/setup.c b/lib/arm/setup.c
> > index 3941c9757dcb2..f7ed639c9d499 100644
> > --- a/lib/arm/setup.c
> > +++ b/lib/arm/setup.c
> > @@ -25,6 +25,7 @@ u32 cpus[NR_CPUS] = { [0 ... NR_CPUS-1] = (~0UL) };
> >  int nr_cpus;
> >  
> >  phys_addr_t __phys_offset, __phys_end;
> > +bool mem_caches_enabled;
> >  
> >  static void cpu_set(int fdtnode __unused, u32 regval, void *info __unused)
> >  {
> > diff --git a/lib/arm/spinlock.c b/lib/arm/spinlock.c
> > index d8a6d4c3383d6..43539c5e84062 100644
> > --- a/lib/arm/spinlock.c
> > +++ b/lib/arm/spinlock.c
> > @@ -1,12 +1,22 @@
> >  #include "libcflat.h"
> >  #include "asm/spinlock.h"
> >  #include "asm/barrier.h"
> > +#include "asm/setup.h"
> >  
> >  void spin_lock(struct spinlock *lock)
> >  {
> >  	u32 val, fail;
> >  
> >  	dmb();
> > +
> > +	/*
> > +	 * Without caches enabled Load-Exclusive instructions may fail.
> > +	 * In that case we do nothing, and just hope the caller knows
> > +	 * what they're doing.
> > +	 */
> > +	if (!mem_caches_enabled)
> > +		return;
> 
> Should it at least write 1 to the spinlock?

I thought about that. On one hand we might get a somewhat functional
synchronization mechanism, which may be enough for a unit test that
doesn't enable caches but still needs one. On the other hand, we know
it's broken, so we don't really want any unit tests that need
synchronization but don't enable caches. I chose not to write a 1 in
the hope that if a unit test introduces a race, the race will be easier
to expose and fix. That said, I'm not strongly biased, as we'd still
have a race either way, which may or may not be easy to expose. So if
the majority prefers a best-effort approach, I'll spin a v2.
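For reference, the best-effort variant under discussion would look
roughly like this. This is a host-compilable sketch, not the real
lib/arm/spinlock.c code: spin_lock_best_effort and the trimmed-down
struct are illustrative names, and the actual ldrex/strex acquire loop
is elided.

```c
#include <stdbool.h>

/* Minimal stand-in for the kvm-unit-tests spinlock type. */
struct spinlock {
	unsigned int v;
};

bool mem_caches_enabled = false;

/*
 * Best-effort fallback (hypothetical): with caches off we can't use
 * ldrex/strex, so instead of returning immediately we do a plain
 * store of 1. This marks the lock held for any later reader, but
 * provides no atomicity -- two CPUs racing here can both "acquire"
 * the lock.
 */
void spin_lock_best_effort(struct spinlock *lock)
{
	if (!mem_caches_enabled) {
		lock->v = 1;	/* racy plain store; no coherency monitor needed */
		return;
	}
	/* the ldrex/strex acquire loop would run here when caches are on */
}
```

The trade-off is exactly as described above: the plain store papers over
the missing exclusives, so a racy test may appear to work rather than
failing loudly.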

drew

> 
> Paolo
> 
> >  	do {
> >  		asm volatile(
> >  		"1:	ldrex	%0, [%2]\n"
> > 
> 
> 