On 02/03/17 18:31, Paul Burton wrote:
> We allocate memory for a ready_count variable per-CPU, which is accessed
> via a cached non-coherent TLB mapping to perform synchronisation between
> threads within the core using LL/SC instructions. In order to ensure
> that the variable is contained within its own data cache line we
> allocate 2 lines worth of memory & align the resulting pointer to a line
> boundary. This is however unnecessary, since kmalloc is guaranteed to
> return memory which is at least cache-line aligned (see
> ARCH_DMA_MINALIGN). Stop the redundant manual alignment.
>
> Besides cleaning up the code & avoiding needless work, this has the side
> effect of avoiding an arithmetic error found by Brian on 64 bit systems

Small nit: 'Bryan' not 'Brian' - never mind.

> due to the 32 bit size of the former dlinesz. This led the ready_count
> variable to have its upper 32b cleared erroneously for MIPS64 kernels,
> causing problems when ready_count was later used on MIPS64 via cpuidle.
>
> Signed-off-by: Paul Burton <paul.burton@xxxxxxxxxx>
> Fixes: 3179d37ee1ed ("MIPS: pm-cps: add PM state entry code for CPS systems")
> Reported-by: Bryan O'Donoghue <bryan.odonoghue@xxxxxxxxxx>
> Cc: Bryan O'Donoghue <bryan.odonoghue@xxxxxxxxxx>
> Cc: linux-mips@xxxxxxxxxxxxxx
> Cc: ralf@xxxxxxxxxxxxxx
> Cc: stable <stable@xxxxxxxxxxxxxxx> # v3.16+

Reviewed-by: Bryan O'Donoghue <bryan.odonoghue@xxxxxxxxxx>
Tested-by: Bryan O'Donoghue <bryan.odonoghue@xxxxxxxxxx>