On Mon, Feb 05, 2024 at 04:34:49PM +0100, Gregory CLEMENT wrote:
> From: Jiaxun Yang <jiaxun.yang@xxxxxxxxxxx>
>
> Now the exception vectors for CPS systems are allocated on the fly
> with memblock as well.
>
> It will try to allocate from KSEG1 first, and then try to allocate
> in low 4G if possible.
>
> The main reset vector is now generated by uasm, to avoid tons
> of patches to the code. Other vectors are copied to the location
> later.
>
> gc: use the new macro CKSEG[0A1]DDR_OR_64BIT()
>     move 64bit fix into another patch
>     fix cache issue with mips_cps_core_entry
>     rewrite the patch to reduce the diff stat
>
> Signed-off-by: Jiaxun Yang <jiaxun.yang@xxxxxxxxxxx>
> Signed-off-by: Gregory CLEMENT <gregory.clement@xxxxxxxxxxx>
> ---
>  arch/mips/include/asm/mips-cm.h |   1 +
>  arch/mips/include/asm/smp-cps.h |   4 +-
>  arch/mips/kernel/cps-vec.S      |  48 ++------
>  arch/mips/kernel/smp-cps.c      | 171 +++++++++++++++++++++++++++-----
>  4 files changed, 157 insertions(+), 67 deletions(-)

[..]

> diff --git a/arch/mips/kernel/smp-cps.c b/arch/mips/kernel/smp-cps.c
> index dd55d59b88db3..f4cdd50177e0b 100644
> --- a/arch/mips/kernel/smp-cps.c
> +++ b/arch/mips/kernel/smp-cps.c
> @@ -7,6 +7,7 @@
>  #include <linux/cpu.h>
>  #include <linux/delay.h>
>  #include <linux/io.h>
> +#include <linux/memblock.h>
>  #include <linux/sched/task_stack.h>
>  #include <linux/sched/hotplug.h>
>  #include <linux/slab.h>
> @@ -25,7 +26,34 @@
>  #include <asm/time.h>
>  #include <asm/uasm.h>
>
> +#define BEV_VEC_SIZE	0x500
> +#define BEV_VEC_ALIGN	0x1000
> +
> +#define A0	4
> +#define A1	5
> +#define T9	25
> +#define K0	26
> +#define K1	27
> +
> +#define C0_STATUS	12, 0
> +#define C0_CAUSE	13, 0
> +
> +#define ST0_NMI_BIT	19
> +#ifdef CONFIG_64BIT
> +#define ST0_KX_IF_64	ST0_KX
> +#else
> +#define ST0_KX_IF_64	0
> +#endif

Please move these together with the other defines in arch/mips/kvm/entry.c
to a header file (arch/mips/include/asm/uasm.h sounds like a good fit).

> +static void __init setup_cps_vecs(void)
> +{
> +	extern void excep_tlbfill(void);
> +	extern void excep_xtlbfill(void);
> +	extern void excep_cache(void);
> +	extern void excep_genex(void);
> +	extern void excep_intex(void);
> +	extern void excep_ejtag(void);

I know this is used a lot in arch/mips, but please don't add another one;
put these declarations in a header file instead. IMHO checkpatch should
have warned you about that.

> +	/* We want to ensure cache is clean before writing uncached mem */
> +	blast_dcache_range(CKSEG0ADDR_OR_64BIT(cps_vec_pa), CKSEG0ADDR_OR_64BIT(cps_vec_pa) + BEV_VEC_SIZE);
> +	bc_wback_inv(CKSEG0ADDR_OR_64BIT(cps_vec_pa), BEV_VEC_SIZE);
> +	__sync();

How about doing the generation with cached memory and flushing the caches
after that?

Thomas.

-- 
Crap can work. Given enough thrust pigs will fly, but it's not
necessarily a good idea.                                [ RFC1925, 2.3 ]
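
For illustration only, a rough sketch of the header move suggested above.
The choice of headers and the exact split are assumptions, not part of the
patch; the define values and stub names are taken from the quoted hunks.

/* arch/mips/include/asm/uasm.h: GPR numbers and CP0 (reg, sel) pairs
 * shared by uasm users, currently duplicated in arch/mips/kvm/entry.c
 * and arch/mips/kernel/smp-cps.c. */
#define A0		4
#define A1		5
#define T9		25
#define K0		26
#define K1		27

#define C0_STATUS	12, 0
#define C0_CAUSE	13, 0

/* A header such as arch/mips/include/asm/smp-cps.h could then declare
 * the exception stubs once, instead of extern declarations inside
 * setup_cps_vecs(). */
extern void excep_tlbfill(void);
extern void excep_xtlbfill(void);
extern void excep_cache(void);
extern void excep_genex(void);
extern void excep_intex(void);
extern void excep_ejtag(void);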
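
And a minimal sketch of the "generate with cached memory, flush afterwards"
idea from the last comment, in the context of smp-cps.c. The identifiers
(cps_vec_pa, BEV_VEC_SIZE, CKSEG0ADDR_OR_64BIT and the cache helpers) come
from the quoted patch; the function name and exact ordering are assumptions,
not the final code.

static void __init cps_gen_vec_cached(phys_addr_t cps_vec_pa)
{
	unsigned long cached = CKSEG0ADDR_OR_64BIT(cps_vec_pa);
	u32 *p = (u32 *)cached;

	/* uasm emits the reset vector through the cached alias ... */
	/* uasm_i_mfc0(&p, K0, C0_STATUS); ... */

	/* ... then write the generated code back to memory so cores
	 * fetching the reset vector uncached see the final instructions. */
	blast_dcache_range(cached, cached + BEV_VEC_SIZE);
	bc_wback_inv(cached, BEV_VEC_SIZE);
	__sync();
}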