On Thu, May 17, 2018 at 12:35:10AM -0700, Andrey Smirnov wrote:
> On Thu, May 17, 2018 at 12:08 AM, Sascha Hauer <s.hauer@xxxxxxxxxxxxxx> wrote:
> > On Thu, May 17, 2018 at 12:01:34AM -0700, Andrey Smirnov wrote:
> >> On Wed, May 16, 2018 at 11:55 PM, Sascha Hauer <s.hauer@xxxxxxxxxxxxxx> wrote:
> >> > On Wed, May 16, 2018 at 01:00:17PM -0700, Andrey Smirnov wrote:
> >> >> Seeing
> >> >>
> >> >>     create_sections(ttb, 0, PAGE_SIZE, ...);
> >> >>
> >> >> as the code that creates the initial flat 4 GiB mapping is a bit
> >> >> less intuitive than
> >> >>
> >> >>     create_sections(ttb, 0, SZ_4G, ...);
> >> >>
> >> >> so, for the sake of clarity, convert create_sections() to accept a
> >> >> size in bytes and do the bytes -> MiB conversion as part of the
> >> >> function.
> >> >>
> >> >> NOTE: To keep all of the arguments of create_sections() 32-bit:
> >> >>
> >> >>   - Move all of the real code into a helper function accepting the
> >> >>     first and last addresses of the region (e.g. passing 0 and
> >> >>     U32_MAX means all 4 GiB of address space)
> >> >>
> >> >>   - Convert create_sections() into a macro that does the necessary
> >> >>     size -> last address conversion under the hood to preserve the
> >> >>     original API
> >> >>
> >> >> Signed-off-by: Andrey Smirnov <andrew.smirnov@xxxxxxxxx>
> >> >> ---
> >> >>  arch/arm/cpu/mmu-early.c |  4 ++--
> >> >>  arch/arm/cpu/mmu.c       |  4 ++--
> >> >>  arch/arm/cpu/mmu.h       | 22 ++++++++++++++++------
> >> >>  3 files changed, 20 insertions(+), 10 deletions(-)
> >> >>
> >> >> diff --git a/arch/arm/cpu/mmu-early.c b/arch/arm/cpu/mmu-early.c
> >> >> index 70ece0d2f..136b33c3a 100644
> >> >> --- a/arch/arm/cpu/mmu-early.c
> >> >> +++ b/arch/arm/cpu/mmu-early.c
> >> >> @@ -16,7 +16,7 @@ static void map_cachable(unsigned long start, unsigned long size)
> >> >>  	start = ALIGN_DOWN(start, SZ_1M);
> >> >>  	size = ALIGN(size, SZ_1M);
> >> >>
> >> >> -	create_sections(ttb, start, size >> 20, PMD_SECT_AP_WRITE |
> >> >> +	create_sections(ttb, start, size, PMD_SECT_AP_WRITE |
> >> >>  			PMD_SECT_AP_READ | PMD_TYPE_SECT | PMD_SECT_WB);
> >> >>  }
> >> >>
> >> >> @@ -30,7 +30,7 @@ void mmu_early_enable(unsigned long membase, unsigned long memsize,
> >> >>  	set_ttbr(ttb);
> >> >>  	set_domain(DOMAIN_MANAGER);
> >> >>
> >> >> -	create_sections(ttb, 0, 4096, PMD_SECT_AP_WRITE |
> >> >> +	create_sections(ttb, 0, SZ_4G, PMD_SECT_AP_WRITE |
> >> >>  			PMD_SECT_AP_READ | PMD_TYPE_SECT);
> >> >>
> >> >>  	map_cachable(membase, memsize);
> >> >> diff --git a/arch/arm/cpu/mmu.c b/arch/arm/cpu/mmu.c
> >> >> index 0c367e47c..f02c99f65 100644
> >> >> --- a/arch/arm/cpu/mmu.c
> >> >> +++ b/arch/arm/cpu/mmu.c
> >> >> @@ -460,7 +460,7 @@ static int mmu_init(void)
> >> >>  	set_domain(DOMAIN_MANAGER);
> >> >>
> >> >>  	/* create a flat mapping using 1MiB sections */
> >> >> -	create_sections(ttb, 0, PAGE_SIZE, PMD_SECT_AP_WRITE | PMD_SECT_AP_READ |
> >> >> +	create_sections(ttb, 0, SZ_4G, PMD_SECT_AP_WRITE | PMD_SECT_AP_READ |
> >> >>  			PMD_TYPE_SECT);
> >> >>  	__mmu_cache_flush();
> >> >>
> >> >> @@ -472,7 +472,7 @@ static int mmu_init(void)
> >> >>  	 * below
> >> >>  	 */
> >> >>  	for_each_memory_bank(bank) {
> >> >> -		create_sections(ttb, bank->start, bank->size >> 20,
> >> >> +		create_sections(ttb, bank->start, bank->size,
> >> >>  				PMD_SECT_DEF_CACHED);
> >> >>  		__mmu_cache_flush();
> >> >>  	}
> >> >> diff --git a/arch/arm/cpu/mmu.h b/arch/arm/cpu/mmu.h
> >> >> index d71cd7e38..52689359a 100644
> >> >> --- a/arch/arm/cpu/mmu.h
> >> >> +++ b/arch/arm/cpu/mmu.h
> >> >> @@ -27,16 +27,26 @@ static inline void set_domain(unsigned val)
> >> >>  }
> >> >>
> >> >>  static inline void
> >> >> -create_sections(uint32_t *ttb, unsigned long addr,
> >> >> -		int size_m, unsigned int flags)
> >> >> +__create_sections(uint32_t *ttb, unsigned long first,
> >> >> +		  unsigned long last, unsigned int flags)
> >> >>  {
> >> >> -	unsigned long ttb_start = addr >> 20;
> >> >> -	unsigned long ttb_end = ttb_start + size_m;
> >> >> -	unsigned int i;
> >> >> +	unsigned long ttb_start = first >> 20;
> >> >> +	unsigned long ttb_end = (last >> 20) + 1;
> >> >> +	unsigned int i, addr;
> >> >>
> >> >> -	for (i = ttb_start; i < ttb_end; i++, addr += SZ_1M)
> >> >> +	for (i = ttb_start, addr = first; i < ttb_end; i++, addr += SZ_1M)
> >> >>  		ttb[i] = addr | flags;
> >> >>  }
> >> >>
> >> >> +#define create_sections(ttb, addr, size, flags)			\
> >> >> +	({								\
> >> >> +		typeof(addr) __addr = addr;				\
> >> >> +		typeof(size) __size = size;				\
> >> >> +		/* Check for overflow */				\
> >> >> +		BUG_ON(__addr > ULONG_MAX - __size + 1);		\
> >> >> +		__create_sections(ttb, __addr, __addr + (size) - 1,	\
> >> >> +				  flags);				\
> >> >> +	})
> >> >
> >> > Why do you preserve the original API of create_sections()? I would
> >> > just change it. We have only a few callers and they are easy to
> >> > change.
> >> >
> >>
> >> Mostly because it keeps the "addr + size - 1" arithmetic in one place
> >> instead of spreading it to every caller. If you'd rather have that
> >> instead of the macro above, I can change it in v3.
> >
> > I agree that the end address calculation is often a source of
> > off-by-one errors, but I would still prefer not to use macros to hide
> > that.
> >
>
> I can't think of a way to keep the arithmetic in one place without
> either making one of the arguments 64-bit or resorting to a macro.
> AFAICT the options are either to drop the macro and change the API, or
> to keep the API and the arithmetic in one place by keeping the macro.
>
> I am happy with either, so let me know which you'd prefer.

I meant that the prototype of create_sections() should be changed to:

    static void create_sections(uint32_t *ttb, unsigned long start,
                                unsigned long end, unsigned int flags);

No need for additional encapsulation, let's just change the callers.

Sascha

-- 
Pengutronix e.K.                           |                             |
Industrial Linux Solutions                 | http://www.pengutronix.de/  |
Peiner Str. 6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-0    |
Amtsgericht Hildesheim, HRA 2686           | Fax:   +49-5121-206917-5555 |
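
For readers following the thread, here is a minimal sketch of what the
suggested interface could look like once the callers are converted. It
reuses the loop body and flag names from the quoted patch; the caller
snippets assume the ttb and bank variables from the quoted mmu.c /
mmu-early.c context and are illustrative only, not the final patch:

    /*
     * Sketch: the helper takes an inclusive [start, end] range, so the
     * end-address arithmetic moves out to the call sites.
     */
    static void create_sections(uint32_t *ttb, unsigned long start,
                                unsigned long end, unsigned int flags)
    {
            unsigned long ttb_start = start >> 20;
            unsigned long ttb_end = (end >> 20) + 1;
            unsigned long addr = start;
            unsigned int i;

            /* one 1 MiB section entry per translation table slot */
            for (i = ttb_start; i < ttb_end; i++, addr += SZ_1M)
                    ttb[i] = addr | flags;
    }

    /* flat 4 GiB mapping: end is the last valid 32-bit address */
    create_sections(ttb, 0, 0xffffffff, PMD_SECT_AP_WRITE |
                    PMD_SECT_AP_READ | PMD_TYPE_SECT);

    /* per-bank mapping: the "- 1" is now visible at the call site */
    create_sections(ttb, bank->start, bank->start + bank->size - 1,
                    PMD_SECT_DEF_CACHED);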