On Thu, Feb 10, 2022 at 12:44:04AM +0100, Heiko Stübner wrote:
> Hi,
>
> On Wednesday, 9 February 2022 at 18:49:19 CET, Jisheng Zhang wrote:
> > On Wed, Feb 09, 2022 at 01:37:46PM +0100, Heiko Stuebner wrote:
> > > Svpbmt is an extension defining "Supervisor-mode: page-based memory
> > > types" for things like non-cacheable pages or I/O memory pages.
> > >
> > > So this is my 2nd try at implementing Svpbmt (and the diverging D1
> > > memory types) using the alternatives framework.
> > >
> > > This includes a number of changes to the alternatives mechanism
> > > itself, the biggest one being the move to a more central location,
> > > as I expect that in the future nearly every chip will need some
> > > sort of patching, be it for errata or for optional features
> > > (svpbmt or others).
> > >
> > > The dt-binding for svpbmt itself is of course not finished and is
> > > still using the binding introduced in previous versions, as where
> > > to put an svpbmt property in the devicetree is still under
> > > discussion. Atish seems to be working on a framework for
> > > extensions [0].
> > >
> > > The series also introduces support for the memory types of the D1,
> > > which are implemented differently from svpbmt. But when patching
> > > anyway, it's pretty clean to add the D1 variant via ALTERNATIVE_2
> > > at the same location.
> > >
> > > The only slightly bigger difference is that the "normal" type is
> > > not 0 as with svpbmt, so kernel patches for this PMA type need to
> > > be applied even before the MMU is brought up, so the series
> > > introduces a separate stage for that.
> > >
> > > In theory this series is 3 parts:
> > > - sbi cache-flush / null-ptr
> > > - alternatives improvements
> > > - svpbmt+d1
> > >
> > > So especially patches from the first 2 areas could be applied when
> > > deemed ready; I just thought to keep it together to showcase the
> > > end goal without requiring jumping between different series.
> > >
> > > The sbi cache-flush patch is based on Atish's sparse-hartid patch
> > > [1], as it touches a similar area in mm/cacheflush.c.
> > >
> > > I picked the recipient list from the previous version; hopefully I
> > > didn't forget anybody.
> > >
> > > changes in v6:
> > > - rebase onto 5.17-rc1
> > > - handle sbi null-ptr differently
> > > - improve commit messages
> > > - use riscv,mmu as property name
> > >
> > > changes in v5:
> > > - move to use alternatives for runtime-patching
> >
> > Another choice is using the static key mechanism. Pros: no need to
> > code in asm, it's all in C.
> >
> > To support new arch features, I see other arches sometimes use
> > static keys and sometimes the alternative mechanism, so one question
> > here would be: which mechanism is better? Any guidance?
>
> For me it's also a bit of a learn-as-you-go experience, but I do see
> some advantages in using alternatives:

I hope old hands can give some suggestions here about static keys vs.
alternatives ;) When to use which mechanism, and why.

> - Static keys need the jump-label infrastructure, which the RISC-V
>   kernel only seems to provide on non-XIP kernels [0]

I think you found a bug here: I believe the alternatives mechanism
also doesn't work for XIP kernels. I will submit a patch for this
case.

> - the amount of asm here is somewhat minimal for the core no-cache
>   and io types (load immediate + shift)
>
> - using the static key mechanism still does incur more overhead for
>   the conditional

Do you mean the icache overhead due to the disabled branch of the
static key? That deserves a check.
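To make the comparison concrete, here is a minimal sketch of what the
static-key variant could look like. Everything below (the key name,
the helper, the exact NC encoding) is made up for illustration, not
taken from this series:

#include <linux/jump_label.h>

/* Hypothetical key, enabled at boot once svpbmt support is detected. */
DEFINE_STATIC_KEY_FALSE(svpbmt_key);

static inline unsigned long pte_pbmt_nc(unsigned long pteval)
{
	/*
	 * While the key is disabled this is a nop that falls through;
	 * enabling the key patches the nop into a jump to the branch
	 * body, which is emitted out of line and still takes up space
	 * in the image even when it is never taken.
	 */
	if (static_branch_unlikely(&svpbmt_key))
		pteval |= 1UL << 61;	/* svpbmt PBMT field, NC = 1 */
	return pteval;
}

With an alternative, the same site would instead be patched into a
plain li + slli sequence with no branch at all.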
> - and if we want to support the strange family-members like the D1
>   (and it seems we do want that), this would create more conditionals,
>   as we have to test for svpbmt, d1 and maybe future special cases,

Maybe implement the standard svpbmt via a static key and cope with the
D1 as an errata alternative.
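Roughly like the sketch below, reusing the svpbmt_key from above.
Again, the vendor id, errata id, config symbol and bit positions are
placeholders, not the real values from this series:

#include <linux/jump_label.h>
#include <asm/alternative-macros.h>

static inline unsigned long pte_pbmt_nc(unsigned long pteval)
{
	unsigned long d1_bits;

	/* The standard svpbmt case stays in plain C behind the key. */
	if (static_branch_unlikely(&svpbmt_key))
		return pteval | (1UL << 61);	/* PBMT = NC */

	/*
	 * The D1 keeps an errata alternative: by default this loads 0,
	 * and boot-time patching rewrites it into the vendor PMA bits
	 * (placeholder encoding below; old and new sequences have to
	 * patch to the same length in practice).
	 */
	asm(ALTERNATIVE("li %0, 0\n\tnop",
			"li %0, 1\n\tslli %0, %0, 60",
			THEAD_VENDOR_ID, ERRATA_THEAD_PBMT,
			CONFIG_ERRATA_THEAD_PBMT)
	    : "=r"(d1_bits));

	return pteval | d1_bits;
}

The separate pre-MMU patching stage would still be needed for the D1,
since its "normal" type is not 0.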