On Fri, Apr 12, 2024 at 10:03:14AM -0500, Jeremy Linton wrote:
> Hi,
> On 4/10/24 21:28, yunhui cui wrote:
> > Hi Jeremy,
> >
> > On Wed, Apr 10, 2024 at 9:30 AM Jeremy Linton <jeremy.linton@xxxxxxx> wrote:
> > >
> > > Hi,
> > > First, thanks for working on this.
> > >
> > > On 4/7/24 07:38, Yunhui Cui wrote:
> > > > When the type and level information of this_leaf cannot be obtained
> > > > from arch, cacheinfo is completely filled in with the content of PPTT.
> > >
> > > I started reviewing this, based on what I understood to be the need to
> > > generate the topology entirely from the PPTT. But it was raising more
> > > questions than answers, because the PPTT is far too flexible in its
> > > ability to represent cache hierarchies that aren't logically useful --
> > > for example, multiple I or D caches at the same level, or I or D caches
> > > higher in the topology than unified ones.
> > >
> > > At least for arm64 (and I think others) there is an understood
> > > simplification that there will be N levels of split I/D caches and M
> > > unified levels. From that, the number of cache leaves is computed
> > > and allocated, and then we go in and largely skip PPTT cache nodes which
> > > don't make sense in view of a generic topology like that (see the
> > > comment in cacheinfo.c:506).
> > >
> > > Both of those pieces of information are available in
> > > acpi_get_cache_info(). The missing part is marking those N levels of I/D
> > > cache as such.
> > >
> > > Looking at this code, I don't really see all the error/allocation
> > > logic/etc. that assures the cache leaf indexing is allocated correctly,
> > > which worries me, although admittedly I could be missing something
> > > important.
> > >
> > > In summary, did you consider just allocating matching I/D caches from
> > > the number of split levels in acpi_get_cache_info(), then removing or
> > > invalidating the ones that don't have matching PPTT entries after
> > > running cache_setup_acpi()? That's a fairly trivial change AFAIK if the
> > > decision is based on the lack of a cache_id, or just changing the
> > > this_leaf->type = CACHE_TYPE_UNIFIED assignment to the correct type and
> > > assuring leftover CACHE_TYPE_NOCACHE entries are removed. I think much
> > > of the "significant work" is likely fixed for that to work. Just
> > > tweaking detect_cache_level()/get_cache_type() to set
> > > CACHE_TYPE_SEPARATE if the level is less than the acpi_get_cache_info()
> > > split_levels value probably also does the majority of what you need,
> > > outside of having unequal counts of I and D caches.
> > >
> > > There are probably other choices as well, thoughts?
> > >
> >
> > First, I think the current state of the ACPI PPTT specification meets
> > the requirements and is logically complete. Otherwise, the PPTT
> > specification needs to be updated, right?
>
> The specification is capable of representing all the actual physical
> topologies I'm aware of; it's also capable of representing quite a number of
> topologies that don't make sense to (can't be) build and are largely
> nonsense. There are some further details around replacement and allocation
> which could be represented, but were AFAIK not considered worth the effort
> because it makes the representation far more complex.
>
> So, except for minor clarifications, I'm not aware of a need to update it.
>
> > Our discussion is best focused on the existing and usual case, even on
> > ARM64, which is as you say "N-level separated I/D cache, M-level
> > unified".
>
> I understood this was for RISC-V when I reviewed it, but I'm fairly certain
> the general cache topologies of RISC-V machines don't vary in their core
> concepts and layout much from the wide variety of possibilities available
> elsewhere. So it's roughly the same problem: you will likely have one or two
> layers of split I/D caches and then a number of unified private or shared
> caches above that.
>
> >
> > And then, the problem we have now is that the RISC-V architecture does
> > not have a set of registers to describe the cache level and type like
> > ARM64 does, so we need to fully trust the contents of the PPTT table.
> > Please check the patch:
> > https://patchwork.kernel.org/project/linux-riscv/patch/20240407123829.36474-2-cuiyunhui@xxxxxxxxxxxxx/
>
> I don't think "fully trust" is something you really want, or is helpful, as
> I suggested above.
>
> I suspect all that is needed is the construction of the missing I/D layers,
> as the existing code will create the unified layers above as it does today.
> On arm64 the unified system-level caches aren't usually described by the
> core cache registers either, which is why there is that existing bit of code
> to change unknown caches to unified. But it also turns out the PPTT walking
> code is capable of providing the information to create the I/D layers as
> well, if they are matched (e.g. there exists an I cache for every D cache at
> a certain level).
>
> So, more clarity about the kinds of topologies you expect is helpful to
> understand where the current code is failing.
>
> AKA, as I mentioned, I'm fairly certain that it's possible to construct the
> entire cache topology from the PPTT by changing get_cache_type() to return
> "CACHE_TYPE_SEPARATE" when the level is less than the value returned as
> "split_levels" from acpi_get_cache_info(). That tends to ignore some edge
> cases, but I might put them in the "don't do that" bucket.
>

I agree with Jeremy's analysis. Can you check if something like the below
will address your issue? It is not compile tested, but you get an idea of
what we are trying to do here.

Regards,
Sudeep

-->8

diff --git i/arch/riscv/kernel/cacheinfo.c w/arch/riscv/kernel/cacheinfo.c
index 09e9b88110d1..92ab73ed5234 100644
--- i/arch/riscv/kernel/cacheinfo.c
+++ w/arch/riscv/kernel/cacheinfo.c
@@ -79,6 +79,27 @@ int populate_cache_leaves(unsigned int cpu)
 	struct device_node *prev = NULL;
 	int levels = 1, level = 1;
 
+	if (!acpi_disabled) {
+		int ret, fw_levels, split_levels;
+
+		ret = acpi_get_cache_info(cpu, &fw_levels, &split_levels);
+		if (ret)
+			return ret;
+
+		/* must be set, so we can drop num_leaves assignment below */
+		this_cpu_ci->num_leaves = fw_levels + split_levels;
+
+		for (idx = 0; level <= this_cpu_ci->num_levels &&
+		     idx < this_cpu_ci->num_leaves; idx++, level++) {
+			if (level <= split_levels) {
+				ci_leaf_init(this_leaf++, CACHE_TYPE_DATA, level);
+				ci_leaf_init(this_leaf++, CACHE_TYPE_INST, level);
+			} else {
+				ci_leaf_init(this_leaf++, CACHE_TYPE_UNIFIED, level);
+			}
+		}
+	}
+
 	if (of_property_read_bool(np, "cache-size"))
 		ci_leaf_init(this_leaf++, np, CACHE_TYPE_UNIFIED, level);
 	if (of_property_read_bool(np, "i-cache-size"))
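
For illustration only, here is a minimal standalone sketch (not kernel code)
of the leaf layout the hunk above is aiming for. The topology values are
hypothetical: fw_levels = 3 cache levels reported by acpi_get_cache_info(),
of which split_levels = 1 is a split I/D level, giving
num_leaves = fw_levels + split_levels = 4 (L1I, L1D, L2 unified, L3 unified).
Unlike the hunk, the index here is simply bumped once per printed leaf; the
real code tracks this_leaf/idx against num_leaves/num_levels.

/* leaf_layout.c: sketch of the split vs. unified leaf layout */
#include <stdio.h>

int main(void)
{
	int fw_levels = 3, split_levels = 1;       /* hypothetical topology */
	int num_leaves = fw_levels + split_levels; /* one extra leaf per split level */
	int idx = 0, level;

	for (level = 1; level <= fw_levels && idx < num_leaves; level++) {
		if (level <= split_levels) {
			/* split level: one data leaf and one instruction leaf */
			printf("leaf %d: L%d data\n", idx++, level);
			printf("leaf %d: L%d instruction\n", idx++, level);
		} else {
			/* every level above the split levels is unified */
			printf("leaf %d: L%d unified\n", idx++, level);
		}
	}
	return 0;
}

Running it prints leaf 0/1 as L1 data/instruction and leaves 2/3 as unified
L2/L3, which is the ordering the loop in the patch above would populate.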