On Wed, Jan 3, 2024 at 8:02 PM Dmitry Baryshkov
<dmitry.baryshkov@xxxxxxxxxx> wrote:
>
> Hi Rob,
>
> Resurrecting an old thread, but I think it's better this way since it
> keeps the context.
>
> Added the driver core maintainers, see discussion points below.
>
> On Wed, 11 Oct 2023 at 21:44, Rob Herring <robh@xxxxxxxxxx> wrote:
> >
> > On Wed, Oct 11, 2023 at 1:20 PM Dmitry Baryshkov
> > <dmitry.baryshkov@xxxxxxxxxx> wrote:
> > >
> > > On Wed, 11 Oct 2023 at 18:49, Rob Herring <robh@xxxxxxxxxx> wrote:
> > > >
> > > > On Sun, Aug 27, 2023 at 02:50:18PM +0300, Dmitry Baryshkov wrote:
> > > > > Add a simple driver that handles scaling of L2 frequency and
> > > > > voltages.
> > > > >
> > > > > Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@xxxxxxxxxx>
> > > > > ---
> > > >
> > > > [...]
> > > >
> > > > > +static const struct of_device_id krait_l2_match_table[] = {
> > > > > +	{ .compatible = "qcom,krait-l2-cache" },
> > > > > +	{}
> > > > > +};
> > > > > +MODULE_DEVICE_TABLE(of, krait_l2_match_table);
> > > > > +
> > > > > +static struct platform_driver krait_l2_driver = {
> > > > > +	.probe = krait_l2_probe,
> > > > > +	.remove = krait_l2_remove,
> > > > > +	.driver = {
> > > > > +		.name = "qcom-krait-l2",
> > > > > +		.of_match_table = krait_l2_match_table,
> > > > > +		.sync_state = icc_sync_state,
> > > > > +	},
> > > > > +};
> > > >
> > > > As I mentioned in the other thread, cache devices already have a
> > > > struct device. Specifically, they have a struct device (no
> > > > subclass) on the cpu_subsys bus type. So there should be no need
> > > > for a platform device and a second struct device.
> > > >
> > > > See drivers/acpi/processor_driver.c for an example. Or grep any
> > > > use of "cpu_subsys".
> > >
> > > Most likely you mean drivers/base/cacheinfo.c. I saw this code, but
> > > I don't think it is a good fit here. The cacheinfo devices provide
> > > information only; they are not tied to DT nodes in any way.
> >
> > They are completely tied to DT nodes beyond L1.
> >
> > > cpu_subsys doesn't provide a way to match drivers with subsys
> > > devices in the non-ACPI case, etc.
> >
> > That's a 2 line addition to add DT support.
> >
> > > Moreover, the whole cacheinfo subsys is nonexistent on arm32: there
> > > is no cacheinfo implementation there, thanks to the overall variety
> > > of architectures.
> >
> > Hmm, well, I don't think it would be too hard to add, but I won't ask
> > you to do that. All the info comes from DT, or can come from DT, so
> > it should be just a matter of arm32 calling the cacheinfo init.
> >
> > > That said, I don't think cacheinfo is a good fit for the case of
> > > scaling the L2 cache.
> >
> > I still disagree. It's not really cacheinfo. That is what creates the
> > devices, but it's the cpu_subsys bus type. Why do you care that it is
> > the platform bus vs. cpu_subsys?
>
> I finally found some time to look at cacheinfo. I added support for
> arm32 cacheinfo (which is fine) and tried using the cacheinfo devices
> for the L2 driver mapping (the RFC has been posted at [1]).
> But when I actually tried using it for the L2 cache driver, I
> stumbled upon several issues which I'd like to discuss before rushing
> to code.
>
> First, you supposed that cacheinfo devices land on the cpu_subsys
> bus. However, only actual CPU devices end up on cpu_subsys. CPU cache
> devices are created using cpu_device_create(), but despite its name
> they don't go to cpu_subsys.
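
For reference, __cpu_device_create() in drivers/base/cpu.c does roughly
the following (trimmed to the relevant part as I read it; error
handling omitted). dev->bus is never assigned, so these devices can't
be matched against cpu_subsys drivers at all:

	dev = kzalloc(sizeof(*dev), GFP_KERNEL);

	device_initialize(dev);
	dev->parent = parent;
	dev->groups = groups;
	dev_set_drvdata(dev, drvdata);

	/* note: dev->bus is never set here */
	retval = device_add(dev);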
>
> Second, and more importantly, these devices are created without any
> attempt to share them. So on a 4-core system I have 4 distinct devices
> for the L2 cache even though it is shared between all cores.

I wonder if that's because things are created in CPU hotplug callbacks
and there might be ordering problems if the cache devices are created
in another code path.

Also, I think on some PowerPC systems CPUs can move to a different L2
(or L3?) cache when hot-unplugged and then plugged back in, so hotplug
rescans everything. I don't think that would be a problem with this,
and PowerPC does its own scanning anyway. Just wanted you to be aware
of the issue.

> root@qcom-armv7a:~# stat -c "%N %i" /sys/bus/cpu/devices/cpu*/cache/index2/level
> /sys/bus/cpu/devices/cpu0/cache/index2/level 15537
> /sys/bus/cpu/devices/cpu1/cache/index2/level 15560
> /sys/bus/cpu/devices/cpu2/cache/index2/level 15583
> /sys/bus/cpu/devices/cpu3/cache/index2/level 15606
>
> I think it makes sense to rework cacheinfo to create actual CPU cache
> devices (maybe on a separate cache bus). In my case it should become
> something like:
>
> cpu0-2-unified (shared between all 4 cores)
> cpu0-1-icache
> cpu0-1-dcache
> cpu1-1-icache
> cpu1-1-dcache
> ...
>
> I'm not sure it's worth supporting more than one instance of the same
> kind per level (e.g. I think the current cacheinfo has nothing against
> having two I-cache or two D-cache devices).

Probably a safe assumption. Though I think old XScale CPUs had a 1K
mini I-cache in addition to the main L1 I-cache. I guess that's really
an L0 cache, though.

> The cpuN/cache/indexM entries should become symlinks to those cache
> devices.
>
> What do you think?

Seems like a good improvement to me, provided that changing the current
layout doesn't cause an ABI issue.

> [1] https://lore.kernel.org/linux-arm-msm/CAA8EJppCRzknaujKFyLa_i7x4UnX31YFSyjtux+zJ0harixrbA@xxxxxxxxxxxxxx
>
> > On a separate issue, I'd propose you move this to drivers/cache/
> > instead of the dumping ground that is drivers/soc/. It's nothing more
> > than a location to collect cache-related drivers ATM, because we seem
> > to be accumulating more of them.
>
> I thought about reusing drivers/devfreq, as it already has the
> Mediatek CCI driver.

Anywhere except drivers/misc/ would be an improvement over
drivers/soc/. devfreq is more tied to interconnects than caches,
though.

Rob
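
P.S. To be concrete about the "2 line addition" for DT matching
mentioned above: cpu_subsys_match() in drivers/base/cpu.c currently
only tries ACPI matching, so the change would be roughly the following
(completely untested sketch, the final form may differ):

static int cpu_subsys_match(struct device *dev, struct device_driver *drv)
{
	/* ACPI style match is the only one that may succeed. */
	if (acpi_driver_match_device(dev, drv))
		return 1;

	/* untested: fall back to OF-based matching for DT platforms */
	if (of_driver_match_device(dev, drv))
		return 1;

	return 0;
}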