On Fri, Mar 29, 2019 at 9:27 AM Borislav Petkov <bp@xxxxxxxxx> wrote:
>
> On Fri, Mar 29, 2019 at 09:11:24AM -0500, Rob Herring wrote:
> > I honestly don't understand what the issue with EDAC is here.
>
> The EDAC core supports only one driver and if you need to load more,
> you need to dance around that.
>
> Also, if those drivers need to talk amongst each other, then they need
> to build something ad-hoc so that they can.
>
> And the other architectures can very well do one driver per platform -
> only ARM wants to do this special thing because DT said so. Or
> whatever.
>
> > Highbank is separate drivers for L2 ECC (PL310) and DDR. Both are
> > used on highbank.
>
> That's because your L2 driver does allocate an edac_device
> (edac_device_alloc_ctl_info()) and the DDR one an edac_mc
> (edac_mc_add_mc_with_groups()).
>
> For example, altera_edac does edac_device_alloc_ctl_info() for each IP
> block just fine. So a single driver *can* work.
>
> > Only the DDR driver is used on midway. (I think we never got around
> > to how to report A15 L2 ECC errors within Linux.)
> >
> > In any case, it's all irrelevant to the DT binding. We don't design
> > bindings around what some particular OS wants.
>
> And just because DT dictates one driver per IP block, I'm not going to
> redesign EDAC to fit that scheme. You or someone else who feels
> strongly about it is more than welcome to do so, of course. And then
> maintain it too.

DT dictates aligning with what the h/w looks like, which has little to
do with OS driver design.

I never said you should change EDAC, and I outlined how things should
be handled if it is one driver. DT and OS subsystems are independent
things: I can't tell you how to design the subsystem, and you can't
dictate DT design (based on EDAC's design).

Rob
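
For reference, a minimal sketch of the single-driver pattern Boris
points to above: one platform driver registering a separate edac_device
per IP block, loosely modeled on altera_edac. The foo_* names and the
"l2"/"ocram" blocks are hypothetical; interrupt wiring, private state
and full unwind paths are omitted for brevity.

#include <linux/module.h>
#include <linux/platform_device.h>
#include "edac_module.h"	/* in-tree EDAC core header, drivers/edac/ */

static struct edac_device_ctl_info *
foo_register_block(struct platform_device *pdev, char *name)
{
	struct edac_device_ctl_info *dci;

	/* One edac_device per IP block, each with its own device index. */
	dci = edac_device_alloc_ctl_info(0, name, 1, name, 1, 0, NULL, 0,
					 edac_device_alloc_index());
	if (!dci)
		return NULL;

	dci->dev = &pdev->dev;
	dci->mod_name = "foo_edac";
	dci->ctl_name = name;
	dci->dev_name = dev_name(&pdev->dev);

	if (edac_device_add_device(dci)) {
		edac_device_free_ctl_info(dci);
		return NULL;
	}
	return dci;
}

static int foo_edac_probe(struct platform_device *pdev)
{
	/* Same driver, two IP blocks: e.g. an L2 cache and an OCRAM. */
	if (!foo_register_block(pdev, "l2"))
		return -ENOMEM;

	if (!foo_register_block(pdev, "ocram"))
		return -ENOMEM;	/* a real driver would unregister "l2" here */

	return 0;
}

edac_device_alloc_index() is what keeps the instances distinct, so
nothing stops a single driver from running the allocation path once per
block, which is essentially what altera_edac does.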