On Thu, Dec 19, 2013 at 02:01:26PM +0000, Arnd Bergmann wrote:
> On Thursday 19 December 2013, Graeme Gregory wrote:
> > Hopefully the documentation of what a real armv8 server architecture
> > will look like will come in the new year. Things like regulators and
> > clocks I do not have answers to yet; as obviously in the Intel world
> > these things are hidden from view, I do not know what the plan is for
> > armv8 silicon/motherboards.
>
> The clocks and regulators (and a handful of other subsystems) are the
> key thing to work out IMHO. For all I know these are either completely
> static (turned on by firmware at boot time) on current servers, or they
> are done in a way that each device can manage itself using power states
> in the PCI configuration space. If you have on-chip devices that do not
> look like PCI devices to software, or that interact with other on-chip
> controllers at run-time as on typical arm32 embedded SoCs, you are in
> trouble to start with, and there are two possible ways to deal with
> this in theory:
>
> a) Hide all the register-level setup behind AML code and make Linux
>    only aware of the possible device states that it can ask for, which
>    would make this look similar to today's servers.
>
> b) Model all the SoC-internal registers as devices and write
>    OS-specific, SoC-specific device drivers for them, using
>    yet-to-be-defined ACPI extensions to describe the interactions
>    between devices. This would be modeled along the lines of what we do
>    today with DT, and what Intel wants to do on their embedded SoCs
>    with ACPI in the future.
>
> I think anybody would agree that we should not try to mix the two
> models in a single system, as that would create an endless source of
> bugs when you have two drivers fighting over the same hardware. There
> is also a rough consensus that we really only want a) and not b) on
> ARM, but there have been indications that people are already working
> on b), which I think is a bit worrying. I would argue that anyone who
> wants b) on ARM should not use ACPI at all but rather describe the
> hardware using DT as we do today. This could possibly change if
> someone shows that a) is actually not a realistic model at all, but I
> also think that doing b) properly will depend on doing a major
> ACPI-6.0 or ACPI-7.0 release to actually specify a standard model for
> the extra subsystems.

I'm inclined to say that (ARM) Linux should only support what is
captured in an ACPI spec, but I'm not familiar enough with this to
assess its feasibility.

Choosing between a) and b) depends on where you place the maintenance
burden. Point a) pretty much leaves this with the hardware vendors. They
get a distro with a kernel supporting ACPI-x and the (PCI) device
drivers they need, but the rest of the SoC-specific functionality is
handled by firmware or AML. It is their responsibility to work on the
firmware and AML until they get it right, without changing the kernel
(well, unless they find genuine bugs in the code).

Point b) is simpler for kernel developers, as we know how to debug and
maintain kernel code, but I agree with you that we should rather use FDT
here than duplicate the effort just for the sake of ACPI.

I'm waiting for OS distros and vendors to clarify, but I think RH are
mainly looking at a). My (mis)understanding is based on pro-ACPI
arguments I have heard, like being able to use newer hardware with older
kernels (and b) would always require new SoC drivers and bindings).
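To make the a)/b) difference more concrete, here is a rough sketch of
what b) would mean on the Linux side: an OS-specific, SoC-specific
driver for an on-chip clock controller, bound via an ACPI _HID. All the
identifiers below (the "EXMP0001" _HID, the register offset, the clock
name/rate, the consumer device name) are invented purely for
illustration. Under a), none of this driver would exist; the equivalent
register programming would sit behind AML methods and Linux would only
ever request device power states.

/*
 * Hypothetical option-b) driver: SoC-specific clock controller exposed
 * to the OS as an ACPI device. All names and registers are made up.
 */
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/acpi.h>
#include <linux/clk-provider.h>
#include <linux/clkdev.h>
#include <linux/io.h>

#define EXAMPLE_CLK_EN		0x0	/* hypothetical gate register */

static int example_clk_probe(struct platform_device *pdev)
{
	struct resource *res;
	void __iomem *base;
	struct clk *clk;
	int ret;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	base = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(base))
		return PTR_ERR(base);

	/* A real controller would expose many clocks; one is enough here. */
	clk = clk_register_fixed_rate(&pdev->dev, "example_uart_clk", NULL,
				      0, 100000000);
	if (IS_ERR(clk))
		return PTR_ERR(clk);

	/*
	 * ACPI has no standard equivalent of DT's clocks = <...> phandle,
	 * so the consumer link has to be hard-coded via clkdev (or done
	 * with invented ACPI extensions) -- exactly the gap noted above.
	 */
	ret = clk_register_clkdev(clk, NULL, "example-uart");
	if (ret)
		return ret;

	writel(1, base + EXAMPLE_CLK_EN);	/* ungate the block (made up) */
	return 0;
}

static const struct acpi_device_id example_clk_acpi_ids[] = {
	{ "EXMP0001", 0 },	/* hypothetical vendor _HID */
	{ }
};
MODULE_DEVICE_TABLE(acpi, example_clk_acpi_ids);

static struct platform_driver example_clk_driver = {
	.probe	= example_clk_probe,
	.driver	= {
		.name = "example-soc-clk",
		.acpi_match_table = ACPI_PTR(example_clk_acpi_ids),
	},
};
module_platform_driver(example_clk_driver);

MODULE_LICENSE("GPL");

Multiply that by every clock, regulator, pin controller and so on in a
SoC and you get the per-SoC kernel churn (and binding definitions) that
a) avoids by keeping it all behind AML.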
--
Catalin