Re: [PATCH 2/2] pwm: sifive: Add a driver for SiFive SoC PWM

On Thu, Jan 17, 2019 at 09:19:56AM +0100, Uwe Kleine-König wrote:
> Hello Paul,
> 
> On Wed, Jan 16, 2019 at 11:29:35AM -0800, Paul Walmsley wrote:
> > COMPILE_TEST made slightly more sense before the broad availability of 
> > open-source soft cores, SoC integration, and IP; and before powerful, 
> > inexpensive FPGAs and SoCs with FPGA fabrics were common.
> > 
> > Even back then, COMPILE_TEST was starting to look questionable.  IP blocks 
> > from CPU- and SoC vendor-independent libraries, like DesignWare and the 
> > Cadence IP libraries, were integrated on SoCs across varying CPU 
> > architectures.  (Fortunately, looking at the tree, subsystem maintainers 
> > with DesignWare drivers seem to have largely avoided adding architecture 
> > or SoC-specific Kconfig restrictions there - and thus have also avoided 
> > the need for COMPILE_TEST.)  If an unnecessary architecture Kconfig
> > dependency was added for a leaf driver, that Kconfig line would either
> > need to be patched out by hand or, if a COMPILE_TEST alternative was
> > present, COMPILE_TEST would need to be enabled.
> > 
> > This was less of a problem then.  There were very few FPGA Linux users, 
> > and they were mostly working on internal proprietary projects. FPGAs that 
> > could run Linux at a reasonable speed, including MMUs and FPUs, were 
> > expensive or were missing good tool support.  So most FPGA Linux 
> > development was restricted to ASIC vendors - the Samsungs, Qualcomms, 
> > NVIDIAs of the world - for prototyping, and those FPGA platforms never 
> > made it outside those companies.
> > 
> > The situation is different now.  The major FPGA vendors have inexpensive 
> > FPGA SoCs with full-featured CPU hard macros.  The development boards can 
> > be quite affordable (< USD 300 for the Xilinx Ultra96).  There are also 
> > now open-source SoC HDL implementations - including MMUs, FPUs, and 
> > peripherals like PWM and UART - for the conventional FPGA market.  These 
> > can run at mid-1990's clock rates - slow by modern standards but still 
> > quite usable.
> 
> In my eyes it's better to make a driver impossible to enable out of
> the box than to offer enabling it when it most probably won't be used.

This might sound like a good idea in general, but it's also something
that is pretty much impossible to do. There's just no heuristic that
would allow you to determine with a sufficient degree of probability
that a driver won't be needed. Most of the PCI drivers that are
installed on my workstation aren't used, and I would venture to say
there are a lot of drivers that aren't used in, say, 95% of
installations. That doesn't mean that we should somehow make these
drivers difficult to install, or require someone to patch the kernel
to build them.

> People who configure their own kernel and distribution kernel
> maintainers will thank you. (Well ok, they won't notice, so they won't
> actually tell you, but anyhow.) I'm a member of the Debian kernel team
> and I see how many config items there are that are not explicitly
> handled for the various different kernel images. If they were restricted
> using COMPILE_TEST so that they could only be enabled on machines where
> they are known (or at least likely) to make sense, that would help. Also
> when I do
> a kernel version bump for a machine with a tailored kernel running (say)
> on an ARM/i.MX SoC, I could more easily be careful about the relevant
> changes when doing oldconfig if I were not asked about a whole bunch of
> drivers that are sure to be irrelevant.

I think the important thing here is that the driver is "default n". If
you're a developer and build your own kernels, you're pretty likely to
know already if a new kernel that you're installing contains that new
driver that you've been working on or waiting for. In that case you can
easily just enable it manually rather than go through make oldconfig and
wait for it to show up.
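
Just to make the pattern under discussion concrete, a guarded entry
would look roughly like the sketch below (the symbol names are made up
for illustration; this is not the actual pwm-sifive Kconfig entry):

	# hypothetical example, not a real Kconfig entry
	config PWM_FOO
		tristate "Foo SoC PWM support"
		depends on ARCH_FOO || COMPILE_TEST
		depends on HAS_IOMEM
		help
		  PWM framework driver for the PWM controller found on
		  Foo SoCs. If unsure, say N.

With a dependency line like "depends on ARCH_FOO || COMPILE_TEST", only
people building for ARCH_FOO or with COMPILE_TEST enabled get asked
about the option at all, and the answer defaults to N either way.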

As for distribution kernel maintainers, are they really still building a
lot of different kernels? If so, it sounds like most of the multi-
platform efforts have been in vain. I would imagine that, as a
distribution kernel team, you'd want to carefully evaluate for every
driver whether or not to include it. I would also imagine that you'd
want to provide your users with the most flexible kernel possible, so
that if they do have a system with an FPGA, you'd make it possible for
them to use pwm-sifive if they choose to synthesize it.

If you really want to create custom builds tailored to a single chip or
board, it's going to be a fair amount of work anyway. I've occasionally
done that in the past and usually just resorted to starting from an
allnoconfig configuration and then working my way up from there.
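
Roughly, and assuming the board-specific options are kept in a config
fragment (the fragment name below is made up), that boils down to
something like:

	$ make allnoconfig
	$ scripts/kconfig/merge_config.sh -m .config board-fragment.config
	$ make olddefconfig

and then iterating in menuconfig until everything the board needs is
actually enabled.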

As for daily work as a developer, when I transition between kernel
versions, or from one linux-next to another, I typically end up doing
the manual equivalent of:

	$ yes '' | make oldconfig

if I know that I'm not interested in any new features. But I also often
just look at what's new because it's interesting to see what's been
going on elsewhere.
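
(If I remember the semantics correctly, plain

	$ make olddefconfig

does much the same thing in one step, taking the default for every new
symbol without prompting.)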

Thierry
