Re: [RFC PATCH v2 0/8] nvmem: add block device NVMEM provider

Hi Ulf,

On Tue, Mar 12, 2024 at 01:22:49PM +0100, Ulf Hansson wrote:
> On Tue, 5 Mar 2024 at 21:23, Daniel Golle <daniel@xxxxxxxxxxxxxx> wrote:
> >
> > On embedded devices using an eMMC it is common that one or more (hw/sw)
> > partitions on the eMMC are used to store MAC addresses and Wi-Fi
> > calibration EEPROM data.
> >
> > Implement an NVMEM provider backed by block devices, as the NVMEM
> > framework is typically used to have kernel drivers read and use
> > binary data from EEPROMs, efuses, flash memory (MTD), ...
> >
> > In order to be able to reference hardware partitions on an eMMC, add code
> > to bind each hardware partition to a specific firmware subnode.
> >
> > This series is meant to open the discussion on how exactly the device
> > tree schema for block devices and partitions may look, and even
> > whether using the block layer to back the NVMEM device is the way
> > to go at all -- to me it seemed to be a good solution because it
> > will be reusable e.g. for (normal, software GPT or MBR) partitions
> > of an NVMe SSD.
> >
> > This series was previously submitted on July 19th 2023[1] and the
> > basic idea has not changed much since.
> >
> > However, the recent introduction of bdev_file_open_by_dev() allows
> > getting rid of most uses of block layer internals, which supposedly
> > was the main objection raised by Christoph Hellwig back then.
> >
> > Most of the other comments received on the first RFC have also been
> > addressed. However, what remains is the use of class_interface
> > (lacking an alternative way to get notified about the addition or
> > removal of block devices from the system). As this has been
> > criticized in the past, I'm specifically interested in suggestions
> > on how to solve this differently -- ideally without having to
> > implement a whole new mechanism for in-kernel notifications of
> > appearing or disappearing block devices...
> >
> > And, just like in the case of MTD and UBI, I believe that acting as
> > an NVMEM provider *is* functionality which belongs to the block
> > layer itself and which, unlike e.g. filesystems, is inconvenient to
> > implement elsewhere.
> 
> I don't object to the above. However, to keep things scalable at the
> block device driver level, such as in the MMC subsystem, I think we
> should avoid having *any* knowledge about the binary format at these
> kinds of lower levels.
> 
> Even if most of the NVMEM format is managed elsewhere, the support for
> NVMEM partitions seems to be dealt with in the MMC subsystem too.

In an earlier iteration of this RFC it was requested to make NVMEM
support opt-in (rather than opt-out, as it is for mtdblock and ubiblock,
which already have their own NVMEM provider implementations).
Hence the MMC subsystem needs at least a small change to opt in to
NVMEM support, together with making sure that MMC devices get their
fwnode assigned.
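
To make that a bit more concrete, the change would amount to little
more than the sketch below. This is only an illustration, not the
actual patch: the flag name GENHD_FL_NVMEM and the subnode lookup are
placeholders for whatever the final binding ends up looking like.

/*
 * Illustrative sketch only (not the actual patch): the two things the
 * MMC block driver would have to do are (a) opt the disk in to NVMEM
 * support and (b) assign the firmware node describing this (hw)
 * partition to the disk device so nvmem cells can be resolved against
 * it. GENHD_FL_NVMEM is a placeholder name for the opt-in flag.
 */
#include <linux/blkdev.h>
#include <linux/device.h>
#include <linux/mmc/card.h>
#include <linux/mmc/host.h>
#include <linux/of.h>

static void mmc_blk_setup_nvmem(struct mmc_card *card, struct gendisk *disk,
				const char *subnode_name)
{
	struct device_node *np;

	/* Opt-in: without this the block NVMEM provider ignores the disk. */
	disk->flags |= GENHD_FL_NVMEM;

	/* Look up the firmware subnode describing this (hw) partition. */
	np = of_get_child_by_name(mmc_dev(card->host)->of_node, subnode_name);
	if (!np)
		return;

	/* Attach it to the disk device; the reference is kept for its lifetime. */
	device_set_node(disk_to_dev(disk), of_fwnode_handle(np));
}

Where exactly those subnodes should live in the device tree is part of
what I'd like to settle in this discussion.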

> Why can't NVMEM partitions be managed the usual way via the MBR/GPT?

Absolutely, maybe my wording was not clear, but that's exactly what
I'm suggesting here. This patchset adds neither partition parsers nor
any knowledge about binary formats.

Or did I misunderstand your comment?
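
Just to illustrate where knowledge about the stored data does live:
consumers keep using the plain NVMEM cell API and never see whether a
cell sits on an eMMC partition, an MTD device or an efuse. A minimal,
purely hypothetical consumer sketch:

/*
 * Hypothetical consumer: read a MAC address from an nvmem cell named
 * "mac-address" (referenced via nvmem-cells/nvmem-cell-names in DT).
 * The backing storage is invisible at this level.
 */
#include <linux/device.h>
#include <linux/err.h>
#include <linux/if_ether.h>
#include <linux/nvmem-consumer.h>
#include <linux/slab.h>
#include <linux/string.h>

static int example_read_mac(struct device *dev, u8 mac[ETH_ALEN])
{
	struct nvmem_cell *cell;
	size_t len;
	void *buf;

	cell = nvmem_cell_get(dev, "mac-address");
	if (IS_ERR(cell))
		return PTR_ERR(cell);

	buf = nvmem_cell_read(cell, &len);
	nvmem_cell_put(cell);
	if (IS_ERR(buf))
		return PTR_ERR(buf);

	if (len < ETH_ALEN) {
		kfree(buf);
		return -EINVAL;
	}

	memcpy(mac, buf, ETH_ALEN);
	kfree(buf);
	return 0;
}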
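
And regarding the class_interface question from the cover letter: for
reference, the notification part boils down to roughly the following.
A minimal sketch, assuming it is built in next to the block layer where
block_class is reachable; the add_dev/remove_dev prototypes shown match
recent kernels, older ones take an extra struct class_interface *
argument.

/*
 * Minimal sketch of the class_interface approach: get a callback
 * whenever a device is added to or removed from block_class. Assumes
 * this is built in alongside the block layer.
 */
#include <linux/blkdev.h>
#include <linux/device.h>
#include <linux/init.h>

static int blk_nvmem_add_dev(struct device *dev)
{
	/* Called for every block device, including partitions. */
	dev_info(dev, "block device added\n");
	return 0;
}

static void blk_nvmem_remove_dev(struct device *dev)
{
	dev_info(dev, "block device removed\n");
}

static struct class_interface blk_nvmem_iface = {
	.class      = &block_class,
	.add_dev    = blk_nvmem_add_dev,
	.remove_dev = blk_nvmem_remove_dev,
};

static int __init blk_nvmem_init(void)
{
	/* Registration also replays add_dev() for already present devices. */
	return class_interface_register(&blk_nvmem_iface);
}
device_initcall(blk_nvmem_init);

What I'm missing is a lighter-weight way to get the same add/remove
notifications without poking at block_class like this.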



