Re: [RFC PATCH 6/6] block: implement NVMEM provider

On Fri, Jul 21, 2023 at 12:30:10PM +0100, Daniel Golle wrote:
> On Fri, Jul 21, 2023 at 01:11:40PM +0200, Greg Kroah-Hartman wrote:
> > On Fri, Jul 21, 2023 at 11:40:51AM +0100, Daniel Golle wrote:
> > > On Thu, Jul 20, 2023 at 11:31:06PM -0700, Christoph Hellwig wrote:
> > > > On Thu, Jul 20, 2023 at 05:02:32PM +0100, Daniel Golle wrote:
> > > > > On Thu, Jul 20, 2023 at 12:04:43AM -0700, Christoph Hellwig wrote:
> > > > > > The layering here is exactly the wrong way around.  This block device
> > > > > > as nvmem provider has no business sitting in the block layer and being
> > > > > > keyed off the gendisk registration.  Instead you should create a new
> > > > > > nvmem backend that opens the block device as needed if it fits your
> > > > > > OF description without any changes to the core block layer.
> > > > > > 
> > > > > 
> > > > > Ok. I will use a class_interface instead.
> > > > 
> > > > I'm not sure a class_interface makes much sense here.  Why does the
> > > > block layer even need to know about you using a device as an nvmem provider?
> > > 
> > > It doesn't. But it has to notify the nvmem providing driver about the
> > > addition of new block devices. This is what I'm using class_interface
> > > for, simply to hook into .add_dev of the block_class.
> > 
> > Why is this single type of block device so special that it requires this,
> > yet all others do not?  Encoding it into the block layer feels like a huge
> > layering violation to me; why not do it the way all other block drivers
> > do it instead?
> 
> I was thinking of this as a generic solution in no way tied to one specific
> type of block device. *Any* internal block device which can be used to
> boot from should also be usable as an NVMEM provider, imho.

Define "internal" :)

And that's all up to the boot process in userspace; the kernel doesn't
care about this.
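
For readers following along: the class_interface / .add_dev hook on
block_class that Daniel describes above would look roughly like the sketch
below. The names are hypothetical, the callback signatures are those of
kernels around v6.4, and block_class is not exported to modules, which
comes up again further down.

#include <linux/blkdev.h>
#include <linux/device.h>
#include <linux/init.h>

/* hypothetical consumer of block_class device notifications */
static int blk_nvmem_add_dev(struct device *dev)
{
	/*
	 * block_class enumerates whole disks and partitions alike, so a
	 * real implementation would have to filter out partitions here
	 * before treating the device as a gendisk and registering an
	 * NVMEM provider backed by it.
	 */
	dev_info(dev, "blk-nvmem: new block device\n");
	return 0;
}

static void blk_nvmem_remove_dev(struct device *dev)
{
	/* tear down whatever was registered for this device */
}

static struct class_interface blk_nvmem_interface = {
	.class      = &block_class,
	.add_dev    = blk_nvmem_add_dev,
	.remove_dev = blk_nvmem_remove_dev,
};

static int __init blk_nvmem_init(void)
{
	/*
	 * class_interface_register() replays already-present devices
	 * through .add_dev, which is the "coldplug" part of the question.
	 */
	return class_interface_register(&blk_nvmem_interface);
}
device_initcall(blk_nvmem_init);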

> > > > As far as I can tell your provider should layer entirely above the
> > > > block layer and not have to be integrated with it.
> > > 
> > > My approach using class_interface doesn't require any changes to be
> > > made to existing block code. However, it does use block_class. If
> > > you see any other good option to implement matching and usage of
> > > block devices by in-kernel users, please let me know.
> > 
> > Do not use block_class, again, that should only be for the block core to
> > touch.  Individual block drivers should never be poking around in it.
> 
> Do I have any other options to coldplug and be notified about newly
> added block devices, so the block-device-consuming driver can know
> about them?

What other options do you need?

> This is not a rhetorical question; I've been looking for other ways
> and haven't found anything better than class_find_device or
> class_interface.

Never use that, sorry, that's not for a driver to touch.

> Using those also prevents blk-nvmem from being built
> as a module, so I'd really like to find alternatives.
> E.g. for MTD we got struct mtd_notifier and register_mtd_user().

Your storage/hardware driver should be the thing that "finds block
devices" and registers them with the block class core, right?  After
that, what matters?
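
For comparison, the MTD mechanism Daniel refers to above, struct
mtd_notifier plus register_mtd_user(), looks roughly like this from a
consumer's point of view (example names are hypothetical):

#include <linux/module.h>
#include <linux/mtd/mtd.h>

static void example_mtd_add(struct mtd_info *mtd)
{
	/* called for every existing and newly added MTD device */
	pr_info("mtd%d (%s) appeared\n", mtd->index, mtd->name);
}

static void example_mtd_remove(struct mtd_info *mtd)
{
	pr_info("mtd%d (%s) is going away\n", mtd->index, mtd->name);
}

static struct mtd_notifier example_mtd_notifier = {
	.add    = example_mtd_add,
	.remove = example_mtd_remove,
};

static int __init example_init(void)
{
	/* existing MTD devices are replayed through .add at registration */
	register_mtd_user(&example_mtd_notifier);
	return 0;
}

static void __exit example_exit(void)
{
	unregister_mtd_user(&example_mtd_notifier);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");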

confused,

greg k-h
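
Whichever notification scheme ends up being used, the provider half goes
through the ordinary NVMEM API. A minimal sketch of that side, with
hypothetical names and a dummy buffer standing in for the actual
block-device read path (the RFC itself may do this differently):

#include <linux/device.h>
#include <linux/nvmem-provider.h>
#include <linux/string.h>

struct blk_nvmem {
	/* a real backend would keep a reference to the block device here */
	u8 data[512];
};

static int blk_nvmem_reg_read(void *priv, unsigned int offset,
			      void *val, size_t bytes)
{
	struct blk_nvmem *bnv = priv;

	/* a real backend would read these bytes from the block device */
	memcpy(val, bnv->data + offset, bytes);
	return 0;
}

static struct nvmem_device *blk_nvmem_register(struct device *dev,
					       struct blk_nvmem *bnv)
{
	struct nvmem_config config = {
		.dev       = dev,
		.name      = "blk-nvmem",
		.priv      = bnv,
		.reg_read  = blk_nvmem_reg_read,
		.size      = sizeof(bnv->data),
		.word_size = 1,
		.stride    = 1,
		.read_only = true,
	};

	return nvmem_register(&config);
}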


