Re: [PATCH 02/15] libnvdimm: infrastructure for btt devices

On Tue, Jun 23, 2015 at 3:19 AM, Christoph Hellwig <hch@xxxxxx> wrote:
> On Mon, Jun 22, 2015 at 12:02:54PM -0700, Dan Williams wrote:
>> I don't see the need to re-invent partitioning, which is the path this
>> requested rework is putting us on...
>>
>> However, when the need arises for smaller granularity BTT we can have
>> the partition fight then.  To be clear, I believe that need is already
>> here today, but I'm not in a position to push that agenda at this late
>> date.
>
>
> Instead of all this complaining and moaning, let's figure out what
> architecture you'd actually want.  The one I had in mind is:
>
> +------------------------------+
> |  block layer (& partitions)  |
> +---------------+--------------+--------------------+
> |  pmem driver  |  btt driver  |  other consumers   |
> +---------------+--------------+--------------------+
> |        pmem API through libnvdimm                 |
> +---------------------------------------------------+
>

I've got this mostly coded up.  The nice property is that BTTs now
become another flavor of the same namespace.
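
Roughly, the shape is a per-namespace claim that selects which
personality drives the device.  A minimal sketch with illustrative
names (nd_claim_class, btt_attach(), pmem_attach() are stand-ins, not
the final API):

#include <linux/device.h>
#include <linux/types.h>

/* stand-ins for the real probe routines */
struct nd_namespace;
int btt_attach(struct nd_namespace *ns);
int pmem_attach(struct nd_namespace *ns);

/* which personality has claimed the namespace */
enum nd_claim_class {
	ND_CLAIM_NONE,	/* raw pmem access, DAX capable */
	ND_CLAIM_BTT,	/* atomic sector semantics via BTT arenas */
};

struct nd_namespace {
	struct device dev;
	enum nd_claim_class claim;
	/* byte-granularity access provided by libnvdimm */
	int (*rw_bytes)(struct nd_namespace *ns, resource_size_t offset,
			void *buf, size_t n, int rw);
};

static int nd_namespace_attach(struct nd_namespace *ns)
{
	if (ns->claim == ND_CLAIM_BTT)
		return btt_attach(ns);	/* layer BTT over ns->rw_bytes */
	return pmem_attach(ns);		/* surface raw pmem directly */
}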

> If you really want btt to stack on top of pmem, it really
> needs to be moved entirely out of libnvdimm and become a
> generic block driver just using ->rw_bytes, e.g.:
>
>
> +------------------------------+
> |  btt driver                  |
> +------------------------------+
> |  block layer (& partitions)  |
> +------------------------------+--------------------+
> |  pmem driver                 | other consumers    |
> +------------------------------+--------------------+
> |        pmem API through libnvdimm                 |
> +---------------------------------------------------+
>
> Not the current mess where btt pretends to be a stacking block
> driver but still ties into libnvdimm.

That tie was only to enable autodetect so that we don't need to run a
BTT assembly step from an initramfs just to get an NVDIMM up and
running.  It was a convenience, not a requirement.
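
For what it's worth, the fully stacked arrangement you drew reduces to
a generic bio loop over a byte-granularity callback.  A rough sketch,
with a made-up nd_rw_bytes() signature rather than the real interface:

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/highmem.h>

/* made-up byte-granularity accessor exported by the namespace */
struct nd_namespace;
int nd_rw_bytes(struct nd_namespace *ns, resource_size_t offset,
		void *buf, size_t n, int rw);

/* bio handler for a block driver stacked purely on ->rw_bytes */
static void btt_make_request(struct request_queue *q, struct bio *bio)
{
	struct nd_namespace *ns = q->queuedata;
	struct bio_vec bvec;
	struct bvec_iter iter;
	int err = 0;

	bio_for_each_segment(bvec, bio, iter) {
		/* iter.bi_sector advances as the iterator walks segments */
		resource_size_t off = (resource_size_t)iter.bi_sector << 9;
		void *mem = kmap_atomic(bvec.bv_page);

		err = nd_rw_bytes(ns, off, mem + bvec.bv_offset,
				  bvec.bv_len, bio_data_dir(bio));
		kunmap_atomic(mem);
		if (err)
			break;
	}
	bio_endio(bio, err);
}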

> Add blk mode access to all the schemes, but it's really just
> another driver next to the pmem driver each time.  In fact, while
> looking over the code a bit more, I'm starting to wonder why
> we need the blk driver at all - just hook into the nfit
> do_io routines instead of the low-level API based on what
> libnvdimm provides, and don't offer DAX for it.  It mostly
> seems like duplicate code.

Mostly, yes.  It does handle dis-contiguous dimm-physical-address
ranges, but you're right, we might be able to unify the two in the
coming cycle.
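
That range handling is the main thing blk adds: resolving a linear
namespace offset against dis-contiguous DPA extents.  A sketch with
made-up structures:

#include <linux/types.h>

/* a blk-mode namespace backed by several dis-contiguous DPA extents */
struct nd_blk_range {
	resource_size_t dpa;	/* extent start in DPA space */
	resource_size_t len;
};

struct nd_blk_namespace {
	int nranges;
	struct nd_blk_range range[8];
};

/* resolve a linear namespace offset to a DPA; all-ones if out of range */
static resource_size_t nd_blk_to_dpa(struct nd_blk_namespace *nsblk,
				     resource_size_t off)
{
	int i;

	for (i = 0; i < nsblk->nranges; i++) {
		if (off < nsblk->range[i].len)
			return nsblk->range[i].dpa + off;
		off -= nsblk->range[i].len;
	}
	return (resource_size_t)-1;
}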