Re: [RFC PATCH 2/2] cxl/mem: Add CDAT table reading from DOE

On Mon, 15 Mar 2021 15:00:08 -0700
Dan Williams <dan.j.williams@xxxxxxxxx> wrote:

> On Wed, Mar 10, 2021 at 10:15 AM Jonathan Cameron
> <Jonathan.Cameron@xxxxxxxxxx> wrote:
> >
> > On Thu, 11 Mar 2021 02:03:06 +0800
> > Jonathan Cameron <Jonathan.Cameron@xxxxxxxxxx> wrote:
> >  
> > > This patch simply provides some debug printouts of the entries
> > > at probe time, plus a sysfs binary attribute to allow dumping of
> > > the whole table.
> > >
> > > Binary dumping is modelled on /sys/firmware/acpi/tables/
> > >
> > > The ability to dump this table will be very useful for emulating
> > > real devices once they become available, as the QEMU CXL type 3
> > > device emulation will be able to load this file in.
> > >
> > > Open questions:
> > > * No support here for table updates. Worth including these from the
> > >   start, or leave that complexity for later?
> > > * Worth logging the reported info for debug, or is the binary attribute
> > >   sufficient?  Larger open question of whether to expose this info to
> > >   userspace or not left for another day!
> > > * Where to put the CDAT file?  Is it worth a subdirectory?
> > > * What is the maximum size of the SSLBIS entry?  I haven't quite
> > >   managed to figure that out, and it is the record with the largest size.
> > >   We could support dynamic allocation of the record size, but it
> > >   would add complexity that seems unnecessary.
> > >   It would not be compliant with the specification for a type 3 memory
> > >   device to report this record anyway so I'm not that worried about this
> > >   for now.  It will become relevant once we have support for reading
> > >   CDAT from CXL switches.
> > > * cdat.h is formatted in a similar style to pci_regs.h, on the basis
> > >   that it may well be helpful to share this header with userspace tools.
> > > * Move the generic parts of this out to drivers/cxl/cdat.c, or leave
> > >   that until we have other CXL drivers wishing to use this?
> >
> > Naturally I remembered another open question within 10 seconds of sending :(
> >
> >   * Do we want to add any sort of header to the RAW dump of CDAT to aid
> >     tooling?  Whilst it looks a little like an ACPI table, it doesn't
> >     have a signature.
> >
> > My gut feeling is no, because the CDAT specification doesn't define one,
> > but I can see that it might be very convenient to have something that
> > identified the data once it was put in a file.
> 
> I'm not yet convinced raw dumping is worth it for the same reason that
> command payload logging was eliminated from the v5.12-rc1 submission.
> There's not much userspace can do with the information besides debug
> the kernel behavior. If the kernel assigns a NUMA node to target a
> given CXL memory range with NUMA APIs, then HMEM_REPORTING should
> enumerate the properties. In other words, don't expand the userspace
> ABI problem, funnel users to the canonical source for such data.

As someone who finds raw dumping of ACPI tables extremely helpful in
everyday use for debugging some of our 'interesting' hardware, I know I'm
going to end up carrying that element locally anyway.  I don't have a
particular problem doing so if we decide not to upstream it.

Much like the ACPI table dumping, it's not an interface you'd expect
userspace to ever use, and I fully agree that we should expose things
properly as you describe.
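
For context, the raw dump itself is tiny - essentially a read-only binary
attribute over the cached table buffer, along the same lines as what
drivers/acpi/sysfs.c does for ACPI tables.  Roughly the sketch below; the
cxl_memdev naming and the cdat_table/cdat_length fields are purely
illustrative, not necessarily what the patch ends up using:

/* Sketch only; assumes linux/sysfs.h, linux/fs.h and linux/device.h */
static ssize_t cdat_read(struct file *filp, struct kobject *kobj,
			 struct bin_attribute *bin_attr, char *buf,
			 loff_t offset, size_t count)
{
	struct device *dev = kobj_to_dev(kobj);
	struct cxl_memdev *cxlmd = to_cxl_memdev(dev); /* illustrative */

	/* Copy the requested window of the cached CDAT table */
	return memory_read_from_buffer(buf, count, &offset,
				       cxlmd->cdat_table, cxlmd->cdat_length);
}
/* Size 0 here; set at runtime once the table has been read over DOE */
static BIN_ATTR_RO(cdat, 0);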

Short term, my interest here is to get the DOE code upstream as step 1 of
moving to a full solution.  The printing and dumping are really just PoC
elements to prove out the interface.  Any issue with putting the prints
under _dbg()?
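
To be concrete, something like the below is all I have in mind; the
cdat_dsmas structure and its field names are placeholders for whatever
the parsing code ends up calling them:

/* Hypothetical parsed DSMAS record, for illustration only */
struct cdat_dsmas {
	u8 handle;
	u8 flags;
	u64 dpa_base;
	u64 dpa_length;
};

static void cdat_dsmas_dbg(struct device *dev, struct cdat_dsmas *dsmas)
{
	/* Silent unless dynamic debug or DEBUG enables it */
	dev_dbg(dev, "DSMAS: handle=%u flags=%#x dpa=[%#llx, %#llx)\n",
		dsmas->handle, dsmas->flags, dsmas->dpa_base,
		dsmas->dpa_base + dsmas->dpa_length);
}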

Jonathan