Re: RFC: Restricting userspace interfaces for CXL fabric management

On Fri, 5 Apr 2024 17:04:34 -0700
Dan Williams <dan.j.williams@xxxxxxxxx> wrote:

Hi Dan,

> Jonathan Cameron wrote:
> > Hi All,
> > 
> > This has come up in a number of discussions both on list and in private,
> > so I wanted to lay out a potential set of rules when deciding whether or not
> > to provide a user space interface for a particular feature of CXL Fabric
> > Management.  The intent is to drive discussion, not to simply tell people
> > a set of rules.  I've brought this to the public lists as it's a Linux kernel
> > policy discussion, not a standards one.
> > 
> > Whilst I'm writing the RFC, this is my attempt to summarize a possible
> > position rather than necessarily being my personal view.
> > 
> > It's a straw man - shoot at it!
> > 
> > Not everyone in this discussion is familiar with relevant kernel or CXL concepts
> > so I've provided more info than I normally would.  
> 
> Thanks for writing this up Jonathan!
> 
> [..]
> > 2) Unfiltered userspace use of mailbox for Fabric Management - BMC kernels
> > ==========================================================================
> > 
> > (This would just be a kernel option that we'd advise normal server
> > distributions not to turn on. Would be enabled by openBMC etc)
> > 
> > This is fine - there is some work to do, but the switch-cci PCI driver
> > will hopefully be ready for upstream merge soon. There is no filtering of
> > accesses. Think of this as similar to all the damage you can do via
> > MCTP from a BMC. Similarly it is likely that much of the complexity
> > of the actual commands will be left to user space tooling: 
> > https://gitlab.com/jic23/cxl-fmapi-tests has some test examples.
> > 
> > Whether Kconfig help text is strong enough to ensure this only gets
> > enabled for BMC targeted distros is an open question we can address
> > alongside an updated patch set.  
> 
> It is not clear to me that this material makes sense to house in
> drivers/ vs tools/ or even out-of-tree just for maintenance burden
> relief of keeping the universes separated. What does the Linux kernel
> project get out of carrying this in mainline alongside the inband code?

I'm not sure what you mean by in-band.  The aim here was to discuss
in-band drivers for the switch CCI etc. - we include them for the same
reason, from a kernel point of view, that we include embedded drivers.
I'll interpret in-band as host driven, and not-in-band as the FM-API stuff.
 
> I do think the mailbox refactoring to support non-CXL use cases is
> interesting, but only so far as refactoring is consumed for inband use
> cases like RAS API.

If I read this right, I disagree with the 'only so far' bit.

In all substantial ways we should support the BMC use case of the Linux
kernel at a similar level to how we support other forms of Linux distro.
It may not be our target market as developers for particular parts of our
companies, but we should not block those who want to support it.

We should support them in drivers/ - maybe with example userspace code
in tools.  Linux distros on BMCs is a big market, there are a number
of different distros using (and in some cases contributing to) the
upstream kernel. Not everyone is using openBMC so there is not one
common place where downstream patches could be carried.
From a personal point of view, I like that for the same reasons that
I like there being multiple Linux server focused distros. It's a sign
of a healthy ecosystem to have diverse options taking the mainline
kernel as their starting point.

BMCs are just another embedded market, and like other embedded markets
we want to encourage upstream first etc. 
openBMC has a policy on this:
https://github.com/openbmc/docs/blob/master/kernel-development.md
"The OpenBMC project maintains a kernel tree for use by the project.
The tree's general development policy is that code must be upstream
first." There are paths to bypass that for openBMC so it's a little
more relaxed than some enterprise distros (today, their policies used
to look very similar to this) but we should not be telling
them they need to carry support downstream.  If we are
going to tell them that, we need to be able to point at a major
sticking point for maintenance burden.  So far I don't see the
additional complexity as remotely close reaching that bar.

So I think we do want switch-cci support and for that matter the equivalent
for MHDs in the upstream kernel.

One place I think there is some wiggle room is the taint on use of raw
commands.  Leaving removal of that for BMC kernels as a patch they need
to carry downstream doesn't seem too burdensome. I'm sure they'll push
back if it is a problem for them!  So I think we can kick that question
into the future.
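
As a concrete (and hedged) illustration, the guard on the raw path can be
as small as the sketch below - the helper name and the exact taint flag
are assumptions for illustration, not quoted from the current driver:

/*
 * Minimal sketch, not the actual driver code: warn and taint the kernel
 * the first time the unfiltered raw mailbox path is used.
 */
static void cxl_mbox_taint_raw(struct device *dev)
{
	dev_WARN_ONCE(dev, true, "raw command path used\n");
	add_taint(TAINT_USER, LOCKDEP_STILL_OK);
}

A BMC distro that wants the taint gone then carries a one or two line
patch deleting the add_taint() call, which seems a tolerable downstream
burden.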

Addressing maintenance burden, there is a question of where we split
the stack.  Ignore MHDs for now (I won't go into why in this forum...)

The current proposal is as follows (simplified to ignore some sharing in
lookup code etc. that I can rip out if we think it might be a long term
problem):

     _____________          _____________________
    |             |        |                     |
    | Switch CCI  |        |  Type 3 Driver stack| 
    |_____________|        |_____________________|
           |___________________________|              Whatever GPU etc
                  _______|_______                   _______|______
                 |               |                 |              |
                 |  CXL MBOX     |                 | RAS API etc  |
                 |_______________|                 |______________|
                             |_____________________________|
                                           |
                                  _________|______
                                 |                |
                                 |   MMPT mbox    |
                                 |________________|

Switch CCI Driver: PCI driver doing everything beyond the CXL mbox specific bit.
Type 3 Stack: All the normal stack just with the CXL Mailbox specific stuff factored
              out. Note we can move different amounts of shared logic in here, but
              in essence it deals with the extra layer on top of the raw MMPT mbox.
MMPT Mbox: Mailbox as per the PCI spec.
RAS API:   Shared RAS API specific infrastructure used by other drivers.
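
To make the split concrete, here is a rough sketch of the sort of interface
the MMPT mbox layer could export to the layers above it (all names are
invented for illustration, not a proposed API):

/* Minimal per-instance state for one PCI MMPT mailbox. */
struct mmpt_mbox {
	void __iomem *regs;	/* MMPT register block */
	struct mutex lock;	/* serializes command submission */
};

/*
 * One command turnaround at the MMPT level.  The CXL MBOX layer would
 * build CXL command framing (opcodes, background commands etc.) on top
 * of this; RAS API infrastructure would do its own framing.
 */
int mmpt_mbox_send(struct mmpt_mbox *mbox, const void *in, size_t in_len,
		   void *out, size_t out_len);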

If we see a significant maintenance burden, maybe we duplicate the CXL specific
MBOX layer - I can see advantages in that as there is some stuff not relevant
to the Switch CCI.  There will be some duplication of logic however, such
as background command support (which is CXL only IIUC).  We can even use
a different IOCTL number so the two can diverge if needed in the long run.
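
To illustrate that divergence point: the memdev ioctl below is the existing
one from include/uapi/linux/cxl_mem.h, while the switch CCI define is a
hypothetical example of picking a separate number up front:

/* Existing type 3 memdev ioctl. */
#define CXL_MEM_SEND_COMMAND	_IOWR(0xCE, 2, struct cxl_send_command)

/*
 * Hypothetical: a distinct ioctl for the switch CCI character device,
 * free to diverge from the memdev one later without an ABI break.
 */
#define CXL_SWITCH_SEND_COMMAND	_IOWR(0xCE, 3, struct cxl_send_command)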

e.g. If it makes it easier to get upstream, we can merrily duplicate code
so that only the bit common with RAS API etc is shared (assuming they
actually end up with MMPT, not the CXL mailbox, which is what their current
publicly available spec talks about and which I assume is a pre-MMPT left over?)

     _____________          _____________________
    |             |        |                     |
    | Switch CCI  |        |  Type 3 Driver stack| 
    |_____________|        |_____________________|
           |                           |              Whatever GPU etc
    _______|_______             _______|_______        ______|_______
   |               |           |               |      |              |
   |  CXL MBOX     |           |  CXL MBOX     |      | RAS API etc  |
   |_______________|           |_______________|      |______________|
           |_____________________________|____________________|
                                         |
                                 ________|______
                                |               |
                                |   MMPT mbox   |
                                |_______________|


> > (On to the one that the "debate" is about)
> > 
> > 3) Unfiltered user space use of mailbox for Fabric Management - Distro kernels
> > =============================================================================
> > (General purpose Linux Server Distro (Redhat, Suse etc))
> > 
> > This is equivalent of RAW command support on CXL Type 3 memory devices.
> > You can enable those in a distro kernel build despite the scary config
> > help text, but if you use it the kernel is tainted. The result
> > of the taint is to add a flag to bug reports and print a big message to say
> > that you've used a feature that might result in you shooting yourself
> > in the foot.
> > 
> > The taint is there because software is not at first written to deal with
> > everything that can happen smoothly (e.g. surprise removal). It's hard
> > to survive some of these events, so that is never on the initial feature
> > list for any bus, so this flag is just to indicate we have entered a world
> > where almost all bets are off wrt stability.  We might not know what
> > a command does so we can't assess the impact (and no one trusts vendor
> > commands to report effects right in the Command Effects Log - which
> > in theory tells you if a command can result in problems).
> 
> That is a secondary reason that the taint is there. Yes, it helps
> upstream not waste their time on bug reports from proprietary use cases,
> but the effect of that is to make "raw" command mode unattractive for
> deploying solutions at scale. It clarifies that this interface is a
> debug tool that enterprise environments need not worry about.
> 
> The more salient reason for the taint, speaking only for myself as a
> Linux kernel community member not for $employer, is to encourage open
> collaboration. Take firmware-update for example that is a standard
> command with known side effects that is inaccessible via the ioctl()
> path. It is placed behind an ABI that is easier to maintain and reason
> about. Everyone has the firmware update tool if they have the 'cat'
> command. Distros appreciate the fact that they do not need to ship yet
> another vendor device-update tool, vendors get free tooling and end
> users also appreciate one flow for all devices.
> 
> As I alluded here [1], I am not against innovation outside of the
> specification, but it needs to be open, and it needs to plausibly become
> if not a de jure standard at least a de facto standard.
> 
> [1]: https://lore.kernel.org/all/CAPcyv4gDShAYih5iWabKg_eTHhuHm54vEAei8ZkcmHnPp3B0cw@xxxxxxxxxxxxxx/

Agree with all this.

> 
> > A concern was raised about GAE/FAST/LDST tables for CXL Fabrics
> > (a r3.1 feature) but, as I understand it, these are intended for a
> > host to configure and should not have side effects on other hosts?
> > My working assumption is that the kernel driver stack will handle
> > these (once we catch up with the current feature backlog!) Currently
> > we have no visibility of what the OS driver stack for a fabric will
> > actually look like - the spec is just the starting point for that.
> > (patches welcome ;)
> > 
> > The various CXL upstream developers and maintainers may have
> > differing views of course, but my current understanding is we want
> > to support 1 and 2, but are very resistant to 3!  
> 
> 1, yes, 2, need to see the patches, and agree on 3.

If we end up with the top architecture of the diagrams above, 2 will look
pretty similar to the last version of the switch-cci patches.  So raw
commands only + taint.  Factoring out MMPT is another layer that doesn't
make much difference in practice to this discussion. Good to have, but the
reuse here would be one layer above that.

Or we just go for the second proposed architecture, with zero impact on the
CXL specific code and just reuse of the MMPT layer.  I'd imagine people will
get grumpy about code duplication (and we'll spend years rejecting patch sets
that try to share the code) but there should be no maintenance burden as
a result.

> 
> > General Notes
> > =============
> > 
> > One side aspect of why we really don't like unfiltered userspace access to any
> > of these devices is that people start building non standard hacks in and we
> > lose the ecosystem advantages. Forcing a considered discussion + patches
> > to let a particular command be supported, drives standardization.  
> 
> Like I said above, I think this is not a side aspect. It is fundamental
> to the viability of Linux as a project. This project only works because
> organizations with competing goals realize they need some common
> infrastructure and that there is little to be gained by competing on the
> commons.
> 
> > https://lore.kernel.org/linux-cxl/CAPcyv4gDShAYih5iWabKg_eTHhuHm54vEAei8ZkcmHnPp3B0cw@xxxxxxxxxxxxxx/
> > provides some history on vendor specific extensions and why in general we
> > won't support them upstream.  
> 
> Oh, you linked my writeup... I will leave the commentary I added here in case
> restating it helps.
> 
> > To address another question raised in an earlier discussion:
> > Putting these Fabric Management interfaces behind guard rails of some type
> > (e.g. CONFIG_IM_A_BMC_AND_CAN_MAKE_A_MESS) does not encourage the risk
> > of non standard interfaces, because we will be even less likely to accept
> > those upstream!
> > 
> > If anyone needs more details on any aspect of this please ask.
> > There are a lot of things involved and I've only tried to give a fairly
> > minimal illustration to drive the discussion. I may well have missed
> > something crucial.  
> 
> You captured it well, and this is open source so I may have missed
> something crucial as well.
> 

Thanks for the detailed reply!

Jonathan






