Re: RDMA subsystem namespace related questions (was Re: Finding the namespace of a struct ib_device)


 



On 9/30/20 1:40 AM, Jason Gunthorpe wrote:
On Wed, Sep 30, 2020 at 12:57:48AM +0800, Ka-Cheong Poon wrote:
On 9/7/20 9:48 PM, Ka-Cheong Poon wrote:

This may require a number of changes to the way a client interacts with
the current RDMA framework.  For example, currently a client registers
once using one struct ib_client and gets device notifications for all
namespaces and devices.  Suppose there is rdma_[un]register_net_client();
it may require a client to use a different struct ib_client to
register for each net namespace.  And struct ib_client probably needs
a field to store the net namespace.  Probably all those client
interaction functions will need to be modified.  Since the clients xarray
is global, more clients may have performance implications, such as taking
longer to walk the whole clients xarray.
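To make the proposal concrete, here is a small userspace model of what the suggested rdma_[un]register_net_client() could look like.  This is only a sketch: rdma_register_net_client() is the hypothetical API being proposed, and struct net, struct ib_device and struct ib_client below are simplified stand-ins, not the kernel definitions.  The point is just that each registration carries a namespace, and add callbacks fire only for devices in that namespace.

```c
#include <assert.h>
#include <stddef.h>

struct net { int id; };                    /* stand-in for the kernel's struct net */
struct ib_device { struct net *net; };     /* stand-in; real devices track ns differently */

struct ib_client {
	struct net *net;                   /* namespace this client registered for */
	int adds;                          /* number of add callbacks received */
	struct ib_client *next;
};

static struct ib_client *clients;          /* global client list, as in the core */

/* Hypothetical namespace-aware registration (the proposed API). */
static void rdma_register_net_client(struct ib_client *c, struct net *net)
{
	c->net = net;
	c->next = clients;
	clients = c;
}

/* Device hot-plug: notify only clients registered for dev's namespace. */
static void device_added(struct ib_device *dev)
{
	struct ib_client *c;

	for (c = clients; c; c = c->next)
		if (c->net == dev->net)
			c->adds++;
}
```

A client interested in several namespaces would then register one struct ib_client per namespace, which is the extra bookkeeping the paragraph above refers to.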

There are probably many other subtle changes required.  It may turn out
not to be so straightforward.  Is the community willing to take such changes?
I can take a stab at it if the community really thinks that this is preferred.


Attached is a diff of a prototype for the above.  This exercise is
to see what needs to be done to have a more network namespace aware
interface for RDMA client registration.

An RDMA device is either in all namespaces or in a single
namespace. If a client has some interest in only some namespaces then
it should check the namespace during client registration and not
register if it isn't interested. No need to change anything in the
core code.


After the aforementioned check on a namespace, what can the client
do?  It still needs to use the existing ib_register_client() to
register with the RDMA subsystem.  And after registration, it will get
notifications for all add/remove upcalls on devices not related
to the namespace it is interested in.  The client could work around
this if there were a supported way to find out the namespace of a
device, hence the original proposal of having rdma_dev_to_netns().
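The workaround described above could be sketched as follows.  Again this is a userspace model: rdma_dev_to_netns() is the helper proposed in this thread (it does not exist in the kernel today), and the structs are simplified stand-ins.  With such a helper, a client's add callback could simply ignore devices outside the namespace it cares about.

```c
#include <assert.h>
#include <stddef.h>

struct net { int id; };                    /* stand-in for the kernel's struct net */
struct ib_device { struct net *net; };     /* stand-in */

/* Proposed helper: report which namespace a device lives in. */
static struct net *rdma_dev_to_netns(struct ib_device *dev)
{
	return dev->net;
}

/* Namespace the client is interested in (recorded at registration time). */
static struct net *client_net;
static int handled;

/* Client "add" callback: skip devices from other namespaces. */
static int client_add_one(struct ib_device *dev)
{
	if (rdma_dev_to_netns(dev) != client_net)
		return 0;          /* not ours, ignore the notification */
	handled++;
	return 0;
}
```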


Is the RDMA shared namespace mode the preferred mode to use as it is the
default mode?

Shared is the legacy mode; modern systems should switch to namespace
mode at early boot.


Thanks for the clarification.  I originally thought that the shared
mode was for supporting a large number of namespaces.  In the
exclusive mode, a device needs to be assigned to a namespace for
that namespace to use it.  If there are a large number of namespaces,
there won't be enough devices to assign to all of them (e.g. the
hardware I have access to only supports up to 24 VFs).  The shared
mode can be used in this case.  Could you please explain what needs
to be done to support a large number of namespaces in exclusive
mode?

BTW, if exclusive mode is the future, it may make sense to have
something like rdma_[un]register_net_client().


Is it expected that a client knows the running mode before
interacting with the RDMA subsystem?

Why would a client care?


Because it may want to behave differently.  For example, in shared
mode, it may want to create a shadow device structure to hold per
namespace info for a device.  But in exclusive mode, a device can
only be in one namespace, so there is no need for such a shadow
device structure.


Is a client not supposed to differentiate different namespaces?

None do today.


This is probably the case, as calling rdma_create_id() in the kernel
can prevent a namespace from being deleted.  There must be no client
doing that right now.  My code is using RDMA in a namespace, hence I'd
like to understand more about the RDMA subsystem's namespace support.
For example, what is the reason that the cma_wq is a global queue
shared by all namespaces instead of being per namespace?  Is it
expected that the workload will be low enough for all namespaces that
they will not interfere with each other?


A new connection comes in and the event handler is called with an
RDMA_CM_EVENT_CONNECT_REQUEST event.  There is no obvious namespace
info in the event.  It seems that the only way to find out the
namespace is to use the context of the struct rdma_cm_id.

The rdma_cm_id has only a single namespace, the ULP knows what it is
because it created it. A listening ID can't spawn new IDs in different
namespaces.


The problem is that the handler is not given the listener's
rdma_cm_id when it is called.  It is only given the new rdma_cm_id.
Do you mean that there is a way to find out the listener's rdma_cm_id
given the new rdma_cm_id?  But even if the listener's rdma_cm_id can
be found, what is the mechanism for finding out that listener's
namespace in the handler?  The client could compare that pointer
with every listener it creates.  Is there a better way?
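One possible answer, hinted at earlier in the thread, is that a connection-request id created from a listener inherits the listener's event handler and context; so if the ULP stashes per-namespace state in the context at rdma_create_id() time, the handler can recover it without any lookup.  The sketch below is a userspace model of that pattern; struct rdma_cm_id and the other types are stand-ins for the kernel ones.

```c
#include <assert.h>
#include <stddef.h>

struct net { int id; };            /* stand-in for the kernel's struct net */

struct listener_ctx {              /* per-namespace state the ULP keeps */
	struct net *net;
};

struct rdma_cm_id {                /* stand-in for the kernel struct */
	void *context;
};

static struct net *seen;

/* Event handler: the child id carries the listener's context. */
static int connect_handler(struct rdma_cm_id *id)
{
	struct listener_ctx *ctx = id->context;

	seen = ctx->net;           /* namespace known without comparing pointers */
	return 0;
}

/* Model of a connection request spawning a child id from a listener. */
static void connect_request(struct rdma_cm_id *listener)
{
	struct rdma_cm_id child = { .context = listener->context };

	connect_handler(&child);
}
```

This avoids scanning every listener the client created: the namespace travels with the context pointer.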


(*) Note that in __rdma_create_id(), it does a get_net(net) to take a
     reference on the namespace.  Suppose a kernel module calls rdma_create_id()
     in its namespace .init function to create an RDMA listener and calls
     rdma_destroy_id() in its namespace .exit function to destroy it.

Yes, namespaces remain until all objects touching them are deleted.

It seems like a ULP error to drive cm_id lifetime entirely from the
per-net stuff.


It is not an ULP error.  While there are many reasons to delete
a listener, it is not necessary for the listener to die unless the
namespace is going away.


This would be similar to creating a socket in the kernel.


Right, and a kernel socket does not prevent a namespace from being deleted.


     __rdma_create_id() takes a reference on the namespace, so when a sysadmin
     deletes the namespace (say `ip netns del ...`), it won't actually be
     deleted because of this reference.  But the module will not release the
     reference until its .exit function is called, which happens only when the
     namespace is deleted.  To resolve this circular dependency, in the diff
     (in __rdma_create_id()) I did something similar to the kern check in
     sk_alloc().

What you are running into is there is no kernel user of net
namespaces, all current ULPs exclusively use the init_net.

Without an example of what that is supposed to be like it is hard to
really have a discussion.  You should look at other in-kernel TCP
users to see if someone has figured out how to make this work for TCP.
It should be basically the same.


The kern check in sk_alloc() decides whether to hold a reference on
the namespace.  The diff follows the same mechanism.
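For readers unfamiliar with that pattern: sk_alloc() takes a namespace reference only for sockets created on behalf of userspace, so kernel-internal sockets do not pin the namespace.  The sketch below is a userspace model of applying the same idea to __rdma_create_id(), as the posted diff does.  The types and the plain refcnt counter are stand-ins for the kernel's struct net and netns refcounting.

```c
#include <assert.h>

struct net { int refcnt; };        /* stand-in; kernel uses a proper refcount */

struct id {
	struct net *net;
	int kern;                  /* created by a kernel ULP, not userspace */
};

static void get_net(struct net *net) { net->refcnt++; }
static void put_net(struct net *net) { net->refcnt--; }

/* Model of __rdma_create_id() with a kern flag, as sk_alloc() does. */
static void create_id(struct id *id, struct net *net, int kern)
{
	id->net = net;
	id->kern = kern;
	if (!kern)
		get_net(net);      /* only user-created ids pin the namespace */
}

static void destroy_id(struct id *id)
{
	if (!id->kern)
		put_net(id->net);
}
```

With this, a module's listener created in the namespace .init function no longer holds the reference that would otherwise prevent `ip netns del` from ever invoking the .exit function.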


--
K. Poon
ka-cheong.poon@xxxxxxxxxx




