Re: [PATCH] Delay the initialization of rnbd_server module to late_initcall level

On Tue, Jun 23, 2020 at 05:05:51PM +0530, Haris Iqbal wrote:
> On Tue, Jun 23, 2020 at 7:54 PM Jason Gunthorpe <jgg@xxxxxxxx> wrote:
> >
> > On Tue, Jun 23, 2020 at 07:15:03PM +0530, Haris Iqbal wrote:
> > > On Tue, Jun 23, 2020 at 5:47 PM Jason Gunthorpe <jgg@xxxxxxxx> wrote:
> > > >
> > > > On Tue, Jun 23, 2020 at 03:20:27PM +0530, Haris Iqbal wrote:
> > > > > Hi Jason and Leon,
> > > > >
> > > > > Did you get a chance to look into my previous email?
> > > >
> > > > Was there a question?
> > >
> > > Multiple actually :)
> > >
> > > >
> > > > Jason
> > >
> > > In response to your emails,
> > >
> > > > Somehow nvme-rdma works:
> > >
> > > I think that's because the callchain during the nvme_rdma_init_module
> > > initialization stops at "nvmf_register_transport()". Here only the
> > > "struct nvmf_transport_ops nvme_rdma_transport" is registered, which
> > > contains the function "nvme_rdma_create_ctrl()". I tested this in my
> > > local setup, and during kernel boot that's the extent of the
> > > callchain.
> > > The ".create_ctrl"; which now points to "nvme_rdma_create_ctrl()" is
> > > called later from "nvmf_dev_write()". I am not sure when this is
> > > called, probably when the "discover" happens from the client side or
> > > during the server config.
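> > >
> > > In code, that registration looks roughly like this (paraphrased and
> > > trimmed from my reading of drivers/nvme/host/rdma.c, so treat it as
> > > a sketch rather than the exact source):
> > >
> > >     static struct nvmf_transport_ops nvme_rdma_transport = {
> > >             .name           = "rdma",
> > >             .module         = THIS_MODULE,
> > >             .create_ctrl    = nvme_rdma_create_ctrl,
> > >     };
> > >
> > >     static int __init nvme_rdma_init_module(void)
> > >     {
> > >             /* only registers the ops table; no rdma_create_id() or
> > >              * rdma_bind_addr() happens on this path at boot */
> > >             return nvmf_register_transport(&nvme_rdma_transport);
> > >     }
> > >     module_init(nvme_rdma_init_module);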
> > >
> > > It seems that "rdma_bind_addr()" is called by the nvme rdma
> > > module, but only during the following events:
> > > 1) When a discover happens from the client side. The call trace for that looks like:
> > > [ 1098.409398] nvmf_dev_write
> > > [ 1098.409403] nvmf_create_ctrl
> > > [ 1098.414568] nvme_rdma_create_ctrl
> > > [ 1098.415009] nvme_rdma_setup_ctrl
> > > [ 1098.415010] nvme_rdma_configure_admin_queue
> > > [ 1098.415010] nvme_rdma_alloc_queue
> > > [ 1098.415032] rdma_resolve_addr
> > > [ 1098.415032] cma_bind_addr
> > > [ 1098.415033] rdma_bind_addr
> > >
> > > 2) When a connect happens from the client side. The call trace is
> > > the same as above, plus "nvme_rdma_alloc_queue()" is called n times,
> > > n being the number of IO queues being created.
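> > >
> > > In CM API terms, that client-side trace amounts to roughly the
> > > following (error handling omitted; "cma_event_handler" and
> > > "dst_addr" are placeholder names, not the driver's):
> > >
> > >     struct rdma_cm_id *id;
> > >
> > >     id = rdma_create_id(&init_net, cma_event_handler, NULL,
> > >                         RDMA_PS_TCP, IB_QPT_RC);
> > >     /* no explicit bind: rdma_resolve_addr() picks a source address
> > >      * itself, which is why cma_bind_addr()/rdma_bind_addr() show
> > >      * up in the trace; dst_addr is the discover target's address */
> > >     rdma_resolve_addr(id, NULL, (struct sockaddr *)&dst_addr, 2000);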
> > >
> > > On the server side, when an nvmf port is enabled, that also triggers
> > > a call to "rdma_bind_addr()", but that is not from the nvme rdma
> > > module; maybe from the nvme target rdma module? (not sure)
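> > >
> > > Presumably that target-side port enable is just the classic CM
> > > listening sequence, something like (again only a sketch, same
> > > placeholder names as above):
> > >
> > >     id = rdma_create_id(&init_net, cma_event_handler, NULL,
> > >                         RDMA_PS_TCP, IB_QPT_RC);
> > >     rdma_bind_addr(id, (struct sockaddr *)&listen_addr);
> > >     rdma_listen(id, 128);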
> > >
> > > Does this make sense or am I missing something here?
> >
> > It makes sense, delaying creating the CM IDs until user space starts
> > will solve this init time problem
> 
> Right, and the patch is trying to achieve the delay by changing the
> init level to "late_initcall()"
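> 
> The change itself is essentially just swapping the initcall macro,
> something like this (assuming the usual init function name from
> rnbd-srv.c):
> 
>     -module_init(rnbd_srv_init_module);
>     +late_initcall(rnbd_srv_init_module);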

It should not be done with initcall levels

> > Right rdma_create_id() must precede anything that has problems, and it
> > should not be done from module_init.
> 
> I understand this, but I am not sure why that is; as in, why should it
> not be done from module_init?

Because that is how our module ordering scheme works

> > It is not OK to create RDMA CM IDs outside
> > a client - CM IDs are supposed to be cleaned up when the client is
> > removed.
> >
> > Similarly they are supposed to be created from the client attachment.
> 
> This again is a little confusing to me, since what I've observed in
> nvmet is that when a server port is created, the "rdma_bind_addr()"
> function is called.
> And this goes well with the server/target and client/initiator model,
> where the server has to get ready and start listening before a client
> can initiate a connection.
> What am I missing here?

client means a struct ib_client
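
ie the thing a module registers through ib_register_client(). A minimal
sketch (hypothetical names; note the .add return type differs across
kernel versions, older ones use void):

    #include <rdma/ib_verbs.h>

    static int my_add_one(struct ib_device *dev)
    {
            /* per-device setup; CM IDs belong here or later */
            return 0;
    }

    static void my_remove_one(struct ib_device *dev, void *client_data)
    {
            /* CM IDs must be torn down here at the latest */
    }

    static struct ib_client my_client = {
            .name   = "my_client",
            .add    = my_add_one,
            .remove = my_remove_one,
    };

    static int __init my_init(void)
    {
            return ib_register_client(&my_client);
    }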

Jason



