On Sun, Jan 27, 2019 at 3:10 AM Leon Romanovsky <leon@xxxxxxxxxx> wrote:
> From: Daniel Jurgens <danielj@xxxxxxxxxxxx>
>
> When creating many MAD agents in a short period of time, receive packet
> processing can be delayed long enough to cause timeouts while new agents
> are being added to the atomic notifier chain with IRQs disabled.
> Notifier chain registration and unregistration are O(n) operations. With
> large numbers of MAD agents being created and destroyed simultaneously
> the CPUs spend too much time with interrupts disabled.
>
> After this change previously granted access for MAD agents will not be
> revoked if there is a relevant security policy change. This behavior is
> already the case for most things controlled by a security policy.
>
> Signed-off-by: Daniel Jurgens <danielj@xxxxxxxxxxxx>
> Signed-off-by: Leon Romanovsky <leonro@xxxxxxxxxxxx>
> ---
>  drivers/infiniband/core/security.c | 34 ++++--------------------------
>  include/rdma/ib_mad.h              |  3 ---
>  2 files changed, 4 insertions(+), 33 deletions(-)

Perhaps predictably, I'm not very excited about this change.  Have you
looked closer into the slowdown to see where the cycles are being
spent?  I'm wondering if the issue is that a large number of notifiers
are being registered with the same priority, causing the while loop in
notifier_chain_register() to take a significant amount of time (a rough
sketch of that loop is included at the end of this mail).

> diff --git a/drivers/infiniband/core/security.c b/drivers/infiniband/core/security.c
> index 1efadbccf394..73598acb518a 100644
> --- a/drivers/infiniband/core/security.c
> +++ b/drivers/infiniband/core/security.c
> @@ -676,21 +676,6 @@ static int ib_security_pkey_access(struct ib_device *dev,
>  	return security_ib_pkey_access(sec, subnet_prefix, pkey);
>  }
>
> -static int ib_mad_agent_security_change(struct notifier_block *nb,
> -					unsigned long event,
> -					void *data)
> -{
> -	struct ib_mad_agent *ag = container_of(nb, struct ib_mad_agent, lsm_nb);
> -
> -	if (event != LSM_POLICY_CHANGE)
> -		return NOTIFY_DONE;
> -
> -	ag->smp_allowed = !security_ib_endport_manage_subnet(
> -		ag->security, dev_name(&ag->device->dev), ag->port_num);
> -
> -	return NOTIFY_OK;
> -}
> -
>  int ib_mad_agent_security_setup(struct ib_mad_agent *agent,
>  				enum ib_qp_type qp_type)
>  {
> @@ -710,16 +695,9 @@ int ib_mad_agent_security_setup(struct ib_mad_agent *agent,
>  				       dev_name(&agent->device->dev),
>  				       agent->port_num);
>  	if (ret)
> -		return ret;
> +		security_ib_free_security(agent->security);
>
> -	agent->lsm_nb.notifier_call = ib_mad_agent_security_change;
> -	ret = register_lsm_notifier(&agent->lsm_nb);
> -	if (ret)
> -		return ret;
> -
> -	agent->smp_allowed = true;
> -	agent->lsm_nb_reg = true;
> -	return 0;
> +	return ret;
>  }
>
>  void ib_mad_agent_security_cleanup(struct ib_mad_agent *agent)
> @@ -728,8 +706,6 @@ void ib_mad_agent_security_cleanup(struct ib_mad_agent *agent)
>  		return;
>
>  	security_ib_free_security(agent->security);
> -	if (agent->lsm_nb_reg)
> -		unregister_lsm_notifier(&agent->lsm_nb);
>  }
>
>  int ib_mad_enforce_security(struct ib_mad_agent_private *map, u16 pkey_index)
> @@ -737,11 +713,9 @@ int ib_mad_enforce_security(struct ib_mad_agent_private *map, u16 pkey_index)
>  	if (!rdma_protocol_ib(map->agent.device, map->agent.port_num))
>  		return 0;
>
> -	if (map->agent.qp->qp_type == IB_QPT_SMI) {
> -		if (!map->agent.smp_allowed)
> -			return -EACCES;
> +	/* SMI agent enforcement is done during agent creation */
> +	if (map->agent.qp->qp_type == IB_QPT_SMI)
>  		return 0;
> -	}
>
>  	return ib_security_pkey_access(map->agent.device,
>  				       map->agent.port_num,
> diff --git a/include/rdma/ib_mad.h b/include/rdma/ib_mad.h
> index fdef558e3a2d..12543e09c3ed 100644
> --- a/include/rdma/ib_mad.h
> +++ b/include/rdma/ib_mad.h
> @@ -619,9 +619,6 @@ struct ib_mad_agent {
>  	u8			port_num;
>  	u8			rmpp_version;
>  	void			*security;
> -	bool			smp_allowed;
> -	bool			lsm_nb_reg;
> -	struct notifier_block	lsm_nb;
>  };
>
>  /**
> --
> 2.19.1

--
paul moore
www.paul-moore.com
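
P.S.  For reference, the registration walk I'm referring to is roughly
the following.  This is a simplified sketch of notifier_chain_register()
from kernel/notifier.c; exact details vary by kernel version, and for an
atomic chain the caller holds the chain spinlock with IRQs disabled:

	static int notifier_chain_register(struct notifier_block **nl,
			struct notifier_block *n)
	{
		/*
		 * Walk the singly linked chain until the first entry
		 * with a strictly lower priority.  Entries with the
		 * SAME priority are all skipped over, so N
		 * equal-priority registrations cost O(N^2) list steps
		 * in total, with interrupts off the whole time on an
		 * atomic chain.
		 */
		while ((*nl) != NULL) {
			if (n->priority > (*nl)->priority)
				break;
			nl = &((*nl)->next);
		}

		/* Splice 'n' in ahead of the first lower-priority entry. */
		n->next = *nl;
		rcu_assign_pointer(*nl, n);
		return 0;
	}

If the cycles are indeed going into that walk, profiling data showing
it would help the discussion.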