On 27/07/2020 21:56, Jason Gunthorpe wrote:
> On Wed, Jul 22, 2020 at 05:03:11PM +0300, Gal Pressman wrote:
>> Introduce a mechanism that performs a handshake between the userspace
>> provider and the kernel driver, which verifies that the user supports
>> all required features in order to operate correctly.
>>
>> The handshake verifies the needed functionality by comparing the
>> reported device caps and the provider caps. If the device reports a
>> non-zero capability, the appropriate comp mask is required from the
>> userspace provider in order to allocate the context.
>>
>> Reviewed-by: Shadi Ammouri <sammouri@xxxxxxxxxx>
>> Reviewed-by: Yossi Leybovich <sleybo@xxxxxxxxxx>
>> Signed-off-by: Gal Pressman <galpress@xxxxxxxxxx>
>>  drivers/infiniband/hw/efa/efa_verbs.c | 40 +++++++++++++++++++++++++++
>>  include/uapi/rdma/efa-abi.h           | 10 +++++++
>>  2 files changed, 50 insertions(+)
>>
>> diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c
>> index 26102ab333b2..fda175836fb6 100644
>> +++ b/drivers/infiniband/hw/efa/efa_verbs.c
>> @@ -1501,11 +1501,39 @@ static int efa_dealloc_uar(struct efa_dev *dev, u16 uarn)
>>  	return efa_com_dealloc_uar(&dev->edev, &params);
>>  }
>>
>> +#define EFA_CHECK_USER_COMP(_dev, _comp_mask, _attr, _mask, _attr_str) \
>> +	(_attr_str = (!(_dev)->dev_attr._attr || ((_comp_mask) & (_mask))) ? \
>> +		     NULL : #_attr)
>> +
>> +static int efa_user_comp_handshake(const struct ib_ucontext *ibucontext,
>> +				   const struct efa_ibv_alloc_ucontext_cmd *cmd)
>> +{
>> +	struct efa_dev *dev = to_edev(ibucontext->device);
>> +	char *attr_str;
>> +
>> +	if (EFA_CHECK_USER_COMP(dev, cmd->comp_mask, max_tx_batch,
>> +				EFA_ALLOC_UCONTEXT_CMD_COMP_TX_BATCH, attr_str))
>> +		goto err;
>> +
>> +	if (EFA_CHECK_USER_COMP(dev, cmd->comp_mask, min_sq_depth,
>> +				EFA_ALLOC_UCONTEXT_CMD_COMP_MIN_SQ_WR,
>> +				attr_str))
>> +		goto err;
>
> But this patch should be first, the kernel should never return a
> non-zero value unless these input bits are set

But that's exactly what this patch does; it can only fail when
max_tx_batch/min_sq_depth is turned on by the device.

Anyway, the order doesn't matter as long as the pciid patch is last.
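
To make the argument concrete, here is a minimal standalone sketch of how
the EFA_CHECK_USER_COMP condition behaves. Only the macro itself is taken
from the patch; the fake_dev structures and the FAKE_COMP_TX_BATCH bit are
made-up stand-ins for illustration:

	#include <stdio.h>

	/*
	 * Same check as in the patch: the attribute name is reported (and the
	 * handshake fails) only when the device advertises the capability
	 * (non-zero attr) AND the provider did not set the matching comp bit.
	 */
	#define EFA_CHECK_USER_COMP(_dev, _comp_mask, _attr, _mask, _attr_str) \
		(_attr_str = (!(_dev)->dev_attr._attr || ((_comp_mask) & (_mask))) ? \
			     NULL : #_attr)

	/* Hypothetical stand-ins for the driver structures */
	struct fake_dev_attr { int max_tx_batch; };
	struct fake_dev { struct fake_dev_attr dev_attr; };

	#define FAKE_COMP_TX_BATCH (1 << 0)

	int main(void)
	{
		struct fake_dev dev = { .dev_attr = { .max_tx_batch = 0 } };
		char *attr_str;

		/*
		 * Device does not report max_tx_batch -> the check never fails,
		 * regardless of the provider's comp_mask (old device, any provider).
		 */
		if (EFA_CHECK_USER_COMP(&dev, 0, max_tx_batch,
					FAKE_COMP_TX_BATCH, attr_str))
			printf("fail: provider does not support %s\n", attr_str);
		else
			printf("ok: device does not report the capability\n");

		/*
		 * Device reports max_tx_batch but the provider did not set the
		 * bit -> this is the only combination that fails.
		 */
		dev.dev_attr.max_tx_batch = 64;
		if (EFA_CHECK_USER_COMP(&dev, 0, max_tx_batch,
					FAKE_COMP_TX_BATCH, attr_str))
			printf("fail: provider does not support %s\n", attr_str);

		return 0;
	}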