On 1/17/22 10:58 PM, Jason Gunthorpe wrote:
On Mon, Jan 17, 2022 at 04:48:20PM +0800, Cheng Xu wrote:
Add ERDMA driver main header file, defining internally used data structures
and operations. The defined data structures include *cmdq*, which is used
as the communication channel between the ERDMA driver and the hardware.
Signed-off-by: Cheng Xu <chengyou@xxxxxxxxxxxxxxxxx>
drivers/infiniband/hw/erdma/erdma.h | 392 ++++++++++++++++++++++++++++
1 file changed, 392 insertions(+)
create mode 100644 drivers/infiniband/hw/erdma/erdma.h
diff --git a/drivers/infiniband/hw/erdma/erdma.h b/drivers/infiniband/hw/erdma/erdma.h
new file mode 100644
index 000000000000..ae9ec98e99d0
+++ b/drivers/infiniband/hw/erdma/erdma.h
@@ -0,0 +1,392 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+
+/* Authors: Cheng Xu <chengyou@xxxxxxxxxxxxxxxxx> */
+/* Kai Shen <kaishen@xxxxxxxxxxxxxxxxx> */
+/* Copyright (c) 2020-2022, Alibaba Group. */
+
+#ifndef __ERDMA_H__
+#define __ERDMA_H__
+
+#include <linux/bitfield.h>
+#include <linux/netdevice.h>
+#include <linux/xarray.h>
+#include <rdma/ib_verbs.h>
+
+#include "erdma_hw.h"
+
+#define DRV_MODULE_NAME "erdma"
+
+struct erdma_eq {
+ void *qbuf;
+ dma_addr_t qbuf_dma_addr;
+
+ u32 depth;
+ u64 __iomem *db_addr;
+
+ spinlock_t lock;
+
+ u16 ci;
+ u16 owner;
+
+ atomic64_t event_num;
+ atomic64_t notify_num;
+
+ void *db_info;
+};
+
+struct erdma_cmdq_sq {
+ void *qbuf;
+ dma_addr_t qbuf_dma_addr;
+
+ spinlock_t lock;
+ u64 __iomem *db_addr;
+
+ u16 ci;
+ u16 pi;
+
+ u16 depth;
+ u16 wqebb_cnt;
+
+ void *db_info;
Why are there so many void *'s in these structs?
qbuf/db_addr/db_info are resources shared between the ERDMA driver and the
device. 'void *dev' is only there for convenient access to the containing
structure and is indeed unnecessary; I will check these and remove the ones
that are not needed.
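For 'void *dev' in particular, one option is to drop the back-pointer and
recover the owning device with container_of(), roughly like this (a sketch
only; embedding cmdq in struct erdma_dev and the to_edev() helper are
illustrative, not the final layout):

/* container_of() comes from <linux/container_of.h> */
struct erdma_dev {
	struct ib_device ibdev;
	struct erdma_cmdq cmdq;	/* embedded, so no 'void *dev' needed */
};

static inline struct erdma_dev *to_edev(struct erdma_cmdq *cmdq)
{
	return container_of(cmdq, struct erdma_dev, cmdq);
}

That keeps the type information and removes one void * per structure.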
+struct erdma_cmdq_cq {
+ void *db_info;
+struct erdma_cmdq {
+ void *dev;
+struct erdma_irq_info {
+ void *data;
+struct erdma_eq_cb {
+ void *dev;
+struct erdma_dev {
+ void *dmadev;
+ void *drvdata;
+ /* reference to drvdata->cmdq */
+ struct erdma_cmdq *cmdq;
+
+ void (*release_handler)(void *drvdata);
Why??
After reading all of your review comments, I think this question, together
with the other questions about the PCI probe/remove and module_exit
functions, comes down to the ibdev creation mechanism in this version of
the driver. I realize it is not proper, and judging from your responses I
think you have already seen the reason.
In the v1 driver, we created the IB device in the PCI probe function and
registered it from the netdev notifiers. In the v2 driver, we introduced
the separate device structures (struct erdma_pci_drvdata / struct
erdma_dev) to support dynamic IB device creation, because we mistakenly
thought the user should control the lifecycle of the IB device through the
'rdma link' command.
Back to 'release_handler': it is used to clean up the erdma_pci_drvdata
structure after the IB device is deallocated. We will rework the whole
mechanism.
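Concretely, we are considering moving to the conventional flow below (a
rough sketch with error handling trimmed; the erdma_probe()/erdma_remove()
names and the embedded ibdev layout are assumptions about the next
revision):

static int erdma_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
	struct erdma_dev *dev;
	int ret;

	/* The IB core allocates the container; ibdev is embedded in it. */
	dev = ib_alloc_device(erdma_dev, ibdev);
	if (!dev)
		return -ENOMEM;

	pci_set_drvdata(pdev, dev);

	/* Register unconditionally from probe, no netdev notifier dance. */
	ret = ib_register_device(&dev->ibdev, "erdma%d", &pdev->dev);
	if (ret)
		ib_dealloc_device(&dev->ibdev);
	return ret;
}

static void erdma_remove(struct pci_dev *pdev)
{
	struct erdma_dev *dev = pci_get_drvdata(pdev);

	ib_unregister_device(&dev->ibdev);
	ib_dealloc_device(&dev->ibdev);
}

With that, the IB core owns the ibdev lifetime and the 'release_handler'
indirection goes away.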
+ int cc_method;
+ int disable_dwqe;
+ int dwqe_pages;
+ int dwqe_entries;
Use proper types, bool, unsigned int, etc.
Will fix.
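For reference, something like the following (the exact widths are guesses
until we re-audit the users):

	u8 cc_method;		/* congestion control algorithm index */
	bool disable_dwqe;
	u32 dwqe_pages;
	u32 dwqe_entries;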
+#define ERDMA_CMDQ_BUILD_REQ_HDR(hdr, mod, op)\
+do { \
+ *(u64 *)(hdr) = FIELD_PREP(ERDMA_CMD_HDR_SUB_MOD_MASK, mod);\
+ *(u64 *)(hdr) |= FIELD_PREP(ERDMA_CMD_HDR_OPCODE_MASK, op);\
+} while (0)
Would prefer an inline wrapped in a macro
Will fix.
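Something along these lines, I assume (a sketch; FIELD_PREP() comes from
<linux/bitfield.h>, which this header already includes):

static inline void erdma_cmdq_build_reqhdr(u64 *hdr, u32 mod, u32 op)
{
	*hdr = FIELD_PREP(ERDMA_CMD_HDR_SUB_MOD_MASK, mod) |
	       FIELD_PREP(ERDMA_CMD_HDR_OPCODE_MASK, op);
}

#define ERDMA_CMDQ_BUILD_REQ_HDR(hdr, mod, op) \
	erdma_cmdq_build_reqhdr((u64 *)(hdr), mod, op)

so the header construction is type-checked in one place.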
+int erdma_post_cmd_wait(struct erdma_cmdq *cmdq, u64 *req, u32 req_size, u64 *resp0, u64 *resp1);
+void erdma_cmdq_completion_handler(struct erdma_cmdq *cmdq);
+
+int erdma_ceqs_init(struct erdma_pci_drvdata *drvdata);
+void erdma_ceqs_uninit(struct erdma_pci_drvdata *drvdata);
+
+int erdma_aeq_init(struct erdma_pci_drvdata *drvdata);
+void erdma_aeq_destroy(struct erdma_pci_drvdata *drvdata);
+
+void erdma_aeq_event_handler(struct erdma_pci_drvdata *drvdata);
+void erdma_ceq_completion_handler(struct erdma_eq_cb *ceq_cb);
Don't use the word 'drvdata' in a driver if you can avoid it. It
should only be used to describe the pointer stored in the struct
device, and ib devices don't work that way.
Jason
OK, I get it, and I will remove them in the next version.
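For example, once struct erdma_pci_drvdata is folded into struct
erdma_dev, the prototypes above would become (assumed naming for the next
revision):

int erdma_ceqs_init(struct erdma_dev *dev);
void erdma_ceqs_uninit(struct erdma_dev *dev);
int erdma_aeq_init(struct erdma_dev *dev);
void erdma_aeq_destroy(struct erdma_dev *dev);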
Thanks,
Cheng Xu