On 2/8/2022 12:19 AM, Ilpo Järvinen wrote:
On Thu, 13 Jan 2022, Ricardo Martinez wrote:
From: Haijun Liu <haijun.liu@xxxxxxxxxxxx>
Data Path Modem AP Interface (DPMAIF) HIF layer provides methods
for initialization, ISR, control and event handling of TX/RX flows.
...
+	spin_lock_irqsave(&txq->tx_lock, flags);
+	cur_idx = txq->drb_wr_idx;
+	drb_wr_idx_backup = cur_idx;
+
+	txq->drb_wr_idx += send_cnt;
+	if (txq->drb_wr_idx >= txq->drb_size_cnt)
+		txq->drb_wr_idx -= txq->drb_size_cnt;
+
+	t7xx_setup_msg_drb(dpmaif_ctrl, txq->index, cur_idx, skb->len, 0, skb->cb[TX_CB_NETIF_IDX]);
+	t7xx_record_drb_skb(dpmaif_ctrl, txq->index, cur_idx, skb, 1, 0, 0, 0, 0);
+	spin_unlock_irqrestore(&txq->tx_lock, flags);
+
+	cur_idx = t7xx_ring_buf_get_next_wr_idx(txq->drb_size_cnt, cur_idx);
+
+	for (wr_cnt = 0; wr_cnt < payload_cnt; wr_cnt++) {
+		if (!wr_cnt) {
+			data_len = skb_headlen(skb);
+			data_addr = skb->data;
+			is_frag = false;
+		} else {
+			skb_frag_t *frag = info->frags + wr_cnt - 1;
+
+			data_len = skb_frag_size(frag);
+			data_addr = skb_frag_address(frag);
+			is_frag = true;
+		}
+
+		if (wr_cnt == payload_cnt - 1)
+			is_last_one = true;
+
+		/* TX mapping */
+		bus_addr = dma_map_single(dpmaif_ctrl->dev, data_addr, data_len, DMA_TO_DEVICE);
+		if (dma_mapping_error(dpmaif_ctrl->dev, bus_addr)) {
+			dev_err(dpmaif_ctrl->dev, "DMA mapping fail\n");
+			atomic_set(&txq->tx_processing, 0);
+
+			spin_lock_irqsave(&txq->tx_lock, flags);
+			txq->drb_wr_idx = drb_wr_idx_backup;
+			spin_unlock_irqrestore(&txq->tx_lock, flags);
Hmm, can txq's drb_wr_idx get updated (or cleared) by something else
in between these critical sections?
drb_wr_idx cannot be modified in between, but it can be used to calculate
the number of DRBs available, which shouldn't be a problem.
The function reserves the DRBs at the beginning; in the rare case of an
error it will release them.
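For illustration, the number of free DRBs can be derived from the two
indices alone, so a concurrent reader only ever computes a conservative
count (the helper below is just for the example, not the driver's API):

static unsigned int drb_available(unsigned int size_cnt,
				  unsigned int rd_idx, unsigned int wr_idx)
{
	/* Keep one slot empty so a full ring is distinguishable from
	 * an empty one.
	 */
	if (wr_idx >= rd_idx)
		return size_cnt - (wr_idx - rd_idx) - 1;

	return rd_idx - wr_idx - 1;
}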
...
+	txq_id = t7xx_select_tx_queue(dpmaif_ctrl);
+	if (txq_id >= 0) {
t7xx_select_tx_queue used to do a que_started check (in v2) but it
doesn't anymore, so this if is always true these days. I'm left to
wonder, though, if it was OK to drop that que_started check?
The que_started check wasn't supposed to be dropped, I'll add it back.
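Roughly like this (a sketch only, the exact shape may differ in the
next version):

static int t7xx_select_tx_queue(struct dpmaif_ctrl *dpmaif_ctrl)
{
	struct dpmaif_tx_queue *txq;

	txq = &dpmaif_ctrl->txq[TXQ_TYPE_DEFAULT];
	if (!txq->que_started)
		return -EBUSY;

	return txq->index;
}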
...
+/* SKB control buffer indexed values */
+#define TX_CB_NETIF_IDX 0
+#define TX_CB_QTYPE 1
+#define TX_CB_DRB_CNT 2
The normal way of storing a struct in the skb->cb area is:
struct t7xx_skb_cb {
	u8 netif_idx;
	u8 qtype;
	u8 drb_cnt;
};

#define T7XX_SKB_CB(__skb)	((struct t7xx_skb_cb *)&((__skb)->cb[0]))
However, there's only a single txqt/qtype (TXQ_TYPE_DEFAULT) in the
patchset? And it seems to me that drb_cnt is a value that could always
be derived from the skb using t7xx_get_drb_cnt_per_skb() rather than
stored?
The next iteration will contain t7xx_tx_skb_cb and t7xx_rx_skb_cb
structures.
Also, q_number is going to be used instead of qtype.
Only one queue is used, but I think we can keep this code generic as it
is straightforward (not like the drb_lack case). Any thoughts?
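Something along these lines for the TX side (a rough sketch, the final
fields may differ):

struct t7xx_skb_cb {
	u8 netif_idx;
	u8 txq_number;
	u8 drb_cnt;
};

#define T7XX_SKB_CB(__skb)	((struct t7xx_skb_cb *)(__skb)->cb)

A BUILD_BUG_ON(sizeof(struct t7xx_skb_cb) > sizeof_field(struct sk_buff, cb))
can guard against the struct ever outgrowing the 48-byte cb area.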
+#define DRB_PD_DATA_LEN ((u32)GENMASK(31, 16))
Drop the cast?
The cast was added to avoid a compiler warning about truncated bits.
I'll move it to the place where it is required:
drb->header &= cpu_to_le32(~(u32)DRB_PD_DATA_LEN);
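For context, GENMASK() expands to an unsigned long, so on 64-bit builds
~DRB_PD_DATA_LEN also sets bits 63:32 and cpu_to_le32() silently
truncates them, which is what the compiler warns about. With the cast at
the point of use, the define stays clean and can still feed FIELD_PREP()
(illustrative snippet, the surrounding code is assumed):

#define DRB_PD_DATA_LEN		GENMASK(31, 16)

	/* Clear the old length field, then set the new one. */
	drb->header &= cpu_to_le32(~(u32)DRB_PD_DATA_LEN);
	drb->header |= cpu_to_le32(FIELD_PREP(DRB_PD_DATA_LEN, data_len));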
...