One step at a time, let's look at the TX path:

On 17.08.2022 16:35:29, Dario Binacchi wrote:
> +static netdev_tx_t bxcan_start_xmit(struct sk_buff *skb,
> +				     struct net_device *ndev)
> +{
> +	struct bxcan_priv *priv = netdev_priv(ndev);
> +	struct can_frame *cf = (struct can_frame *)skb->data;
> +	struct bxcan_regs *regs = priv->regs;
> +	struct bxcan_mb *mb_regs;

__iomem?

> +	unsigned int mb_id;
> +	u32 id, tsr;
> +	int i, j;
> +
> +	if (can_dropped_invalid_skb(ndev, skb))
> +		return NETDEV_TX_OK;
> +
> +	tsr = readl(&regs->tsr);
> +	mb_id = ffs((tsr & BXCAN_TSR_TME) >> BXCAN_TSR_TME_SHIFT);

We want to send the CAN frames in the exact order they are pushed into
the driver, so don't pick the first free mailbox you find.

How are priorities for the TX mailboxes handled? Is the mailbox with
the lowest number sent first? Is there a priority field in the mailbox?

If the mailbox with the lowest number is transmitted first, it's best
to have a tx_head and a tx_tail counter, e.g.:

struct bxcan_priv {
	...
	unsigned int tx_head;
	unsigned int tx_tail;
	...
};

Both start at 0. The xmit function pushes the CAN frame into the
"priv->tx_head % 3" mailbox. Before triggering the transmission in
hardware, tx_head is incremented. In your TX complete ISR, look at
"priv->tx_tail % 3" for completion, increment tx_tail, and loop.

> +	if (mb_id == 0)
> +		return NETDEV_TX_BUSY;
> +
> +	mb_id -= 1;
> +	mb_regs = &regs->tx_mb[mb_id];
> +
> +	if (cf->can_id & CAN_EFF_FLAG)
> +		id = BXCAN_TIxR_EXID(cf->can_id & CAN_EFF_MASK) |
> +			BXCAN_TIxR_IDE;
> +	else
> +		id = BXCAN_TIxR_STID(cf->can_id & CAN_SFF_MASK);
> +
> +	if (cf->can_id & CAN_RTR_FLAG)
> +		id |= BXCAN_TIxR_RTR;
> +
> +	bxcan_rmw(&mb_regs->dlc, BXCAN_TDTxR_DLC_MASK,
> +		  BXCAN_TDTxR_DLC(cf->len));
> +	priv->tx_dlc[mb_id] = cf->len;

Please use can_put_echo_skb() for this.
> +
> +	for (i = 0, j = 0; i < cf->len; i += 4, j++)
> +		writel(*(u32 *)(cf->data + i), &mb_regs->data[j]);
> +
> +	/* Start transmission */
> +	writel(id | BXCAN_TIxR_TXRQ, &mb_regs->id);
> +
> +	/* Stop the queue if we've filled all mailbox entries */
> +	if (!(readl(&regs->tsr) & BXCAN_TSR_TME))
> +		netif_stop_queue(ndev);

This is racy. You have to stop the queue before triggering the
transmission. Have a look at the mcp251xfd driver:

| https://elixir.bootlin.com/linux/latest/source/drivers/net/can/spi/mcp251xfd/mcp251xfd-tx.c#L187

The check for NETDEV_TX_BUSY is a bit more complicated, too:

| https://elixir.bootlin.com/linux/latest/source/drivers/net/can/spi/mcp251xfd/mcp251xfd-tx.c#L178

The mcp251xfd has a proper hardware FIFO ring buffer for TX, the bxcan
probably doesn't, so the get_tx_free() check is a bit different. Look
at c_can_get_tx_free() in:

| https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=28e86e9ab522e65b08545e5008d0f1ac5b19dad1

That patch is a good example of the relevant changes.

> +
> +	return NETDEV_TX_OK;
> +}

[...]

> +static irqreturn_t bxcan_tx_isr(int irq, void *dev_id)
> +{
> +	struct net_device *ndev = dev_id;
> +	struct bxcan_priv *priv = netdev_priv(ndev);
> +	struct bxcan_regs __iomem *regs = priv->regs;
> +	struct net_device_stats *stats = &ndev->stats;
> +	u32 tsr, rqcp_bit = BXCAN_TSR_RQCP0;
> +	int i;
> +
> +	tsr = readl(&regs->tsr);
> +	for (i = 0; i < BXCAN_TX_MB_NUM; rqcp_bit <<= 8, i++) {

This might break the completion order of the TX CAN frames.

> +		if (!(tsr & rqcp_bit))
> +			continue;
> +
> +		stats->tx_packets++;
> +		stats->tx_bytes += priv->tx_dlc[i];

Use can_get_echo_skb() here.

> +	}
> +
> +	writel(tsr, &regs->tsr);
> +
> +	if (netif_queue_stopped(ndev))
> +		netif_wake_queue(ndev);

With tx_head and tx_tail this should look like this:

| https://elixir.bootlin.com/linux/v5.19/source/drivers/net/can/spi/mcp251xfd/mcp251xfd-tef.c#L251

> +
> +	return IRQ_HANDLED;
> +}

Marc

-- 
Pengutronix e.K.
| Marc Kleine-Budde           |
Embedded Linux                   | https://www.pengutronix.de  |
Vertretung West/Dortmund         | Phone: +49-231-2826-924     |
Amtsgericht Hildesheim, HRA 2686 | Fax:   +49-5121-206917-5555 |