Hi Ard,

On Tue, 18 Jan 2022 at 12:22, Ard Biesheuvel <ardb@xxxxxxxxxx> wrote:
>
> Both versions of the CPSW driver declare a CPSW_HEADROOM_NA macro that
> takes NET_IP_ALIGN into account, but fail to use it appropriately when
> storing incoming packets in memory. This results in the IPv4 source and
> destination addresses appearing misaligned in memory, which causes
> alignment faults that need to be fixed up in software.
>
> So let's switch from CPSW_HEADROOM to CPSW_HEADROOM_NA where needed.
> This gets rid of any alignment faults on the RX path on a Beaglebone
> White.
>
> Cc: Grygorii Strashko <grygorii.strashko@xxxxxx>
> Cc: Ilias Apalodimas <ilias.apalodimas@xxxxxxxxxx>
> Signed-off-by: Ard Biesheuvel <ardb@xxxxxxxxxx>
> ---
>  drivers/net/ethernet/ti/cpsw.c      | 6 +++---
>  drivers/net/ethernet/ti/cpsw_new.c  | 6 +++---
>  drivers/net/ethernet/ti/cpsw_priv.c | 2 +-
>  3 files changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
> index 33142d505fc8..03575c017500 100644
> --- a/drivers/net/ethernet/ti/cpsw.c
> +++ b/drivers/net/ethernet/ti/cpsw.c
> @@ -349,7 +349,7 @@ static void cpsw_rx_handler(void *token, int len, int status)
>  	struct cpsw_common *cpsw = ndev_to_cpsw(xmeta->ndev);
>  	int pkt_size = cpsw->rx_packet_max;
>  	int ret = 0, port, ch = xmeta->ch;
> -	int headroom = CPSW_HEADROOM;
> +	int headroom = CPSW_HEADROOM_NA;
>  	struct net_device *ndev = xmeta->ndev;
>  	struct cpsw_priv *priv;
>  	struct page_pool *pool;
> @@ -392,7 +392,7 @@ static void cpsw_rx_handler(void *token, int len, int status)
>  	}
>
>  	if (priv->xdp_prog) {
> -		int headroom = CPSW_HEADROOM, size = len;
> +		int size = len;
>
>  		xdp_init_buff(&xdp, PAGE_SIZE, &priv->xdp_rxq[ch]);
>  		if (status & CPDMA_RX_VLAN_ENCAP) {
> @@ -442,7 +442,7 @@ static void cpsw_rx_handler(void *token, int len, int status)
>  	xmeta->ndev = ndev;
>  	xmeta->ch = ch;
>
> -	dma = page_pool_get_dma_addr(new_page) + CPSW_HEADROOM;
> +	dma = page_pool_get_dma_addr(new_page) + CPSW_HEADROOM_NA;
>  	ret = cpdma_chan_submit_mapped(cpsw->rxv[ch].ch, new_page, dma,
>  				       pkt_size, 0);
>  	if (ret < 0) {
> diff --git a/drivers/net/ethernet/ti/cpsw_new.c b/drivers/net/ethernet/ti/cpsw_new.c
> index 279e261e4720..bd4b1528cf99 100644
> --- a/drivers/net/ethernet/ti/cpsw_new.c
> +++ b/drivers/net/ethernet/ti/cpsw_new.c
> @@ -283,7 +283,7 @@ static void cpsw_rx_handler(void *token, int len, int status)
>  {
>  	struct page *new_page, *page = token;
>  	void *pa = page_address(page);
> -	int headroom = CPSW_HEADROOM;
> +	int headroom = CPSW_HEADROOM_NA;
>  	struct cpsw_meta_xdp *xmeta;
>  	struct cpsw_common *cpsw;
>  	struct net_device *ndev;
> @@ -336,7 +336,7 @@ static void cpsw_rx_handler(void *token, int len, int status)
>  	}
>
>  	if (priv->xdp_prog) {
> -		int headroom = CPSW_HEADROOM, size = len;
> +		int size = len;
>
>  		xdp_init_buff(&xdp, PAGE_SIZE, &priv->xdp_rxq[ch]);
>  		if (status & CPDMA_RX_VLAN_ENCAP) {
> @@ -386,7 +386,7 @@ static void cpsw_rx_handler(void *token, int len, int status)
>  	xmeta->ndev = ndev;
>  	xmeta->ch = ch;
>
> -	dma = page_pool_get_dma_addr(new_page) + CPSW_HEADROOM;
> +	dma = page_pool_get_dma_addr(new_page) + CPSW_HEADROOM_NA;
>  	ret = cpdma_chan_submit_mapped(cpsw->rxv[ch].ch, new_page, dma,
>  				       pkt_size, 0);
>  	if (ret < 0) {
> diff --git a/drivers/net/ethernet/ti/cpsw_priv.c b/drivers/net/ethernet/ti/cpsw_priv.c
> index 3537502e5e8b..ba220593e6db 100644
> --- a/drivers/net/ethernet/ti/cpsw_priv.c
> +++ b/drivers/net/ethernet/ti/cpsw_priv.c
> @@ -1122,7 +1122,7 @@ int cpsw_fill_rx_channels(struct cpsw_priv *priv)
>  			xmeta->ndev = priv->ndev;
>  			xmeta->ch = ch;
>
> -			dma = page_pool_get_dma_addr(page) + CPSW_HEADROOM;
> +			dma = page_pool_get_dma_addr(page) + CPSW_HEADROOM_NA;
>  			ret = cpdma_chan_idle_submit_mapped(cpsw->rxv[ch].ch,
>  							    page, dma,
>  							    cpsw->rx_packet_max,
> --
> 2.30.2
>

This looks good to me.

Grygorii, I assume cpdma is ok with potential unaligned accesses?

Thanks
/Ilias
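For reference, below is a minimal, hypothetical userspace sketch of the alignment arithmetic the patch relies on; it is not part of the patch. The constants mirror common kernel defaults (ETH_HLEN = 14, NET_IP_ALIGN = 2) and are assumptions for illustration, not quoted from cpsw_priv.h.

/*
 * Sketch (assumption, not kernel code): show where the IPv4 source and
 * destination addresses land when a frame is stored at a 4-byte aligned
 * buffer, with and without NET_IP_ALIGN bytes of extra headroom.
 */
#include <stdio.h>

#define ETH_HLEN      14U  /* Ethernet header: dst(6) + src(6) + type(2) */
#define NET_IP_ALIGN   2U  /* default padding on most architectures      */
#define IP_SADDR_OFF  12U  /* offset of saddr within the IPv4 header     */
#define IP_DADDR_OFF  16U  /* offset of daddr within the IPv4 header     */

static void show(const char *label, unsigned int pad)
{
	unsigned int saddr = pad + ETH_HLEN + IP_SADDR_OFF;
	unsigned int daddr = pad + ETH_HLEN + IP_DADDR_OFF;

	printf("%s: saddr at %u (mod 4 = %u), daddr at %u (mod 4 = %u)\n",
	       label, saddr, saddr % 4, daddr, daddr % 4);
}

int main(void)
{
	/* Frame starts at a 4-byte aligned address in both cases. */
	show("without NET_IP_ALIGN", 0);
	show("with NET_IP_ALIGN   ", NET_IP_ALIGN);
	return 0;
}

With no padding the 32-bit address fields fall on offsets 26 and 30 (2 mod 4), which is what triggers the software alignment fixups; with the extra 2 bytes of headroom they fall on 28 and 32 and are naturally aligned.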