On weakly ordered systems writes to the RX or TX descriptors may be
reordered with the write to the DMA control register that enables DMA.
If this happens then the device may see descriptors in an intermediate
& invalid state, leading to incorrect behaviour. Add barriers to ensure
that DMA is enabled only after all writes to the descriptors.

Signed-off-by: Paul Burton <paul.burton@xxxxxxxx>
Cc: David S. Miller <davem@xxxxxxxxxxxxx>
Cc: linux-mips@xxxxxxxxxxxxxx
Cc: netdev@xxxxxxxxxxxxxxx

---

Changes in v5:
- New patch.

Changes in v4: None
Changes in v3: None
Changes in v2: None

 drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
index 4354842b9b7e..8e3ad7dcef0b 100644
--- a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
+++ b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
@@ -1260,6 +1260,9 @@ static void pch_gbe_tx_queue(struct pch_gbe_adapter *adapter,
 	tx_desc->tx_frame_ctrl = (frame_ctrl);
 	tx_desc->gbec_status = (DSC_INIT16);
 
+	/* Ensure writes to descriptors complete before DMA begins */
+	mmiowb();
+
 	if (unlikely(++ring_num == tx_ring->count))
 		ring_num = 0;
 
@@ -1961,6 +1964,9 @@ int pch_gbe_up(struct pch_gbe_adapter *adapter)
 	pch_gbe_alloc_rx_buffers(adapter, rx_ring, rx_ring->count);
 	adapter->tx_queue_len = netdev->tx_queue_len;
 
+	/* Ensure writes to descriptors complete before DMA begins */
+	mmiowb();
+
 	pch_gbe_enable_dma_tx(&adapter->hw);
 	pch_gbe_enable_dma_rx(&adapter->hw);
 	pch_gbe_enable_mac_rx(&adapter->hw);
-- 
2.16.1
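
[Editorial illustration, not part of the patch above: the sketch below shows the general "publish descriptors, then ring the doorbell" ordering the commit message describes. It is a minimal userspace analogue using a C11 release fence in place of the kernel's barrier primitives; the struct fields, register, and function names (fake_desc, fake_dma_ctrl_reg, start_dma) are hypothetical and are not taken from the pch_gbe driver.]

/*
 * Minimal sketch of the ordering requirement described in the commit
 * message. A C11 atomic_thread_fence(memory_order_release) stands in
 * for the kernel barrier; all names here are invented for illustration.
 */
#include <stdatomic.h>
#include <stdint.h>

struct fake_desc {
	uint64_t buf_addr;	/* DMA address of the packet buffer */
	uint32_t length;	/* number of bytes to transfer */
	uint32_t status;	/* ownership / completion flags */
};

/* Pretend MMIO control register that starts the DMA engine when written. */
static volatile uint32_t fake_dma_ctrl_reg;

static void start_dma(struct fake_desc *desc, uint64_t addr, uint32_t len)
{
	/* 1. Fill in the descriptor in (normally DMA-coherent) memory. */
	desc->buf_addr = addr;
	desc->length = len;
	desc->status = 1;	/* mark the descriptor as ready */

	/*
	 * 2. Release fence: the descriptor writes above must become
	 * visible before the control register write below. On a weakly
	 * ordered CPU, omitting this allows the device to observe a
	 * half-written descriptor once DMA is enabled.
	 */
	atomic_thread_fence(memory_order_release);

	/* 3. Only now enable DMA ("ring the doorbell"). */
	fake_dma_ctrl_reg = 1;
}

int main(void)
{
	struct fake_desc d = { 0 };

	start_dma(&d, 0x1000, 64);
	return 0;
}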