On Mon, 6 Dec 2021, Ricardo Martinez wrote:

> From: Haijun Liu <haijun.liu@xxxxxxxxxxxx>
>
> Cross Layer DMA (CLDMA) Hardware interface (HIF) enables the control
> path of Host-Modem data transfers. CLDMA HIF layer provides a common
> interface to the Port Layer.
>
> CLDMA manages 8 independent RX/TX physical channels with data flow
> control in HW queues. CLDMA uses ring buffers of General Packet
> Descriptors (GPD) for TX/RX. GPDs can represent multiple or single
> data buffers (DB).
>
> CLDMA HIF initializes GPD rings, registers ISR handlers for CLDMA
> interrupts, and initializes CLDMA HW registers.
>
> CLDMA TX flow:
> 1. Port Layer write
> 2. Get DB address
> 3. Configure GPD
> 4. Triggering processing via HW register write
>
> CLDMA RX flow:
> 1. CLDMA HW sends a RX "done" to host
> 2. Driver starts thread to safely read GPD
> 3. DB is sent to Port layer
> 4. Create a new buffer for GPD ring
>
> Signed-off-by: Haijun Liu <haijun.liu@xxxxxxxxxxxx>
> Signed-off-by: Chandrashekar Devegowda <chandrashekar.devegowda@xxxxxxxxx>
> Co-developed-by: Ricardo Martinez <ricardo.martinez@xxxxxxxxxxxxxxx>
> Signed-off-by: Ricardo Martinez <ricardo.martinez@xxxxxxxxxxxxxxx>

> +static struct cldma_request *t7xx_cldma_ring_step_forward(struct cldma_ring *ring,
> +							  struct cldma_request *req)
> +{
> +	if (req->entry.next == &ring->gpd_ring)
> +		return list_first_entry(&ring->gpd_ring, struct cldma_request, entry);
> +
> +	return list_next_entry(req, entry);
> +}
> +
> +static struct cldma_request *t7xx_cldma_ring_step_backward(struct cldma_ring *ring,
> +							   struct cldma_request *req)
> +{
> +	if (req->entry.prev == &ring->gpd_ring)
> +		return list_last_entry(&ring->gpd_ring, struct cldma_request, entry);
> +
> +	return list_prev_entry(req, entry);
> +}

Wouldn't these two seem generic enough to warrant adding something like
list_next/prev_entry_circular(...) to list.h?
> +static int t7xx_cldma_alloc_and_map_skb(struct cldma_ctrl *md_ctrl, struct cldma_request *req,
> +					size_t size)
> +{
> +	req->skb = __dev_alloc_skb(size, GFP_KERNEL);
> +	if (!req->skb)
> +		return -ENOMEM;
> +
> +	req->mapped_buff = dma_map_single(md_ctrl->dev, req->skb->data,
> +					  t7xx_skb_data_size(req->skb), DMA_FROM_DEVICE);

t7xx_skb_data_size() is not defined by this patch but only in a later
patch in the series.

Also, I'd prefer its name to be changed to e.g.
t7xx_skb_data_area_size() given what it calculates. IMHO, "data size"
refers to the actual frame/packet/payload and does not include the
reserves/*rooms around it, so the name is a bit misleading as is.

-- 
 i.