> -----Original Message-----
> From: dmaengine-owner@xxxxxxxxxxxxxxx
> [mailto:dmaengine-owner@xxxxxxxxxxxxxxx] On Behalf Of Wen He
> Sent: June 11, 2018 16:15
> To: Vinod <vkoul@xxxxxxxxxx>
> Cc: dmaengine@xxxxxxxxxxxxxxx; robh+dt@xxxxxxxxxx;
> devicetree@xxxxxxxxxxxxxxx; Leo Li <leoyang.li@xxxxxxx>; Jiafei Pan
> <jiafei.pan@xxxxxxx>; Jiaheng Fan <jiaheng.fan@xxxxxxx>
> Subject: RE: [v5 2/6] dmaengine: fsl-qdma: Add qDMA controller driver for
> Layerscape SoCs
>
> > -----Original Message-----
> > From: dmaengine-owner@xxxxxxxxxxxxxxx
> > [mailto:dmaengine-owner@xxxxxxxxxxxxxxx] On Behalf Of Vinod
> > Sent: June 6, 2018 0:29
> > To: Wen He <wen.he_1@xxxxxxx>
> > Cc: dmaengine@xxxxxxxxxxxxxxx; robh+dt@xxxxxxxxxx;
> > devicetree@xxxxxxxxxxxxxxx; Leo Li <leoyang.li@xxxxxxx>; Jiafei Pan
> > <jiafei.pan@xxxxxxx>; Jiaheng Fan <jiaheng.fan@xxxxxxx>
> > Subject: Re: [v5 2/6] dmaengine: fsl-qdma: Add qDMA controller driver
> > for Layerscape SoCs
> >
> > On 31-05-18, 01:58, Wen He wrote:
> >
> > > > > > > > > +static void fsl_qdma_issue_pending(struct dma_chan *chan)
> > > > > > > > > +{
> > > > > > > > > +	struct fsl_qdma_chan *fsl_chan = to_fsl_qdma_chan(chan);
> > > > > > > > > +	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
> > > > > > > > > +	unsigned long flags;
> > > > > > > > > +
> > > > > > > > > +	spin_lock_irqsave(&fsl_queue->queue_lock, flags);
> > > > > > > > > +	spin_lock(&fsl_chan->vchan.lock);
> > > > > > > > > +	if (vchan_issue_pending(&fsl_chan->vchan))
> > > > > > > > > +		fsl_qdma_enqueue_desc(fsl_chan);
> > > > > > > > > +	spin_unlock(&fsl_chan->vchan.lock);
> > > > > > > > > +	spin_unlock_irqrestore(&fsl_queue->queue_lock, flags);
> > > > > > > >
> > > > > > > > why do we need two locks, and since you are doing vchan
> > > > > > > > why should you add your own lock on top
> > > > > > >
> > > > > > > Yes, we need two locks.
> > > > > > > As you know, the QDMA supports multiple virtualized blocks
> > > > > > > for multi-core support, so we need to guard against
> > > > > > > multi-core access issues.
> > > > > >
> > > > > > but why can't you use the vchan lock for all?
> > > > >
> > > > > We can't use only the vchan lock for everything; otherwise the
> > > > > enqueue action can be interrupted.
> > > >
> > > > I think it is possible to use only the vchan lock
> > >
> > > I tried that; if I use only the vchan lock, the qDMA stops working.
> > > Do you have any other ideas?
> >
> > can you explain the scenario...
>
> All right.
> When a DMA client starts a transfer, it calls dma_async_issue_pending(),
> which invokes the device_issue_pending hook. In this driver,
> fsl_qdma_issue_pending() fills the device_issue_pending field.
>
> fsl_qdma_issue_pending() calls fsl_qdma_enqueue_desc().
>
> fsl_qdma_enqueue_desc() performs three steps:
> 1. Peek at the next descriptor to be processed.
> 2. If a next descriptor exists, insert it into a linked list (used to
>    retrieve the descriptor when its transfer completes).
> 3. If a next descriptor exists, write it to the qDMA hardware.
>
> In the above steps we touch both struct fsl_qdma_chan and struct
> fsl_qdma_queue, so we need two locks to protect them.
>
> Best Regards,
> Wen

Hi Vinod,

Do you have any other comments besides what we discussed? Can I submit the next version of the patch?

Looking forward to your reply.
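For reference, here is a minimal sketch of the enqueue path described above and of how the two locks divide the work. This is an illustration only: the structure layouts, the comp_used list, and the final register write are simplified placeholders, not the actual fsl-qdma driver code.

	/*
	 * Minimal sketch of the two-lock scheme, assuming simplified types.
	 * The caller holds both fsl_queue->queue_lock and fsl_chan->vchan.lock,
	 * as in fsl_qdma_issue_pending() quoted above.
	 */
	#include <linux/list.h>
	#include <linux/spinlock.h>
	#include "virt-dma.h"

	struct fsl_qdma_queue {
		spinlock_t queue_lock;		/* serializes the block's HW queue */
		struct list_head comp_used;	/* placeholder: in-flight descriptors */
		/* ... ring pointers shared by all channels of this block ... */
	};

	struct fsl_qdma_chan {
		struct virt_dma_chan vchan;	/* vchan.lock covers the desc lists */
		struct fsl_qdma_queue *queue;	/* shared across cores and channels */
	};

	static void fsl_qdma_enqueue_desc(struct fsl_qdma_chan *fsl_chan)
	{
		struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
		struct virt_dma_desc *vdesc;

		/* Step 1: peek at the next descriptor (needs vchan.lock). */
		vdesc = vchan_next_desc(&fsl_chan->vchan);
		if (!vdesc)
			return;

		/* Step 2: move it to the list scanned at completion time
		 * (the list is shared per block, so it needs queue_lock). */
		list_move_tail(&vdesc->node, &fsl_queue->comp_used);

		/* Step 3: program the descriptor into the qDMA block
		 * (queue_lock keeps other cores off the shared ring). */
		/* ... write the ring entry and ring the doorbell ... */
	}

The point of queue_lock in this sketch is that several channels, possibly running on different cores, share one hardware queue, so steps 2 and 3 must not interleave between cores; vchan.lock only covers the per-channel descriptor lists.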
Best Regards,
Wen

> > --
> > ~Vinod