On Fri, Mar 11, 2016 at 11:26:31AM +0100, Boris Brezillon wrote:
> On Fri, 11 Mar 2016 15:36:07 +0530
> Vinod Koul <vinod.koul@xxxxxxxxx> wrote:
>
> > On Fri, Mar 11, 2016 at 10:40:55AM +0100, Boris Brezillon wrote:
> > > On Fri, 11 Mar 2016 11:54:52 +0530
> > > Vinod Koul <vinod.koul@xxxxxxxxx> wrote:
> > >
> > > > On Wed, Mar 09, 2016 at 11:14:34AM +0100, Boris Brezillon wrote:
> > > > > > > > > > + * struct sun4i_dma_chan_config - DMA channel config
> > > > > > > > > > + *
> > > > > > > > > > + * @para: contains information about block size and time before checking
> > > > > > > > > > + *	DRQ line. This is device specific and only applicable to dedicated
> > > > > > > > > > + *	DMA channels
> > > > > > > > >
> > > > > > > > > What information? Can you elaborate? And why can't you use the existing
> > > > > > > > > dma_slave_config for this?
> > > > > > > >
> > > > > > > > Block size is related to the device FIFO size. I guess it allows the
> > > > > > > > DMA channel to launch a transfer of X bytes without having to check the
> > > > > > > > DRQ line (the line telling the DMA engine it can transfer more data
> > > > > > > > to/from the device). The wait cycles information is apparently related
> > > > > > > > to the number of clks the engine should wait before polling/checking
> > > > > > > > the DRQ line status between each block transfer. I'm not sure what it
> > > > > > > > saves to set WAIT_CYCLES() to something != 1, but in their BSP,
> > > > > > > > Allwinner tweaks that depending on the device.
> > > > > >
> > > > > > We already have block size, aka src/dst_maxburst, why not use that one?
> > > > >
> > > > > Okay, but then the question remains: "how should we choose the real burst
> > > > > size?". The block size described in the Allwinner datasheet is not the
> > > > > number of words you will transmit without being preempted by other
> > > > > master -> slave requests, it's the number of bytes that can be
> > > > > transmitted without checking the DRQ line.
> > > > > IOW, block_size = burst_size * X
> > > >
> > > > That's fine. The API expects words for this and also a width value. The
> > > > client should pass both, and for programming you should use bytes
> > > > converted from words and width.
> > >
> > > Not sure I get what you mean. Are you suggesting to add new fields to
> > > the dma_slave_config struct to describe this block concept, or should
> >
> > No
> >
> > > we pass it through ->xxx_burstsize, and try to guess the real burstsize?
> >
> > Pass the real burstsize in words
> >
> > > In the latter case, you still haven't answered my question: how should
> > > we choose the burstsize?
> >
> > From the word value, convert to bytes and program the HW:
> >
> > burst (in bytes) = burst (in words) * buswidth;
>
> Except, as already explained, the blocksize and burstsize concepts are
> not exactly the same, and the sunxi engine expects both to be defined.
> So let's take a real example to illustrate my question:
>
> For the NAND use case, here is my DMA channel setup:
>
> buswidth (or wordsize) = 4 bytes
> burstsize = 4 words (32 bytes)
> blocksize = 128 bytes
>
> Here, you can see that blocksize = 4 * burstsize, and again, burstsize
> and blocksize are not encoding the same thing. So, assuming we use
> ->src/dst_burstsize to encode the blocksize in our case, how should we
> deduce the real burstsize (which still needs to be configured in the
> engine)?

Oh, I was somehow under the impression they were the same! Then we can't
use blocksize here; please pass burst and width properly.

How is block size calculated?

-- 
~Vinod