Hi,

On Wed, Mar 7, 2018 at 9:19 PM, Doug Anderson <dianders@xxxxxxxxxxxx> wrote:
>>> DMA is hard and i2c transfers > 32 bytes are rare. Do we really gain
>>> a lot by transferring i2c commands over DMA compared to a FIFO?
>>> Enough to justify the code complexity and the set of bugs that will
>>> show up? I'm sure it will be a controversial assertion given that the
>>> code's already written, but personally I'd be supportive of ripping
>>> DMA mode out to simplify the driver. I'd be curious if anyone else
>>> agrees. To me it seems like premature optimization.
>>
>> Yes, we have multiple clients (e.g. touch, NFC) using I2C for data transfers
>> bigger than 32 bytes (some transfers are 100s of bytes). The FIFO size is
>> 32, so we can definitely avoid at least 1 interrupt when DMA mode is used
>> with data size > 32.
>
> Do those 1-2 interrupts make any real difference, though? In theory
> they really shouldn't affect the transfer rate. We should be able to
> service the interrupt plenty fast, and if we were concerned we could
> tweak the watermark code a little bit. So I guess we're worried about
> the extra CPU cycles (and power cost) to service those extra couple of
> interrupts?
>
> In theory when touch data or NFC data is coming in, we're probably
> not in a super low power state to begin with. If it's touch data we
> likely want the CPU boosted a bunch to respond to the user quickly.
> If we've got 8 cores available, all of which can run at 1 GHz or
> more, a few interrupts won't kill us. NFC data is probably not common
> enough that we need to optimize power/CPU utilization for it.
>
> So while I can believe that you do save an interrupt or two, I still
> am not convinced that those interrupts are worth a bunch of complex
> code (and a whole second code path) to save.
>
> ...also note that above you said that coming out of runtime suspend
> can take several msec.
> That seems like it dwarfs any slight
> difference in timing between a FIFO-based operation and DMA.

One last note here (sorry for not thinking of this last night): I would
also be interested in considering _only_ supporting the DMA path. Is it
somehow intrinsically bad to use the DMA flow for a 1-byte transfer? Is
there a bunch of extra overhead or power draw?

Specifically, my main point is that maintaining two separate flows (the
FIFO flow vs. the DMA flow) adds complexity, code size, and bugs. If
there's a really good reason to maintain both flows then fine, but we
should make sure it's actually giving us value before we agree to it.

-Doug
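The interrupt-count tradeoff being debated above can be put into a
back-of-envelope model. This is a sketch only: the 32-byte FIFO depth
comes from the thread, but the one-interrupt-per-watermark servicing
model and the watermark-equals-depth assumption are simplifications;
the real geni hardware and driver may behave differently.

```python
import math

FIFO_DEPTH = 32  # FIFO size cited in the thread (bytes)

def fifo_interrupts(xfer_len, watermark=FIFO_DEPTH):
    # Rough model: one interrupt per watermark's worth of data,
    # with a minimum of one interrupt per transfer.
    return max(1, math.ceil(xfer_len / watermark))

def dma_interrupts(xfer_len):
    # DMA mode: a single completion interrupt regardless of length.
    return 1

for length in (1, 32, 100, 400):
    saved = fifo_interrupts(length) - dma_interrupts(length)
    print(f"{length:4d} bytes: FIFO {fifo_interrupts(length)} irq(s), "
          f"DMA {dma_interrupts(length)} irq, saved {saved}")
```

Under this model a 100-byte transfer saves roughly 3 interrupts and
even a transfer of several hundred bytes saves on the order of a dozen,
which is the quantity being weighed against the cost of maintaining a
second code path.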