2010/9/1 Oliver Neukum <oneukum@xxxxxxx>:
> On Wednesday, 1 September 2010 11:41:06 Ming Lei wrote:
>
>> > +		if (urb->bounce_buffer) {
>> > +			if (dir == DMA_FROM_DEVICE)
>> > +				memcpy(urb->transfer_buffer,
>> > +					urb->bounce_buffer,
>> > +					urb->transfer_buffer_length);
>> > +			kfree(urb->bounce_buffer);
>>
>> dma_unmap_single is needed for bounce_buffer.
>
> Good catch.
>
>> > +		}
>> > +	} else if (urb->transfer_flags & URB_MAP_LOCAL)
>> >  		hcd_free_coherent(urb->dev->bus,
>> >  				&urb->transfer_dma,
>> >  				&urb->transfer_buffer,
>> > @@ -1373,16 +1380,39 @@ static int map_urb_for_dma(struct usb_hcd *hcd, struct urb *urb,
>> >  		else
>> >  			urb->transfer_flags |= URB_DMA_MAP_PAGE;
>> >  	} else {
>> > -		urb->transfer_dma = dma_map_single(
>> > -				hcd->self.controller,
>> > -				urb->transfer_buffer,
>> > -				urb->transfer_buffer_length,
>> > -				dir);
>> > -		if (dma_mapping_error(hcd->self.controller,
>> > -				urb->transfer_dma))
>> > -			ret = -EAGAIN;
>> > -		else
>> > -			urb->transfer_flags |= URB_DMA_MAP_SINGLE;
>> > +		void *buffer = urb->transfer_buffer;
>> > +
>> > +		if (IS_ALIGNED((unsigned long)buffer,
>> > +				1 << hcd->driver->dma_align_shift))
>> > +			urb->bounce_buffer = NULL;
>>
>> Suppose hcd->driver->dma_align_shift is zero and the HC supports
>> byte-aligned DMA. That means DMA is doable between byte-aligned memory
>> and the HC, but it does __not__ mean it is safe to do DMA mapping or
>> unmapping of byte-aligned memory on the CPU side, which may cause sync
>> issues between memory and the CPU cache.
>
> OK, so what tells us on which alignment a CPU can map?

The CPU cache line size; buffers allocated by kmalloc already respect
that constraint.

--
Lei Ming
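
To make the two points above concrete, here is a rough sketch only, using
the names introduced by this patch (urb->bounce_buffer and
hcd->driver->dma_align_shift are from the patch, not mainline), with the
error paths simplified. dma_get_cache_alignment() is the existing helper
I would use for the CPU-side constraint.

In map_urb_for_dma(), something like:

	void *buffer = urb->transfer_buffer;

	/* map the buffer directly only if it satisfies both the HC's
	 * DMA alignment and the CPU cache line alignment
	 */
	if (IS_ALIGNED((unsigned long)buffer,
			1 << hcd->driver->dma_align_shift) &&
	    IS_ALIGNED((unsigned long)buffer, dma_get_cache_alignment())) {
		urb->bounce_buffer = NULL;
	} else {
		urb->bounce_buffer = kmalloc(urb->transfer_buffer_length,
						mem_flags);
		if (!urb->bounce_buffer)
			return -ENOMEM;
		/* for OUT transfers, copy the data into the bounce buffer
		 * before mapping it
		 */
		if (dir == DMA_TO_DEVICE)
			memcpy(urb->bounce_buffer, buffer,
				urb->transfer_buffer_length);
		buffer = urb->bounce_buffer;
	}

	urb->transfer_dma = dma_map_single(hcd->self.controller,
			buffer, urb->transfer_buffer_length, dir);
	if (dma_mapping_error(hcd->self.controller, urb->transfer_dma))
		ret = -EAGAIN;
	else
		urb->transfer_flags |= URB_DMA_MAP_SINGLE;

and in the unmap path, the bounce buffer has to be unmapped before it is
freed:

	if (urb->bounce_buffer) {
		/* unmap first so the CPU sees coherent data */
		dma_unmap_single(hcd->self.controller,
				urb->transfer_dma,
				urb->transfer_buffer_length,
				dir);
		/* copy back only for IN transfers */
		if (dir == DMA_FROM_DEVICE)
			memcpy(urb->transfer_buffer,
				urb->bounce_buffer,
				urb->transfer_buffer_length);
		kfree(urb->bounce_buffer);
		urb->bounce_buffer = NULL;
	}

On cache-coherent architectures dma_get_cache_alignment() should report
only a small value, so the extra check costs nothing there.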