On Saturday 02 July 2011, Jonas Bonn wrote:
> +void *or1k_dma_alloc_coherent(struct device *dev, size_t size,
> +				 dma_addr_t *dma_handle, gfp_t flag)
> +{
> +	int order;
> +	unsigned long page, va;
> +	pgprot_t prot;
> +	struct vm_struct *area;
> +
> +	/* Only allocate page size areas. */
> +	size = PAGE_ALIGN(size);
> +	order = get_order(size);
> +
> +	page = __get_free_pages(flag, order);
> +	if (!page)
> +		return NULL;
> +
> +	/* Allocate some common virtual space to map the new pages. */
> +	area = get_vm_area(size, VM_ALLOC);
> +	if (area == NULL) {
> +		free_pages(page, order);
> +		return NULL;
> +	}
> +	va = (unsigned long)area->addr;
> +
> +	/* This gives us the real physical address of the first page. */
> +	*dma_handle = __pa(page);
> +
> +	prot = PAGE_KERNEL_NOCACHE;
> +
> +	/* This isn't so much ioremap as just simply 'remap' */
> +	if (ioremap_page_range(va, va + size, *dma_handle, prot)) {
> +		vfree(area->addr);
> +		return NULL;
> +	}
> +
> +	return (void *)va;
> +}

This will result in having conflicting mappings, one with and another
without caching, which a lot of CPU architectures don't like. Are you
sure that you can handle this on or1k? I think at the very least you
will need to flush the cache for the linear mapping, to avoid writing
back dirty cache lines over the DMA buffer.

You can save a little memory by using alloc_pages_exact instead of
__get_free_pages, which always gives you a power-of-two size.

Also, isn't get_vm_area+ioremap_page_range the same as ioremap on or1k?

In the case that ioremap_page_range fails, I think you have a memory
leak, or worse, because the area is not backed by the pages at that
moment.

	Arnd
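
For illustration, a minimal sketch of how the allocation path might look
with these suggestions folded in. This is untested and not the actual
patch; flush_dcache_range() stands in for whatever cache-flush primitive
or1k actually provides, and whether ioremap() of RAM is permitted here is
assumed, not verified:

	void *or1k_dma_alloc_coherent(struct device *dev, size_t size,
				      dma_addr_t *dma_handle, gfp_t flag)
	{
		void *page;
		void *va;

		size = PAGE_ALIGN(size);

		/* alloc_pages_exact() trims the tail of the underlying
		 * power-of-two allocation, so e.g. a 5-page request does
		 * not pin 8 pages the way __get_free_pages() would. */
		page = alloc_pages_exact(size, flag);
		if (!page)
			return NULL;

		*dma_handle = __pa(page);

		/* Write back the linear-mapping cache lines before handing
		 * out the uncached alias, so no dirty line is evicted over
		 * DMA data later.  (Hypothetical helper name.) */
		flush_dcache_range((unsigned long)page,
				   (unsigned long)page + size);

		/* ioremap() already combines get_vm_area() and
		 * ioremap_page_range() with an uncached pgprot. */
		va = ioremap(*dma_handle, size);
		if (!va) {
			/* Unwind fully: the pages are not reachable
			 * through the vm area, so free them here. */
			free_pages_exact(page, size);
			return NULL;
		}

		return va;
	}

Note that this still leaves the conflicting cached/uncached aliases in
place; it only addresses the flush, the rounding waste, and the leak in
the failure path.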