There is a lot of code in drivers like:

	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)))
		dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));

Actually this is wrong: if dma_set_mask_and_coherent(64) fails,
dma_set_mask_and_coherent(32) will fail for the same reason, and
dma_set_mask_and_coherent(64) never returns failure in the first place.

According to the definition of dma_set_mask(), the mask indicates the width
of address that the device's DMA can access. If it can access 64-bit
addresses, it can inherently access 32-bit addresses as well, so a driver
only needs to set the largest address width it supports.

See the code fragment below:

int dma_set_mask(struct device *dev, u64 mask)
{
	mask = (dma_addr_t)mask;

	if (!dev->dma_mask || !dma_supported(dev, mask))
		return -EIO;

	arch_dma_set_mask(dev, mask);
	*dev->dma_mask = mask;

	return 0;
}

dma_supported() calls either dma_direct_supported() or the IOMMU's
dma_supported() callback.

int dma_direct_supported(struct device *dev, u64 mask)
{
	u64 min_mask = (max_pfn - 1) << PAGE_SHIFT;

	/*
	 * Because 32-bit DMA masks are so common we expect every architecture
	 * to be able to satisfy them - either by not supporting more physical
	 * memory, or by providing a ZONE_DMA32.  If neither is the case, the
	 * architecture needs to use an IOMMU instead of the direct mapping.
	 */
	if (mask >= DMA_BIT_MASK(32))
		return 1;
	...
}

The IOMMU's dma_supported() callback actually checks that the device meets
the IOMMU's minimum required DMA capability. An example:

static int sba_dma_supported(struct device *dev, u64 mask)
{
	...
	/*
	 * check if mask is >= than the current max IO Virt Address
	 * The max IO Virt address will *always* < 30 bits.
	 */
	return((int)(mask >= (ioc->ibase - 1 +
			(ioc->pdir_size / sizeof(u64) * IOVP_SIZE) )));
	...
}

A return value of 1 means supported; 0 means unsupported.
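Putting this together, a driver should make a single call with the largest
mask its hardware supports and check the result once. The fragment below is
only an illustrative sketch, not taken from the kernel sources or from the
diff further down; mydev_probe() and MYDEV_DMA_BITS are made-up names:

/*
 * Illustrative sketch only. Needs <linux/dma-mapping.h> and
 * <linux/platform_device.h>.
 */
#define MYDEV_DMA_BITS	64	/* largest width the hardware can drive */

static int mydev_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	int ret;

	/* Set only the largest supported mask and check it once. */
	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(MYDEV_DMA_BITS));
	if (ret) {
		dev_warn(dev, "no suitable DMA available\n");
		return ret;
	}

	return 0;
}

If the hardware can only drive a narrower bus (for example 24 bits), the same
single call is used with that smaller width, and the error check then matters.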
Correct the document to make this clearer and provide correct sample code.

Signed-off-by: Frank Li <Frank.Li@xxxxxxx>
---
Notes:
    Change from v1 to v2:
    - fixed typo, reviewed by Randy Dunlap

 Documentation/core-api/dma-api-howto.rst | 24 ++++++++++++++++++++++--
 1 file changed, 22 insertions(+), 2 deletions(-)

diff --git a/Documentation/core-api/dma-api-howto.rst b/Documentation/core-api/dma-api-howto.rst
index e8a55f9d61dbc..5f6a7d86b6bc2 100644
--- a/Documentation/core-api/dma-api-howto.rst
+++ b/Documentation/core-api/dma-api-howto.rst
@@ -203,13 +203,33 @@ setting the DMA mask fails.  In this manner, if a user of your driver reports
 that performance is bad or that the device is not even detected, you can ask
 them for the kernel messages to find out exactly why.
 
-The standard 64-bit addressing device would do something like this::
+The 24-bit addressing device would do something like this::
 
-	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
+	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(24))) {
 		dev_warn(dev, "mydev: No suitable DMA available\n");
 		goto ignore_this_device;
 	}
 
+The standard 64-bit addressing device would do something like this::
+
+	dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
+
+dma_set_mask_and_coherent() never returns failure for DMA_BIT_MASK(64). A
+typical piece of erroneous code looks like::
+
+	/* Wrong code */
+	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)))
+		dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
+
+dma_set_mask_and_coherent() never returns failure for masks >= 32 bits.
+So the recommended code looks like::
+
+	/* Recommended code */
+	if (support_64bit)
+		dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
+	else
+		dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
+
 If the device only supports 32-bit addressing for descriptors in the
 coherent allocations, but supports full 64-bits for streaming mappings it
 would look like this::
-- 
2.34.1