On Sun, Dec 26, 2021 at 3:36 PM Gabriel L. Somlo <gsomlo@xxxxxxxxx> wrote:
> On Sun, Dec 26, 2021 at 03:13:21PM +0200, Andy Shevchenko wrote:
> > On Sun, Dec 26, 2021 at 1:45 PM Gabriel L. Somlo <gsomlo@xxxxxxxxx> wrote:
> > > On Sat, Dec 25, 2021 at 06:43:22PM +0200, Andy Shevchenko wrote:
> > > > On Wed, Dec 15, 2021 at 10:00 PM Gabriel Somlo <gsomlo@xxxxxxxxx> wrote:

...

> > > > > +#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
> > > >
> > > > Why under ifdeffery?
> > >
> > > Because I only want to do it on 64-bit capable architectures.
> > >
> > > The alternative would be to call
> > >
> > >     dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
> > >
> > > on *all* architectures, but ignore the returned error (-EIO,
> > > presumably, on architectures that only support 32-bit DMA).
> >
> > I don't understand why you are supposed to ignore errors and why you
> > expect to get such.
>
> If I call `dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));`
> on a machine where `CONFIG_ARCH_DMA_ADDR_T_64BIT` is *not* set, I
> expect an error. The implicit default
> (per Documentation/core-api/dma-api-howto.rst) is DMA_BIT_MASK(32).
> I'm working under the impression that on machines with
> CONFIG_ARCH_DMA_ADDR_T_64BIT I should increase that to DMA_BIT_MASK(64).
>
> So if I don't #ifdef it, that call will fail on machines supporting
> only 32 bits.
>
> What am I missing?

This thread: https://lkml.org/lkml/2021/6/7/398 ?

-- 
With Best Regards,
Andy Shevchenko