On Thu, Feb 27, 2020 at 9:48 AM Catalin Marinas <catalin.marinas@xxxxxxx> wrote:
>
> On Thu, Feb 27, 2020 at 10:56:12AM +0100, Vlastimil Babka wrote:
> > On 2/22/20 12:42 AM, Jonathan Richardson wrote:
> > > As of the 5.5 kernel I see boot errors in cma. It reserves 1G and then can't
> > > activate an area. I added some prints. It's trying to activate the DMA
> > > zone. This causes a driver to fail allocating a dma pool later on. The
> > > coherent pool is the default 256MB. If I reduce cma from 1G to 512M
> > > then it only tries to activate the DMA32 zone. I assume there was not enough cma
> > > memory for the DMA zone? Are there any configuration changes required due
> > > to the ZONE_DMA and ZONE_DMA32 changes? I've attached my boot log.
> >
> > I think this question is better for the ARM guys. CC'd
>
> With commit 1a8e1cef7603 ("arm64: use both ZONE_DMA and ZONE_DMA32"), we
> limited ZONE_DMA to 1G to accommodate the RPi4 requirements. ZONE_DMA32
> spans to the end of the 32-bit space. So with a CMA region that goes
> across the 1st GB, you'd hit this problem.
>
> The dma_contiguous_reserve() call in arm64 uses ZONE_DMA32 as the upper
> limit under the assumption that you don't need CMA in ZONE_DMA. But this
> one doesn't have a lower limit.
>
> What platform is this and how do you request the CMA size (cmdline)?

This is stingray (arch/arm64/boot/dts/broadcom/stingray/bcm958742k.dts).
cma is specified on the cmdline as "cma=1G". The full boot log was in the
attachment.

> Do you use a fixed base? Also, do you want the CMA in ZONE_DMA or
> ZONE_DMA32?

I'm not sure which zone we want it in. I'm assuming ZONE_DMA32. Before that
change there was only one zone. If I disable ZONE_DMA I don't see the error,
though I haven't tested anything. I don't really understand why ZONE_DMA is
enabled by default if it's a requirement for the RPi only. I'm assuming it
should work as is with both zones enabled, but it's not clear how cma spans
the two zones. Was there a reason for having both zones enabled as the
default?

> > > [    0.000000] cma: Reserved 1024 MiB at 0x00000000a0000000
> > > ...
> > > [    0.390448] Activating cma name: reserved, zone name: DMA
> > > [    0.396564] pfn = 0xa0000
> > > [    0.399522] cma->count = 262144
> > > [    0.406244] pfn failed on = c0000
> > > [    0.410002] cma: CMA area reserved could not be activated
> > >
> > > static int __init cma_activate_area(struct cma *cma)
> > > {
> > > ...
> > > 	printk("Activating cma name: %s, zone name: %s\n", cma->name, zone->name);
> > > 	printk("pfn = 0x%lx\n", pfn);
> > > 	printk("cma->count = %lu\n", cma->count);
> > >
> > > 	do {
> > > 		unsigned j;
> > >
> > > 		base_pfn = pfn;
> > > 		for (j = pageblock_nr_pages; j; --j, pfn++) {
> > > 			WARN_ON_ONCE(!pfn_valid(pfn));
> > > 			/*
> > > 			 * alloc_contig_range requires the pfn range
> > > 			 * specified to be in the same zone. Make this
> > > 			 * simple by forcing the entire CMA resv range
> > > 			 * to be in the same zone.
> > > 			 */
> > > 			if (page_zone(pfn_to_page(pfn)) != zone) {
> > > 				printk("pfn failed on = 0x%lx\n", pfn);
> > > 				goto not_in_zone;
>
> So I guess it's this test that fails as the CMA now spans ZONE_DMA and
> ZONE_DMA32.

Yes, it fails here.
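
For reference, this is roughly how I understand the two limits interact in
5.5. The snippet below is a paraphrase of arch/arm64/mm/init.c from memory,
not a verbatim quote, so treat the exact helper names and constants as
approximate:

	/* Paraphrased sketch of arch/arm64/mm/init.c (v5.5), details approximate. */

	if (IS_ENABLED(CONFIG_ZONE_DMA))
		/* ZONE_DMA is capped at 30 bits (1 GiB) for the RPi4 */
		arm64_dma_phys_limit = max_zone_phys(ARM64_ZONE_DMA_BITS);

	if (IS_ENABLED(CONFIG_ZONE_DMA32))
		/* ZONE_DMA32 spans up to the end of the 32-bit space (4 GiB) */
		arm64_dma32_phys_limit = max_zone_phys(32);

	/*
	 * CMA is only bounded from above, by the ZONE_DMA32 limit. Nothing
	 * keeps the reservation clear of the 1 GiB ZONE_DMA boundary, so with
	 * cma=1G the region placed at 0xa0000000 straddles both zones and the
	 * page_zone() != zone check in cma_activate_area() then fails.
	 */
	dma_contiguous_reserve(arm64_dma32_phys_limit);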