On 11/9/18 12:57 PM, Nicolas Boichat wrote:
> On Fri, Nov 9, 2018 at 6:43 PM Vlastimil Babka <vbabka@xxxxxxx> wrote:
>> Also I'm probably missing the point of this all. In patch 3 you use
>> __get_dma32_pages() thus __get_free_pages(__GFP_DMA32), which uses
>> alloc_pages, thus the page allocator directly, and there's no slab
>> caches involved.
>
> __get_dma32_pages fixes level 1 page allocations in the patch 3.
>
> This change fixes level 2 page allocations
> (kmem_cache_zalloc(data->l2_tables, gfp | GFP_DMA)), by transparently
> remapping GFP_DMA to an underlying ZONE_DMA32.
>
> The alternative would be to create a new SLAB_CACHE_DMA32 when
> CONFIG_ZONE_DMA32 is defined, but then I'm concerned that the callers
> would need to choose between the 2 (GFP_DMA or GFP_DMA32...), and also
> need to use some ifdefs (but maybe that's not a valid concern?).
>
>> It makes little sense to involve slab for page table
>> allocations anyway, as those tend to be aligned to a page size (or
>> high-order page size). So what am I missing?
>
> Level 2 tables are ARM_V7S_TABLE_SIZE(2) => 1kb, so we'd waste 3kb if
> we allocated a full page.

Oh, I see. Well, I think indeed the most transparent approach would be to
support SLAB_CACHE_DMA32. The callers of kmem_cache_zalloc() would then
not need to add anything special to gfp, as that's stored internally upon
kmem_cache_create(). Of course SLAB_BUG_MASK would no longer have to
treat __GFP_DMA32 as unexpected. It would be unexpected when passed to
kmalloc(), which doesn't have special dma32 caches, but for a cache
explicitly created to allocate from ZONE_DMA32, I don't see why not. I'm
somewhat surprised that there wouldn't have been a need for this earlier,
so maybe I'm still missing something.

> Thanks,
>
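For concreteness, the SLAB_CACHE_DMA32 approach discussed above might look roughly like this from a caller's perspective. This is a kernel-context sketch, not runnable standalone; SLAB_CACHE_DMA32 is the flag being proposed in this thread (not an existing API here), and the cache name and size constant below are illustrative, not taken from the actual driver:

```
/* Sketch only: SLAB_CACHE_DMA32 is the flag proposed in this thread.
 * Names and sizes are illustrative; in the real driver the level-2
 * table size comes from ARM_V7S_TABLE_SIZE(2), stated to be 1 KB. */
#include <linux/slab.h>

#define L2_TABLE_SIZE 1024	/* hypothetical stand-in for ARM_V7S_TABLE_SIZE(2) */

static struct kmem_cache *l2_tables;

static int l2_tables_init(void)
{
	/*
	 * The ZONE_DMA32 constraint is recorded at cache creation time,
	 * so later kmem_cache_zalloc(l2_tables, gfp) callers would not
	 * need to OR in __GFP_DMA32 themselves.
	 */
	l2_tables = kmem_cache_create("l2_tables_dma32",
				      L2_TABLE_SIZE, L2_TABLE_SIZE,
				      SLAB_CACHE_DMA32, NULL);
	return l2_tables ? 0 : -ENOMEM;
}
```

This also illustrates why slab (rather than the page allocator) is attractive here: four 1 KB tables can share one 4 KB page instead of each wasting 3 KB.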