On 2023/6/16 23:01, Alexander Duyck wrote:

...

>>> Actually that would be a really good direction for this patch set to
>>> look at going into. Rather than having us always allocate a "page" it
>>> would make sense for most drivers to allocate a 4K fragment or the
>>> like in the case that the base page size is larger than 4K. That might
>>> be a good use case to justify doing away with the standard page pool
>>> page and look at making them all fragmented.
>>
>> I am not sure if I understand the above, isn't the frag API able to
>> support allocating a 4K fragment when base page size is larger than
>> 4K before or after this patch? what more do we need to do?
>
> I'm not talking about the frag API. I am talking about the
> non-fragmented case. Right now standard page_pool will allocate an
> order 0 page. So if a driver is using just pages expecting 4K pages
> that isn't true on these ARM or PowerPC systems where the page size is
> larger than 4K.
>
> For a bit of historical reference, on igb/ixgbe they had a known issue
> where they would potentially run a system out of memory when page size
> was larger than 4K. I had originally implemented things with just the
> refcounting hack and at the time it worked great on systems with 4K
> pages. However on a PowerPC it would trigger OOM errors because they
> could run with 64K pages. To fix that I started adding all the
> PAGE_SIZE checks in the driver and moved over to a striping model for
> those that would free the page when it reached the end in order to
> force it to free the page and make better use of the available memory.

Doesn't the page_pool_alloc() or page_pool_alloc_frag() API also solve the
above problem? I think what you really want is another layer of
subdividing support in the driver on top of the above, right?
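
To be more specific, below is a rough sketch of the kind of usage I have in
mind (not from this patch set; the rx_buf struct and the 4K size are only
illustrative): the driver asks the frag API for a 4K chunk, so a 64K-page
arm64/PowerPC system subdivides each pool page into 16 rx buffers instead of
spending a whole page per frame:

#include <linux/sizes.h>
#include <net/page_pool.h>

/* illustrative per-slot bookkeeping a driver might keep */
struct rx_buf {
	struct page *page;
	unsigned int offset;
};

/* pool is assumed to have been created with PP_FLAG_PAGE_FRAG set */
static int rx_buf_alloc(struct page_pool *pool, struct rx_buf *buf)
{
	buf->page = page_pool_alloc_frag(pool, &buf->offset, SZ_4K,
					 GFP_ATOMIC);
	if (!buf->page)
		return -ENOMEM;

	/* dma addr = page_pool_get_dma_addr(buf->page) + buf->offset */
	return 0;
}

On a 4K-page system the above degenerates to one buffer per page, so the
driver does not need any PAGE_SIZE checks of its own.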