On Fri, Jul 27, 2018 at 09:23:30AM -0400, Tony Battersby wrote:
> On 07/26/2018 08:07 PM, Matthew Wilcox wrote:
> > If you're up for more major surgery, then I think we can put all the
> > information currently stored in dma_page into struct page.  Something
> > like this:
> >
> > +++ b/include/linux/mm_types.h
> > @@ -152,6 +152,12 @@ struct page {
> >  			unsigned long hmm_data;
> >  			unsigned long _zd_pad_1;	/* uses mapping */
> >  		};
> > +		struct {	/* dma_pool pages */
> > +			struct list_head dma_list;
> > +			unsigned short in_use;
> > +			unsigned short offset;
> > +			dma_addr_t dma;
> > +		};
> >
> >  		/** @rcu_head: You can use this to free a page by RCU. */
> >  		struct rcu_head rcu_head;
> >
> > page_list -> dma_list
> > vaddr goes away (page_to_virt() exists)
> > dma -> dma
> > in_use and offset shrink from 4 bytes to 2.
> >
> > Some 32-bit systems have a 64-bit dma_addr_t, and on those systems,
> > this will be 8 + 2 + 2 + 8 = 20 bytes.  On 64-bit systems, it'll be
> > 16 + 2 + 2 + 4 bytes of padding + 8 = 32 bytes (we have 40 available).
> >
>
> offset at least needs more bits, since allocations can be multi-page.

Ah, rats.  That means we have to use the mapcount union too:

+++ b/include/linux/mm_types.h
@@ -152,6 +152,11 @@ struct page {
 			unsigned long hmm_data;
 			unsigned long _zd_pad_1;	/* uses mapping */
 		};
+		struct {	/* dma_pool pages */
+			struct list_head dma_list;
+			unsigned int dma_in_use;
+			dma_addr_t dma;
+		};
 
 		/** @rcu_head: You can use this to free a page by RCU. */
 		struct rcu_head rcu_head;
@@ -174,6 +179,7 @@ struct page {
 
 		unsigned int active;		/* SLAB */
 		int units;			/* SLOB */
+		unsigned int dma_offset;	/* dma_pool */
 	};
 
 	/* Usage count. *DO NOT USE DIRECTLY*. See page_ref.h */

> See the following from mpt3sas:
>
> cat /sys/devices/pci0000:80/0000:80:07.0/0000:85:00.0/pools
> (manually cleaned up column alignment)
> poolinfo - 0.1
> reply_post_free_array pool        1      21      192      1
> reply_free pool                   1       1    41728      1
> reply pool                        1       1  1335296      1
> sense pool                        1       1   970272      1
> chain pool                   373959  386048      128  12064
> reply_post_free pool             12      12   166528     12
>                                              ^size^

Wow, that's a pretty weird way to use the dmapool.  It'd be more efficient
to just call dma_alloc_coherent() directly.
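
Something along these lines (untested sketch; "pdev", the function names
and the hardcoded size are placeholders, not actual mpt3sas code):

#include <linux/dma-mapping.h>
#include <linux/pci.h>

/* One dma_alloc_coherent() replaces dma_pool_create() + dma_pool_alloc()
 * when the pool only ever hands out a single block.
 */
static void *alloc_reply_buffer(struct pci_dev *pdev, size_t sz,
				dma_addr_t *dma)
{
	return dma_alloc_coherent(&pdev->dev, sz, dma, GFP_KERNEL);
}

static void free_reply_buffer(struct pci_dev *pdev, size_t sz,
			      void *buf, dma_addr_t dma)
{
	dma_free_coherent(&pdev->dev, sz, buf, dma);
}

i.e. buf = alloc_reply_buffer(pdev, 1335296, &buf_dma) for the reply
pool above, instead of creating a dmapool that only ever holds one
1335296-byte block.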
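
Going back to the struct page layout above, page allocation in
mm/dmapool.c might end up looking roughly like this (hypothetical
sketch against the proposed fields, which don't exist yet; locking and
free-list initialisation omitted, and it assumes the coherent memory
returned has a struct page we can get at via virt_to_page()):

static struct page *pool_alloc_page(struct dma_pool *pool, gfp_t mem_flags)
{
	dma_addr_t dma;
	void *vaddr;
	struct page *page;

	vaddr = dma_alloc_coherent(pool->dev, pool->allocation, &dma,
				   mem_flags);
	if (!vaddr)
		return NULL;

	page = virt_to_page(vaddr);	/* vaddr itself no longer stored */
	page->dma = dma;		/* proposed field */
	page->dma_in_use = 0;		/* proposed field */
	page->dma_offset = 0;		/* proposed field, mapcount union */
	list_add(&page->dma_list, &pool->page_list);
	return page;
}

pool->dev, pool->allocation and pool->page_list are the existing
struct dma_pool members; the separate struct dma_page goes away.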