On Thu, May 03, 2018 at 10:03:10AM -0500, Christopher Lameter wrote:
> On Wed, 2 May 2018, Matthew Wilcox wrote:
> > > > Option 2:
> > > > +	union {
> > > > +		unsigned long counters;
> > > > +		struct {
> > > > +			unsigned inuse:16;
> > > > +			unsigned objects:15;
> > > > +			unsigned frozen:1;
> > > > +		};
> > > > +	};
> > > >
> > > > Pro: Expresses exactly what we do.
> > > > Con: Back to five levels of indentation in struct page
>
> I like that better. Improves readability of the code using struct page.
> I think that is more important than the actual definition of struct
> page.

OK.  Do you want the conversion of slub to using slub_freelist and
slub_list as part of this patch series as well, then?

The end result looks like this, btw:

	struct {	/* slub */
		union {
			struct list_head slub_list;
			struct {
				struct page *next;	/* Next partial */
#ifdef CONFIG_64BIT
				int pages;	/* Nr of pages left */
				int pobjects;	/* Approx # of objects */
#else
				short int pages;
				short int pobjects;
#endif
			};
		};
		struct kmem_cache *slub_cache;	/* shared with slab */
		/* Double-word boundary */
		void *slub_freelist;		/* shared with slab */
		union {
			unsigned long counters;
			struct {
				unsigned inuse:16;
				unsigned objects:15;
				unsigned frozen:1;
			};
		};
	};

Oh, and what do you want to do about cache_from_obj() in mm/slab.h?
That relies on slab_cache being at the same location in struct page as
slub_cache.  Maybe something like this?

	page = virt_to_head_page(x);
#ifdef CONFIG_SLUB
	cachep = page->slub_cache;
#else
	cachep = page->slab_cache;
#endif
	if (slab_equal_or_root(cachep, s))
		return cachep;

> Given the overloaded overload situation this will require some deep
> thought for newbies anyways. ;-)

Yes, it's all quite entangled.