On 12/18/2013 04:41 PM, Andrew Morton wrote:
> So your scary patch series which shrinks struct page while retaining
> the cmpxchg_double() might reclaim most of this loss?

Well, this is cool.  Except for 1 case out of 14 (1024 bytes with the
alloc all / free all loops), my patched kernel either outperforms or
matches both of the existing cases.

To recap, we have two workloads: essentially the time to free an "old"
kmalloc() which is not cache-warm (mode=0), and the time to free one
which is warm because it was just allocated (mode=1).  Each workload is
run against three kernel configurations:

1. The default today: SLUB with a 64-byte 'struct page', using CMPXCHG16
2. The same kernel source as (1), but with SLUB's compile-time options
   changed to disable CMPXCHG16 and to not align 'struct page'
3. A patched kernel which internally aligns the SLUB data so that we can
   both have an unaligned 56-byte 'struct page' and keep the CMPXCHG16
   optimization

> https://docs.google.com/spreadsheet/ccc?key=0AgUCVXtr5IwedDNXb1FLNEFqVHdSNDF6YktYZTBndEE&usp=sharing

I'll respin the patches a bit and send out another version with some
small updates.
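P.S. For anyone wondering why the alignment matters at all: cmpxchg16b
faults if its operand is not 16-byte aligned, which is why SLUB today
pads 'struct page' out to an aligned 64 bytes.  Below is a minimal
user-space sketch of the idea, *not* the patch itself: the 'slub_data'
name and field layout are made up for illustration, and gcc's
__sync_bool_compare_and_swap() stands in for the kernel's
cmpxchg_double().  Compile on x86-64 with gcc -mcx16:

	#include <stdint.h>
	#include <stdio.h>

	/*
	 * Hypothetical stand-in for the freelist/counters pair that
	 * SLUB updates atomically.  Only this pair needs the 16-byte
	 * alignment that cmpxchg16b demands -- not the whole
	 * containing structure.
	 */
	struct slub_data {
		void	 *freelist;	/* first free object          */
		uint64_t  counters;	/* packed use counts, frozen  */
	} __attribute__((aligned(16)));

	static int cmpxchg_double_sketch(struct slub_data *s,
					 void *old_f, uint64_t old_c,
					 void *new_f, uint64_t new_c)
	{
		/* little-endian x86-64: freelist is the low 64 bits */
		__int128 old = ((__int128)old_c << 64) | (uintptr_t)old_f;
		__int128 new = ((__int128)new_c << 64) | (uintptr_t)new_f;

		/* gcc emits lock cmpxchg16b for this with -mcx16 */
		return __sync_bool_compare_and_swap((__int128 *)s, old, new);
	}

	int main(void)
	{
		struct slub_data s = { .freelist = NULL, .counters = 0 };

		/* swap in a new freelist head and counter atomically */
		if (cmpxchg_double_sketch(&s, NULL, 0, &s, 1))
			printf("swapped: counters=%llu\n",
			       (unsigned long long)s.counters);
		return 0;
	}

The point of the sketch: the aligned(16) only has to cover the data
that cmpxchg16b actually touches, which is what lets the patched
kernel keep the optimization while 'struct page' itself shrinks to an
unaligned 56 bytes.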