To prepare for implementing a byte-sized index for managing the freelist
of a slab, we should restrict the number of objects in a slab to 256 or
fewer, since a byte can only represent 256 different values. Setting the
object size to a value greater than or equal to the newly introduced
SLAB_MIN_SIZE ensures that the number of objects in a slab is 256 or
fewer for a slab consisting of a single page.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>

diff --git a/mm/slab.c b/mm/slab.c
index ec197b9..3cee122 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -157,6 +157,10 @@
 #define ARCH_KMALLOC_FLAGS SLAB_HWCACHE_ALIGN
 #endif
 
+/* We use a byte sized index to manage the freelist of a slab */
+#define NR_PER_BYTE (1 << BITS_PER_BYTE)
+#define SLAB_MIN_SIZE (PAGE_SIZE >> BITS_PER_BYTE)
+
 /*
  * true if a page was allocated from pfmemalloc reserves for network-based
  * swap
@@ -2016,6 +2020,10 @@ static size_t calculate_slab_order(struct kmem_cache *cachep,
 		if (!num)
 			continue;
 
+		/* We can't handle more than NR_PER_BYTE objects */
+		if (num > NR_PER_BYTE)
+			break;
+
 		if (flags & CFLGS_OFF_SLAB) {
 			/*
 			 * Max number of objs-per-slab for caches which
@@ -2258,6 +2266,12 @@ __kmem_cache_create (struct kmem_cache *cachep, unsigned long flags)
 		flags |= CFLGS_OFF_SLAB;
 
 	size = ALIGN(size, cachep->align);
+	/*
+	 * We want to restrict the number of objects in a slab to 256 or
+	 * fewer in order to manage the freelist via byte sized indexes.
+	 */
+	if (size < SLAB_MIN_SIZE)
+		size = ALIGN(SLAB_MIN_SIZE, cachep->align);
 
 	left_over = calculate_slab_order(cachep, size, cachep->align, flags);
-- 
1.7.9.5
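
A note for reviewers: the following is a minimal standalone userspace
sketch of the arithmetic behind SLAB_MIN_SIZE, not kernel code. It assumes
PAGE_SIZE is 4096 and BITS_PER_BYTE is 8, and it ignores the on-slab
management overhead that calculate_slab_order() also accounts for. The
point is that any object size of at least PAGE_SIZE >> BITS_PER_BYTE bytes
keeps the per-page object count at or below 256, so every freelist entry
fits in a single byte.

/*
 * Standalone illustration (assumed values, not the kernel implementation):
 * with a 4096-byte page, objects of at least SLAB_MIN_SIZE (4096 >> 8 = 16)
 * bytes yield at most 256 objects per 1-page slab, so an unsigned char is
 * wide enough for every freelist index.
 */
#include <stdio.h>

#define BITS_PER_BYTE	8
#define PAGE_SIZE	4096
#define NR_PER_BYTE	(1 << BITS_PER_BYTE)		/* 256 */
#define SLAB_MIN_SIZE	(PAGE_SIZE >> BITS_PER_BYTE)	/* 16 */

int main(void)
{
	unsigned int sizes[] = { 8, 16, 32, 64, 128 };
	unsigned int i;

	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
		unsigned int size = sizes[i];
		/* objects per 1-page slab, ignoring management overhead */
		unsigned int num = PAGE_SIZE / size;

		printf("size %4u: %4u objects -> %s\n", size, num,
		       num <= NR_PER_BYTE ? "fits in a byte index" :
					    "needs the SLAB_MIN_SIZE bump");
	}
	return 0;
}

Only the size-8 case exceeds 256 objects, which is exactly the case the
new SLAB_MIN_SIZE check in __kmem_cache_create() rounds up.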