On Thu, 3 Dec 2015, Jesper Dangaard Brouer wrote:

> +void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p)

orig_s? That's strange.

> +{
> +	struct kmem_cache *s;

s?

> +	size_t i;
> +
> +	local_irq_disable();
> +	for (i = 0; i < size; i++) {
> +		void *objp = p[i];
> +
> +		s = cache_from_obj(orig_s, objp);

Does this support freeing objects from a set of different caches?
Otherwise there needs to be a check in here that the objects come from
the same cache.

> +
> +		debug_check_no_locks_freed(objp, s->object_size);
> +		if (!(s->flags & SLAB_DEBUG_OBJECTS))
> +			debug_check_no_obj_freed(objp, s->object_size);
> +
> +		__cache_free(s, objp, _RET_IP_);

The function could be further optimized if you take the code from
__cache_free() and move the invariant parts outside of the loop. The
alien cache check, for example, and the pfmemalloc checking may be
moved out.

The call to virt_to_head_page() may also be avoided if an object is on
the same page as the last one. So you may be able to avoid function
calls on the fastpath in the inner loop, which may accelerate frees
significantly.
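
If the intent is that all objects come from the one cache the caller
passed in, a guard along these lines might do. Only a sketch: with
CONFIG_MEMCG_KMEM, cache_from_obj() can legitimately hand back a
per-memcg child cache, so a plain pointer compare may be too strict
in that configuration:

		s = cache_from_obj(orig_s, objp);
		/* Sketch: all objects are expected to belong to the
		 * cache the caller passed in. */
		WARN_ON_ONCE(s != orig_s);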
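
And an untested sketch of the page-reuse idea, keeping the structure
of the patch. The bounds check is mine, I am assuming the function
ends with local_irq_enable() as in the rest of the patch, and
__cache_free() would still need its internals split up before the
alien/pfmemalloc hoisting pays off:

void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p)
{
	struct page *page = NULL;	/* slab head page of the previous object */
	struct kmem_cache *s = NULL;
	size_t i;

	local_irq_disable();
	for (i = 0; i < size; i++) {
		void *objp = p[i];

		/*
		 * Only do the virt_to_head_page()/cache lookup when the
		 * object does not sit on the same slab page as the
		 * previous one. SLAB never allocates slab pages from
		 * highmem, so page_address() is always valid here.
		 */
		if (!page || objp < page_address(page) ||
		    objp >= page_address(page) +
				(PAGE_SIZE << compound_order(page))) {
			page = virt_to_head_page(objp);
			s = cache_from_obj(orig_s, objp);
		}

		debug_check_no_locks_freed(objp, s->object_size);
		if (!(s->flags & SLAB_DEBUG_OBJECTS))
			debug_check_no_obj_freed(objp, s->object_size);

		__cache_free(s, objp, _RET_IP_);
	}
	local_irq_enable();
}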