On 3/4/20 3:26 AM, David Rientjes wrote:
> On Wed, 4 Mar 2020, Jann Horn wrote:
> > Hi!
> >
> > FYI, I noticed that if you do something like the following as root,
> > the system blows up pretty quickly with error messages about stuff
> > like corrupt freelist pointers, because SLUB actually allows root to
> > force a page order that is smaller than what is required to store a
> > single object:
> >
> >   echo 0 > /sys/kernel/slab/task_struct/order
> >
> > The other SLUB debugging options, like red_zone, also look kind of
> > suspicious with regard to races (either racing with other writes to
> > the SLUB debugging options, or with object allocations).
> Thanks for the report, Jann. To address the most immediate issue,
> allowing a forced order smaller than what is required to store a
> single object, I think we'd need something like this.
>
> I can propose it as a formal patch if nobody has any alternative
> suggestions?
> ---
>  mm/slub.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3598,7 +3598,7 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
>  	 */
>  	size = ALIGN(size, s->align);
>  	s->size = size;
> -	if (forced_order >= 0)
> +	if (forced_order >= slab_order(size, 1, MAX_ORDER, 1))
>  		order = forced_order;
>  	else
>  		order = calculate_order(size);
Reviewed-by: Pekka Enberg <penberg@xxxxxx>