On Mon, 1 Oct 2007 13:55:29 -0700 (PDT) Christoph Lameter <clameter@xxxxxxx> wrote:

> On Sat, 29 Sep 2007, Andrew Morton wrote:
>
> > > atomic allocations. And with SLUB using higher order pages, atomic !0
> > > order allocations will be very very common.
> >
> > Oh OK.
> >
> > I thought we'd already fixed slub so that it didn't do that.  Maybe that
> > fix is in -mm but I don't think so.
> >
> > Trying to do atomic order-1 allocations on behalf of arbitrary slab caches
> > just won't fly - this is a significant degradation in kernel reliability,
> > as you've very easily demonstrated.
>
> Ummm... SLAB also does order 1 allocations. We have always done them.
>
> See mm/slab.c
>
> /*
>  * Do not go above this order unless 0 objects fit into the slab.
>  */
> #define BREAK_GFP_ORDER_HI	1
> #define BREAK_GFP_ORDER_LO	0
> static int slab_break_gfp_order = BREAK_GFP_ORDER_LO;

Do slab and slub use the same underlying page size for each slab?

Single data point: the CONFIG_SLAB boxes which I have access to here are
using order-0 for radix_tree_node, so they won't be failing in the way in
which Peter's machine is.

I've never ever before seen reports of page allocation failures in the
radix-tree node allocation code, and that's the bottom line.  This is just
a drop-dead must-fix show-stopping bug.

We cannot rely upon atomic order-1 allocations succeeding, so we cannot use
them for radix-tree nodes.  Nor for lots of other things which we have no
chance of identifying.

Peter, is this bug -mm only, or is 2.6.23 similarly failing?
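
A minimal userspace sketch, assuming the slabinfo 2.x column layout, of how
the "single data point" above can be checked on a CONFIG_SLAB box: the
pagesperslab column of /proc/slabinfo says how many pages back each slab of
a given cache.  On a SLUB kernel the order should likewise be readable from
/sys/kernel/slab/radix_tree_node/order.

/*
 * Illustrative sketch only: report how many pages back each
 * radix_tree_node slab, using the pagesperslab column of /proc/slabinfo
 * (slabinfo 2.x format assumed).
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/slabinfo", "r");
	char line[512];

	if (!f) {
		perror("/proc/slabinfo");
		return 1;
	}

	while (fgets(line, sizeof(line), f)) {
		char name[64];
		unsigned long active, num, objsize, objperslab, pagesperslab;

		/* slabinfo 2.x: name <active_objs> <num_objs> <objsize>
		 *               <objperslab> <pagesperslab> ...         */
		if (sscanf(line, "%63s %lu %lu %lu %lu %lu",
			   name, &active, &num, &objsize,
			   &objperslab, &pagesperslab) == 6 &&
		    strcmp(name, "radix_tree_node") == 0)
			printf("%s: %lu page(s) per slab\n",
			       name, pagesperslab);
	}

	fclose(f);
	return 0;
}

If pagesperslab prints as 1 the cache is order-0; a value of 2 means the
order-1 allocations being discussed here.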