Andrea Arcangeli <andrea@xxxxxxx> writes:

> On Sat, Sep 15, 2007 at 10:14:44PM +0200, Goswin von Brederlow wrote:
>> - Userspace allocates a lot of memory in those slabs.
>
> If with slabs you mean slab/slub, I can't follow, there has never been
> a single byte of userland memory allocated there since ever the slab
> existed in linux.

This and other comments in your reply show me that you completely
misunderstood what I was talking about. Look at

http://www.skynet.ie/~mel/anti-frag/2007-02-28/page_type_distribution.jpg

The red dots (pinned) are dentries, page tables, kernel stacks and
other kernel stuff, right? The green dots (movable) are mostly
userspace pages mapped there, right?

What I was referring to is that because movable objects (green dots)
aren't moved out of a mixed group (the boxes) when some unmovable
object needs space, all the groups become mixed over time. That means
the unmovable objects are spread out over all the RAM, and the buddy
system can't recombine regions when unmovable objects free them: there
will nearly always be some movable object left in the other buddy. The
system of having unmovable and movable groups breaks down and becomes
useless.

I'm assuming here that we want the possibility of larger-order pages
for unmovable objects (large contiguous regions for DMA, for example)
than the smallest order user space (or any movable object) gets. If
mmap() still works on 4k page boundaries, then those mappings will
fragment all regions into 4k chunks in the worst case. Obviously, if
userspace has a minimum order of 64k chunks, then it will never break
any region into pieces smaller than 64k and will never cause a
fragmentation catastrophe.

I know that is very roughly your approach (make order 0 bigger), and I
like it, but it has some limits as to how big you can make it. I don't
think my system with 1GB of RAM would work so well with 2MB order-0
pages: that is only 512 pages in total, and every small allocation
would eat a whole 2MB. But I wasn't referring to that, only to the
picture.

MfG
        Goswin
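
P.S.: To make the recombination failure concrete, here is a toy
user-space model of the argument (not kernel code; the 10% pinned
ratio, the random placement policy and all constants are made-up
numbers of mine). It pins the same number of unmovable 4k pages twice
over a simulated 1GB of RAM, once scattered on arbitrary 4k boundaries
and once confined to whole 64k groups, and counts how many 64k groups
the buddy system could still hand out intact.

/* Toy model of fragmentation by unmovable pages -- NOT kernel code.
 * 1GB of RAM as 4k pages, grouped into 64k "buddy" groups of 16
 * pages.  A 64k group is only reusable as a whole if no page in it
 * is pinned.
 */
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE       4096
#define GROUP_PAGES     16                      /* 64k = 16 x 4k      */
#define RAM_PAGES       (1024 * 1024 * 1024UL / PAGE_SIZE)
#define NGROUPS         (RAM_PAGES / GROUP_PAGES)

static unsigned char pinned[RAM_PAGES];         /* 1 = unmovable page */

static unsigned long free_groups(void)
{
        unsigned long g, p, n = 0;

        for (g = 0; g < NGROUPS; g++) {
                for (p = 0; p < GROUP_PAGES; p++)
                        if (pinned[g * GROUP_PAGES + p])
                                break;
                if (p == GROUP_PAGES)
                        n++;                    /* whole 64k group free */
        }
        return n;
}

int main(void)
{
        unsigned long i, unmovable = RAM_PAGES / 10;    /* 10% pinned */

        /* Worst case: unmovable pages land on random 4k boundaries,
         * the way mixed groups spread them over all of RAM. */
        srandom(1);
        for (i = 0; i < unmovable; i++)
                pinned[random() % RAM_PAGES] = 1;
        printf("scattered 4k placement: %lu of %lu 64k groups free\n",
               free_groups(), NGROUPS);

        /* Packed case: the same number of pages confined to as few
         * whole 64k groups as possible, as a 64k minimum user order
         * would allow. */
        for (i = 0; i < RAM_PAGES; i++)
                pinned[i] = (i < unmovable);
        printf("packed 64k placement:   %lu of %lu 64k groups free\n",
               free_groups(), NGROUPS);

        return 0;
}

Both runs pin exactly the same number of pages, but the scattered run
poisons the large majority of the 64k groups while the packed run
leaves almost all of them recombinable. That difference is all my
point about the picture was.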