On Wed, 21 Mar 2018, Christopher Lameter wrote:

> On Wed, 21 Mar 2018, Mikulas Patocka wrote:
>
> > > > F.e. you could optimize the allocations > 2x PAGE_SIZE so that they do not
> > > > allocate powers of two pages. It would be relatively easy to make
> > > > kmalloc_large round the allocation to the next page size and then allocate
> > > > N consecutive pages via alloc_pages_exact() and free the remaining unused
> > > > pages, or some such thing.
> >
> > alloc_pages_exact() has O(n*log n) complexity with respect to the number
> > of requested pages. It would have to be reworked and optimized if it were
> > to be used for the dm-bufio cache. (It could be optimized down to O(log n)
> > if it didn't split the compound page into a lot of separate pages, but split
> > it into power-of-two clusters instead.)
>
> Well then a memory pool of page allocator requests may address that issue?
>
> Have a look at include/linux/mempool.h.

I know the mempool interface. A mempool can keep a number of reserved
objects around for the case when system memory is exhausted. A mempool
doesn't deal with object allocation at all - it has to be hooked to an
existing object allocator (a slab cache, kmalloc, alloc_pages, or some
custom allocator provided via the mempool_alloc_t and mempool_free_t
methods).

> > > I don't know if that's a good idea. That will contribute to fragmentation
> > > if the allocation is held onto for a short-to-medium length of time.
> > > If the allocation is for a very long period of time then those pages
> > > would have been unavailable anyway, but if the user of the tail pages
> > > holds them beyond the lifetime of the large allocation, then this is
> > > probably a bad tradeoff to make.
>
> Fragmentation is sadly a big issue. You could create a mempool on bootup
> or early after boot to ensure that you have a sufficient number of
> contiguous pages available.

The dm-bufio driver already deals correctly with this - it preallocates
several buffers with vmalloc when the dm-bufio cache is created. During
operation, if a high-order allocation fails, the dm-bufio subsystem throws
away some existing buffer and reuses the already allocated chunk of memory
for the buffer that needs to be created.

So fragmentation is not an issue for this use case - dm-bufio can make
forward progress even if memory is totally exhausted.

> > The problem with alloc_pages_exact() is that it exhausts all the
> > high-order pages and leaves many free low-order pages around. So you'll
> > end up in a system with a lot of free memory, but with all high-order
> > pages missing. As there would be a lot of free memory, the kswapd thread
> > would not be woken up to free some high-order pages.
>
> I think that logic is properly balanced and will take into account pages
> that have been removed from the LRU expiration logic.
>
> > I think that using slab with high order is better, because it at least
> > doesn't leave many low-order pages behind.
>
> Any request to the slab via kmalloc with a size > 2x page size will simply
> lead to a page allocator request. You have the same issue. If you want to
> rely on the slab allocator buffering large segments for you then a mempool
> will also solve the issue for you and you have more control over the pool.

A mempool solves nothing here, because it needs a backing allocator - and
the question is what that backing allocator should be.
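Just to make that point concrete, here is a minimal sketch (not dm-bufio
code) of what hooking a mempool to a page-allocator backend looks like;
the my_* names and the two constants are made up for illustration:

/*
 * A mempool is only a reserve on top of a backing allocator.  Here the
 * backing allocator is the page allocator at a fixed order, wired up
 * via the mempool_alloc_t/mempool_free_t methods.
 */
#include <linux/mempool.h>
#include <linux/gfp.h>
#include <linux/errno.h>

#define MY_BUFFER_ORDER		4	/* 64 KiB buffers with 4 KiB pages */
#define MY_RESERVED_BUFFERS	8	/* elements the pool keeps back */

static void *my_alloc(gfp_t gfp_mask, void *pool_data)
{
	return (void *)__get_free_pages(gfp_mask,
					(unsigned int)(long)pool_data);
}

static void my_free(void *element, void *pool_data)
{
	free_pages((unsigned long)element, (unsigned int)(long)pool_data);
}

static mempool_t *my_pool;

static int my_pool_create(void)
{
	my_pool = mempool_create(MY_RESERVED_BUFFERS, my_alloc, my_free,
				 (void *)(long)MY_BUFFER_ORDER);
	return my_pool ? 0 : -ENOMEM;
}

This is more or less what mempool_create_page_pool() in
include/linux/mempool.h already does with alloc_pages() - the mempool
only adds the reserve; the choice of backing allocator and order is
still left to the caller, which is exactly the open question here.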
> > BTW. it could be possible to open the file
> > "/sys/kernel/slab/<cache>/order" from the dm-bufio kernel driver and write
> > the requested value there, but it seems very dirty. It would be better to
> > have a kernel interface for that.
>
> Hehehe you could directly write to the kmem_cache structure and increase
> the order. AFAICT this would be dirty but work.
>
> But still the increased page order will get you into trouble with
> fragmentation when the system runs for a long time. That is the reason we
> try to limit the allocation sizes coming from the slab allocator.

It won't - see above: if the high-order allocation fails, dm-bufio just
reuses some existing buffer.

Mikulas
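PS: For reference, the fallback I keep referring to looks roughly like
this - a sketch of the pattern only, not the actual dm-bufio code; the
my_* names and the single preallocated reserve are made-up stand-ins for
the real bookkeeping:

#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/gfp.h>
#include <linux/errno.h>

/* Made-up client structure; the real one tracks many buffers. */
struct my_client {
	size_t block_size;	/* may well be > 2 * PAGE_SIZE */
	void *reserved_buf;	/* vmalloc'ed when the client is created */
};

static int my_client_init(struct my_client *c, size_t block_size)
{
	c->block_size = block_size;
	/* Preallocate so that forward progress never depends on kmalloc. */
	c->reserved_buf = __vmalloc(block_size, GFP_NOIO, PAGE_KERNEL);
	return c->reserved_buf ? 0 : -ENOMEM;
}

static void *my_alloc_buffer(struct my_client *c)
{
	void *data;

	/*
	 * Opportunistic high-order allocation; __GFP_NORETRY and
	 * __GFP_NOWARN because failure is expected and handled below.
	 */
	data = kmalloc(c->block_size,
		       GFP_NOIO | __GFP_NORETRY | __GFP_NOWARN);
	if (data)
		return data;

	/*
	 * Fragmentation or exhaustion: fall back to memory that is
	 * already ours (in dm-bufio, "throw away some existing buffer
	 * and reuse it"; here just the single preallocated reserve).
	 */
	return c->reserved_buf;
}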