> So, this patch open codes the kvmalloc() in the commit path to have
> the above described behaviour. The result is we more than halve the
> CPU time spent doing kvmalloc() in this path, and transaction commits
> with 64kB objects in them more than double. i.e. we get a ~5x
> reduction in CPU usage per costly-sized kvmalloc() invocation, and
> the profile looks like this:
>
>  - 37.60% xlog_cil_commit
>       16.01% memcpy_erms
>     - 8.45% __kmalloc
>        - 8.04% kmalloc_order_trace
>           - 8.03% kmalloc_order
>              - 7.93% alloc_pages
>                 - 7.90% __alloc_pages
>                    - 4.05% __alloc_pages_slowpath.constprop.0
>                       - 2.18% get_page_from_freelist
>                       - 1.77% wake_all_kswapds
>                          ....
>                          - __wake_up_common_lock
>                             - 0.94% _raw_spin_lock_irqsave
>                    - 3.72% get_page_from_freelist
>                       - 2.43% _raw_spin_lock_irqsave
>     - 5.72% vmalloc
>        - 5.72% __vmalloc_node_range
>           - 4.81% __get_vm_area_node.constprop.0
>              - 3.26% alloc_vmap_area
>                 - 2.52% _raw_spin_lock
>              - 1.46% _raw_spin_lock
>             0.56% __alloc_pages_bulk
>     - 4.66% kvfree
>        - 3.25% vfree

OK, I see. I tried fs_mark in different configurations. For example:

<snip>
time fs_mark -D 10000 -S0 -n 100000 -s 0 -L 32 -d ./scratch/0 -d ./scratch/1 -d ./scratch/2 \
    -d ./scratch/3 -d ./scratch/4 -d ./scratch/5 -d ./scratch/6 -d ./scratch/7 -d ./scratch/8 \
    -d ./scratch/9 -d ./scratch/10 -d ./scratch/11 -d ./scratch/12 -d ./scratch/13 \
    -d ./scratch/14 -d ./scratch/15 -t 64 -F
<snip>

But I did not manage to trigger the fallback to vmalloc() in
xlog_cil_commit(). I think I should reduce the amount of memory on my
KVM guest and repeat the tests!
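
For reference, the open-coded kvmalloc() behaviour described in the
quoted text presumably looks something like the sketch below: try
kmalloc() first with direct reclaim, warnings and retries suppressed,
so a costly high-order allocation fails fast instead of entering the
page allocator slow path, and only then fall back to vmalloc(). This is
a minimal illustration, not the actual patch; the function name
xlog_cil_kvmalloc() and the exact GFP flag choice are my assumptions.

<snip>
#include <linux/gfp.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

/*
 * Minimal sketch of an open-coded kvmalloc() for the CIL commit path.
 * Name and flags are illustrative assumptions, not the actual patch.
 */
static inline void *
xlog_cil_kvmalloc(size_t size)
{
	gfp_t	flags = GFP_KERNEL;
	void	*p;

	/*
	 * No direct reclaim, no warnings, no retries: a costly
	 * high-order kmalloc() fails fast instead of burning CPU in
	 * __alloc_pages_slowpath(), as seen in the profile above.
	 */
	flags &= ~__GFP_DIRECT_RECLAIM;
	flags |= __GFP_NOWARN | __GFP_NORETRY;

	do {
		p = kmalloc(size, flags);
		if (!p)
			p = vmalloc(size);
	} while (!p);

	return p;
}
<snip>

Note that in this pattern the vmalloc() fallback only runs when the
cheap kmalloc() attempt fails, which rarely happens while the machine
has plenty of free memory; that is consistent with shrinking the
guest's RAM to force the fallback during the fs_mark runs.

--
Uladzislau Rezki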