From: jun qian <qianjun.kernel@xxxxxxxxx>

In our project, many of the delays seen by our services come from
fork(), so we investigated why fork is time-consuming. Tracing fork
with ftrace's function_graph tracer showed that vm_normal_page() is
called tens of thousands of times during a fork, that each call takes
only a few nanoseconds, and that vm_normal_page() is not an inline
function. Making it inline should therefore reduce the call overhead.
I ran the following experiment, using bpftrace to measure the fork
time:

bpftrace -e 'kprobe:_do_fork /comm=="redis-server"/ {@st=nsecs;} \
kretprobe:_do_fork /comm=="redis-server"/ {printf("the fork time \
is %d us\n", (nsecs-@st)/1000)}'

non-inline vm_normal_page:
result:
the fork time is 40743 us
the fork time is 41746 us
the fork time is 41336 us
the fork time is 42417 us
the fork time is 40612 us
the fork time is 40930 us
the fork time is 41910 us

inline vm_normal_page:
result:
the fork time is 39276 us
the fork time is 38974 us
the fork time is 39436 us
the fork time is 38815 us
the fork time is 39878 us
the fork time is 39176 us

In the same test environment, this gives a 3% to 4% performance
improvement.

Note: the test data above is from kernel 4.18.0-193.6.3.el8_2.v1.1.x86_64,
because our product runs the redis server on that kernel version. For
test data against the latest kernel, please refer to the v1 patch.

Change in the size of vmlinux:

             inline          non-inline      diff
vmlinux size 9709248 bytes   9709824 bytes   -576 bytes

Signed-off-by: jun qian <qianjun.kernel@xxxxxxxxx>
---
 mm/memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index eeae590e526a..6ade9748d425 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -592,7 +592,7 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
  * PFNMAP mappings in order to support COWable mappings.
  *
  */
-struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
+inline struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 			    pte_t pte)
 {
 	unsigned long pfn = pte_pfn(pte);
-- 
2.18.2