Re: [PATCH -next] mm: usercopy: add a debugfs interface to bypass the vmalloc check.

We have implemented host-guest communication based on a TUN device
using XSK (AF_XDP) [1]. The hardware is a Kunpeng 920 machine (arm64),
and the operating system runs a 6.6 LTS kernel. The call stack
collected while profiling the hotspot is as follows:

-  100.00%     0.00%  vhost-12384  [unknown]      [k] 0000000000000000
   - ret_from_fork
      - 99.99% vhost_task_fn
         - 99.98% 0xffffdc59f619876c
            - 98.99% handle_rx_kick
               - 98.94% handle_rx
                  - 94.92% tun_recvmsg
                     - 94.76% tun_do_read
                        - 94.62% tun_put_user_xdp_zc
                           - 63.53% __check_object_size
                              - 63.49% __check_object_size.part.0
                                   find_vmap_area
                           - 30.02% _copy_to_iter
                                __arch_copy_to_user
                  - 2.27% get_rx_bufs
                     - 2.12% vhost_get_vq_desc
                          1.49% __arch_copy_from_user
                  - 0.89% peek_head_len
                       0.54% xsk_tx_peek_desc
                  - 0.68% vhost_add_used_and_signal_n
                     - 0.53% eventfd_signal
                          eventfd_signal_mask
            - 0.94% handle_tx_kick
               - 0.94% handle_tx
                  - handle_tx_copy
                     - 0.59% vhost_tx_batch.constprop.0
                          0.52% tun_sendmsg

Most of the overhead is concentrated in find_vmap_area(), which is reached via
__check_object_size() on every copy to user space.

[1]: https://www.kernel.org/doc/html/latest/networking/af_xdp.html
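To show where that cost comes from: as far as I can tell, the hot path is the
vmalloc branch that commit 0aef499f3172 added to check_heap_object() in
mm/usercopy.c, so every copy_to_iter()/copy_from_iter() on a vmalloc/vmap
address ends up doing a find_vmap_area() lookup. Below is a simplified sketch
of that path (not the literal 6.6 source, only the shape of the check):

/*
 * Simplified sketch of the check added by commit 0aef499f3172
 * ("mm/usercopy: Detect vmalloc overruns") -- see mm/usercopy.c for the
 * real code; this only shows where find_vmap_area() gets called.
 */
static void check_heap_object(const void *ptr, unsigned long n, bool to_user)
{
        unsigned long addr = (unsigned long)ptr;

        if (is_vmalloc_addr(ptr)) {
                /*
                 * Looks up the vmap area covering 'addr'; this searches the
                 * vmap-area red-black tree and runs once per usercopy, i.e.
                 * once per packet in the tun_put_user_xdp_zc() ->
                 * copy_to_iter() path in the profile above.
                 */
                struct vmap_area *area = find_vmap_area(addr);

                if (!area)
                        usercopy_abort("vmalloc", "no area", to_user, 0, n);

                if (n > area->va_end - addr)
                        usercopy_abort("vmalloc", NULL, to_user,
                                       addr - area->va_start, n);
                return;
        }

        /* ... slab and page-span checks follow for other addresses ... */
}

If I read the 6.6 code correctly, find_vmap_area() also serializes on the
global vmap_area_lock, so with several vhost workers copying in parallel the
lock contention you asked about is plausible on top of the per-copy tree walk.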

On 2024/12/3 12:11, Matthew Wilcox wrote:
> On Tue, Dec 03, 2024 at 10:31:59AM +0800, Ze Zuo wrote:
>> The commit 0aef499f3172 ("mm/usercopy: Detect vmalloc overruns") introduced
>> a vmalloc check for usercopy. However, in subsystems like networking, when
>> memory allocated using vmalloc or vmap is subsequently copied using
>> functions like copy_to_iter/copy_from_iter, the check is triggered. This
>> adds overhead in the copy path, such as the cost of searching the
>> red-black tree, which increases the performance burden.
>>
>> We found that after merging this patch, network bandwidth performance in
>> the XDP scenario dropped significantly, from 25 Gbits/sec to 8 Gbits/sec,
>> with hardened_usercopy enabled by default.

What is "the XDP scenario", exactly?  Are these large or small packets?
What's taking the time in find_vmap_area()?  Is it lock contention?




