Re: Should I expect lower bandwidth when using IBV_QPT_RAW_PACKET and steering rule?

Hey Terry,

On 3/17/20 2:07 PM, Terry Toole wrote:
> Hi,
> I am trying to understand whether I should expect lower bandwidth from
> a setup using IBV_QPT_RAW_PACKET and a steering rule that matches on
> the source and destination NICs' MAC addresses. I am trying out an
> example program from the Mellanox community website which uses these
> features. For the code example, please see
> 
> https://community.mellanox.com/s/article/raw-ethernet-programming--basic-introduction---code-example
> 
> In my test setup, I am using two Mellanox MCX515A-CCAT NICs, which have
> a maximum bandwidth of 100 Gbps. They are installed in two Linux
> computers connected by a single cable (no switch or router). Previously,
> when I was running tests like ib_send_bw or ib_write_bw with UD, UC, or
> RC transport mode, I was seeing bandwidths of ~90 Gbps or higher. With
> the example from "Raw Ethernet Programming: Basic Introduction", after
> adding some code to count packets and measure time, I am seeing
> bandwidths around 10 Gbps. I have been playing with different
> parameters such as MTU, packet size, and IBV_SEND_INLINE. I am
> wondering if the reduction in bandwidth is due to the packet
> filtering being done by the steering

While steering does require extra work from the HW (if a packet has to
traverse too many steering rules before being directed to a TIR/RQ, that
can affect the BW), a single steering rule shouldn't have a noticeable
impact.
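For reference, the single MAC rule from the example you linked boils
down to something like the sketch below (untested here; it assumes the
QP was already created with IBV_QPT_RAW_PACKET and that the rule should
be attached on port 1):

```c
#include <string.h>
#include <infiniband/verbs.h>

/* Attach one steering rule that delivers frames with a matching
 * destination MAC to this raw packet QP.  The flow attr header is
 * followed in memory by its specs, so they are laid out together. */
static struct ibv_flow *attach_dmac_rule(struct ibv_qp *qp,
                                         const uint8_t dmac[6])
{
	struct {
		struct ibv_flow_attr     attr;
		struct ibv_flow_spec_eth eth;
	} __attribute__((packed)) rule;

	memset(&rule, 0, sizeof(rule));

	rule.attr.type         = IBV_FLOW_ATTR_NORMAL;
	rule.attr.size         = sizeof(rule);
	rule.attr.num_of_specs = 1;
	rule.attr.port         = 1;   /* assumption: single-port HCA */

	rule.eth.type = IBV_FLOW_SPEC_ETH;
	rule.eth.size = sizeof(rule.eth);
	memcpy(rule.eth.val.dst_mac, dmac, 6);
	memset(rule.eth.mask.dst_mac, 0xff, 6);  /* exact match on DMAC */

	return ibv_create_flow(qp, &rule.attr);  /* NULL on failure */
}
```

One rule like this is a single lookup for the HW per packet; it is not
what's costing you ~80 Gbps.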

> rule? Or should I expect to see bandwidths similar to my earlier tests
> (~90 Gbps), with the problem being a lack of optimization in my setup?

More likely it's a lack of optimizations in the test program you've used.
When sending traffic with a RAW_ETHERNET QP there are a lot of
optimizations that can/should be applied: posting work requests in
batches, sending small frames inline, and requesting a completion only
every Nth send instead of per packet.
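As a sketch of the send-side ideas (untested; `SIGNAL_BATCH` and the
pre-registered frame buffer are assumptions, and in a real loop you'd
also drain the CQ with ibv_poll_cq before the SQ fills):

```c
#include <stdint.h>
#include <infiniband/verbs.h>

#define SIGNAL_BATCH 64  /* assumption: request a CQE every 64 sends */

/* Post one Ethernet frame on a raw packet QP.  IBV_SEND_INLINE copies
 * the payload into the WQE itself (no DMA read of the buffer, and the
 * buffer is reusable immediately); signaling only every Nth WQE cuts
 * per-packet completion overhead. */
static int post_frame(struct ibv_qp *qp, void *frame, uint32_t len,
                      uint64_t seq)
{
	struct ibv_sge sge = {
		.addr   = (uintptr_t)frame,
		.length = len,
		.lkey   = 0,   /* lkey is ignored for inline sends */
	};
	struct ibv_send_wr wr = {
		.wr_id      = seq,
		.sg_list    = &sge,
		.num_sge    = 1,
		.opcode     = IBV_WR_SEND,
		.send_flags = IBV_SEND_INLINE |
		              ((seq % SIGNAL_BATCH == 0) ?
		               IBV_SEND_SIGNALED : 0),
	};
	struct ibv_send_wr *bad_wr;

	return ibv_post_send(qp, &wr, &bad_wr);
}
```

Inline sends are only valid up to the max_inline_data you requested at
QP creation, so check that cap before relying on the flag.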

If you would like to look at a highly optimized userspace datapath:
https://github.com/DPDK/dpdk/blob/master/drivers/net/mlx5/mlx5_rxtx.c

With the right code you should have no issue reaching line-rate speeds
with raw_ethernet QPs.

Mark
 
> 
> Thanks for any help you can provide.
> 
