Re: Should I expect lower bandwidth when using IBV_QPT_RAW_PACKET and steering rule?

Hey Dimitris,

On 3/20/2020 08:40, Dimitris Dimitropoulos wrote:
> Hi Mark,
> 
> Just a clarification: when you say reach line rates speeds, you mean
> with no packet drops ?

Yep*, you can have a look at the following PDF for some numbers:
https://fast.dpdk.org/doc/perf/DPDK_19_08_Mellanox_NIC_performance_report.pdf

*I just want to point out that in the real world (as opposed to the tests/applications used for the measurements
in the PDF) there is usually a lot more processing inside the application itself, which can
make it harder to handle a large number of packets and still saturate the wire.
Once you take packet sizes, MTU, server configuration/hardware, etc. into account, it can get a bit tricky,
but it is not impossible.
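
For reference, the basic ingredients the thread subject asks about look roughly like this
(an untested sketch: PD/CQ creation, the QP state transitions to RTR/RTS and all error
cleanup are omitted, and the MAC address is just a placeholder) - create a QP of type
IBV_QPT_RAW_PACKET and attach a steering rule with ibv_create_flow():

/* Minimal untested sketch: a raw packet QP plus a unicast DMAC steering
 * rule through the standard verbs flow API. */
#include <stddef.h>
#include <infiniband/verbs.h>

static struct ibv_qp *raw_qp_with_rule(struct ibv_pd *pd, struct ibv_cq *cq)
{
        struct ibv_qp_init_attr qp_attr = {
                .send_cq = cq,
                .recv_cq = cq,
                .cap     = { .max_recv_wr = 512, .max_recv_sge = 1 },
                .qp_type = IBV_QPT_RAW_PACKET,
        };
        struct ibv_qp *qp = ibv_create_qp(pd, &qp_attr);
        if (!qp)
                return NULL;

        /* Steer frames whose destination MAC matches into this QP. */
        struct {
                struct ibv_flow_attr     attr;
                struct ibv_flow_spec_eth eth;
        } flow = {
                .attr = {
                        .type         = IBV_FLOW_ATTR_NORMAL,
                        .size         = sizeof(flow),
                        .num_of_specs = 1,
                        .port         = 1,
                },
                .eth = {
                        .type = IBV_FLOW_SPEC_ETH,
                        .size = sizeof(struct ibv_flow_spec_eth),
                        .val.dst_mac  = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 },
                        .mask.dst_mac = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
                },
        };
        if (!ibv_create_flow(qp, &flow.attr)) {
                ibv_destroy_qp(qp);
                return NULL;
        }
        return qp;
}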

Just to add that we currently support inserting millions of flow steering rules at a very
high update rate, but you have to use the DV APIs to achieve that:
https://github.com/linux-rdma/rdma-core/blob/master/providers/mlx5/man/mlx5dv_dr_flow.3.md
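
For a rough idea of how that API is wired together, a sketch based only on the man page
above (the match buffer follows the device-specific fte_match_param layout and is left as
a zeroed placeholder; MATCH_SZ and the criteria bit are assumptions to check against the
PRM / your rdma-core headers, and error handling is omitted):

#include <stdlib.h>
#include <infiniband/mlx5dv.h>

#define MATCH_SZ 0x200 /* placeholder for sizeof(fte_match_param) */

static struct mlx5dv_dr_rule *steer_to_qp(struct ibv_context *ctx,
                                          struct ibv_qp *qp)
{
        struct mlx5dv_dr_domain *dmn =
                mlx5dv_dr_domain_create(ctx, MLX5DV_DR_DOMAIN_TYPE_NIC_RX);
        struct mlx5dv_dr_table *tbl = mlx5dv_dr_table_create(dmn, 0);

        struct mlx5dv_flow_match_parameters *mask =
                calloc(1, sizeof(*mask) + MATCH_SZ);
        mask->match_sz = MATCH_SZ;
        /* Fill mask->match_buf with the fields you want to match on. */

        /* match_criteria_enable = 1 assumed to mean "outer headers". */
        struct mlx5dv_dr_matcher *matcher =
                mlx5dv_dr_matcher_create(tbl, 0, 1, mask);

        struct mlx5dv_flow_match_parameters *value =
                calloc(1, sizeof(*value) + MATCH_SZ);
        value->match_sz = MATCH_SZ;
        /* Fill value->match_buf with the values to match against. */

        struct mlx5dv_dr_action *dest = mlx5dv_dr_action_create_dest_ibv_qp(qp);
        struct mlx5dv_dr_action *actions[] = { dest };

        /* In practice this last call is repeated per rule against the same
         * matcher, which is where the millions of rules and the high update
         * rate come in. */
        return mlx5dv_dr_rule_create(matcher, value, 1, actions);
}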

Mark

> 
> Thanks
> Dimitris
> 
> On Tue, Mar 17, 2020 at 4:40 PM Mark Bloch <markb@xxxxxxxxxxxx> wrote:
>> If you would like to have a look at a highly optimized datapath from userspace:
>> https://github.com/DPDK/dpdk/blob/master/drivers/net/mlx5/mlx5_rxtx.c
>>
>> With the right code you should have no issue reaching line rate speeds with raw_ethernet QPs
>>
>> Mark


