Re: PF_RING on Mellanox Card

Here is the NIC card model (lspci output below). As you may know, since it is a VPI card, we can configure it to run in Ethernet mode; a rough sketch of the port-type switch follows the lspci output.

19:00.0 InfiniBand: Mellanox Technologies MT25418 [ConnectX VPI PCIe 2.0 2.5GT/s - IB DDR / 10GigE] (rev a0)
        Subsystem: Mellanox Technologies MT25418 [ConnectX VPI PCIe 2.0 2.5GT/s - IB DDR / 10GigE]
        Physical Slot: 7
        Flags: bus master, fast devsel, latency 0, IRQ 33
        Memory at fde00000 (64-bit, non-prefetchable) [size=1M]
        Memory at f7000000 (64-bit, prefetchable) [size=8M]
        Capabilities: [40] Power Management version 3
        Capabilities: [48] Vital Product Data
        Capabilities: [9c] MSI-X: Enable+ Count=256 Masked-
        Capabilities: [60] Express Endpoint, MSI 00
        Capabilities: [100] Alternative Routing-ID Interpretation (ARI)
        Kernel driver in use: mlx4_core
        Kernel modules: mlx4_core
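
For completeness, this is roughly how the port type can be switched to Ethernet with the plain mlx4_core driver, assuming your kernel exposes the mlx4_port sysfs attributes for this device (the PCI address is the one from the lspci output above; the resulting interface name will depend on your setup):

    # show the current type of port 1 (ib, eth or auto)
    cat /sys/bus/pci/devices/0000:19:00.0/mlx4_port1

    # switch port 1 to Ethernet mode; mlx4_en should then create an ethX interface
    echo eth > /sys/bus/pci/devices/0000:19:00.0/mlx4_port1

    # check that the new Ethernet interface appeared
    ip link show

I believe the Mellanox OFED package also ships a connectx_port_config script that does the same thing interactively.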



On Mon, Nov 4, 2013 at 4:23 AM, <Valdis.Kletnieks@xxxxxx> wrote:
On Sat, 02 Nov 2013 00:48:04 -0700, Robert Clove said:

> I want to know: has anyone used PF_RING on a Mellanox NIC card?

You do realize that Mellanox has literally *dozens* of models of NIC cards, right?

What card exactly?

_______________________________________________
Kernelnewbies mailing list
Kernelnewbies@xxxxxxxxxxxxxxxxx
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies
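
For what it's worth, once the port is in Ethernet mode, PF_RING should see the mlx4_en interface like any other Ethernet NIC. A minimal smoke test with the pfcount example app from the PF_RING sources might look like this (the interface name eth2 is just a placeholder for whatever mlx4_en creates on your box):

    # load the PF_RING kernel module built from the PF_RING tree
    insmod ./kernel/pf_ring.ko

    # capture and count packets on the Mellanox Ethernet interface
    ./userland/examples/pfcount -i eth2

As far as I know there is no DNA/zero-copy PF_RING driver for mlx4, so this goes through the standard kernel packet path, i.e. it works but without the zero-copy speedups.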
