Hi Eli,

At 3:56 PM on 5/10/2017, Eli Cohen wrote:
> Hi Joao,
>
> Since mlx5-supported devices can do DMA with 64-bit addresses, we start
> like this. This fails on your system since it is not capable of handling
> 64-bit addresses, so we fall back to 32-bit addresses, which then
> succeeds. However, what you are experiencing is that the driver executed
> a command and the firmware supposedly did not respond. Most likely the
> firmware did respond, but the driver could not see the response due to
> problems related to DMA addresses on your system.
>
> Long story short, there is a problem in your system. To investigate this
> further you might need heavy tools such as a PCIe analyzer.

I have new data for you. My colleague is using a Mellanox MT27800 Family
(ConnectX-5) with firmware version 16.19.148, and it does not hang, but it
fails when allocating the CPU affinity mask:

mlx5_core 0000:01:00.0: enabling device (0000 -> 0002)
mlx5_core 0000:01:00.0: Warning: couldn't set 64-bit PCI DMA mask
mlx5_core 0000:01:00.0: Warning: couldn't set 64-bit consistent PCI DMA mask
mlx5_core 0000:01:00.0: firmware version: 16.19.148
mlx5_core 0000:01:00.0: Port module event: module 0, Cable unplugged
mlx5_core 0000:01:00.0: mlx5_irq_set_affinity_hint:628:(pid 1): irq_set_affinity_hint failed,irq 0x0032
mlx5_core 0000:01:00.0: Failed to alloc affinity hint cpumask
mlx5_core 0000:01:00.0: mlx5_load_one failed with error code -22
mlx5_core: probe of 0000:01:00.0 failed with error -22

Mine is a Mellanox MT28800 Family (ConnectX-5) with firmware version
16.19.21102, so I think I may have a firmware problem. The affinity failure
might be because my Root Complex driver does not support affinity at the
moment.

Thanks,
Joao
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
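[Editor's note: the 64-bit -> 32-bit fallback Eli describes, which produces the two
"couldn't set 64-bit ... DMA mask" warnings in the log above, follows the standard
DMA-mask pattern in a PCI driver's probe path. A minimal kernel-style sketch,
modeled loosely on the mlx5 driver's set_dma_caps() but illustrative rather than
the exact upstream source:]

```c
#include <linux/pci.h>
#include <linux/dma-mapping.h>

/* Sketch of the 64-bit -> 32-bit DMA mask fallback: try the wide mask
 * first, warn and retry with 32 bits if the platform rejects it, and
 * only abort the probe if even 32-bit DMA cannot be set up. */
static int set_dma_caps(struct pci_dev *pdev)
{
	int err;

	/* Streaming DMA mask: 64-bit preferred, 32-bit fallback. */
	err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
	if (err) {
		dev_warn(&pdev->dev,
			 "Warning: couldn't set 64-bit PCI DMA mask\n");
		err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
		if (err)
			return err;	/* no usable DMA at all */
	}

	/* Coherent (consistent) DMA mask: same fallback logic. */
	err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64));
	if (err) {
		dev_warn(&pdev->dev,
			 "Warning: couldn't set 64-bit consistent PCI DMA mask\n");
		err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
		if (err)
			return err;
	}

	return 0;
}
```

[On a Root Complex that only handles 32-bit addresses, both 64-bit calls fail
and the driver runs with 32-bit masks, as seen in the warnings above; the point
of Eli's remark is that the command interface may then be handed DMA addresses
the platform cannot actually route, so the completion never becomes visible to
the driver.]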