[kernel 6.10.10][aarch64] PCIe Bridge - NVMe SSD - No SMMU (or IOMMU)

Dear friends, 

I am running Linux kernel 6.10.10 on a single Cortex-A53 CPU running at 1.3 GHz.
The CPU is part of a SoC with a PCIe bridge (root port) from Synopsys (using compatible = "snps,dw-pcie").
A Gen4 SSD is connected to the PCIe root port, so when I run lspci I see both the PCIe bridge and the SSD:
# lspci
00:00.0 PCI bridge: Device 1e7e:abcd (rev 01)
01:00.0 Non-Volatile memory controller: Sandisk Corp Device 5040 (rev 03)

The problem I am facing is that the NVMe driver fails to bring the drive up:
[    0.862737][   T10] nvme nvme0: 1/0/0 default/read/poll queues
[    0.874457][    C0] could not locate request for tag 0xfff
[    0.879977][    C0] nvme nvme0: invalid id 65535 completed on queue 1
[   31.820058][    T8] nvme nvme0: I/O tag 128 (0080) opcode 0x2 (I/O Cmd) QID 1 timeout, aborting req_op:READ(0) size:4096
[   31.831882][    C0] nvme nvme0: Abort status: 0x0
[   62.540052][    T8] nvme nvme0: I/O tag 128 (0080) opcode 0x2 (I/O Cmd) QID 1 timeout, reset controller
[   62.596074][   T20] nvme nvme0: 1/0/0 default/read/poll queues
[   62.602059][    C0] could not locate request for tag 0xfff
[   62.607567][    C0] nvme nvme0: invalid id 65535 completed on queue 1
[   93.260066][    T8] nvme nvme0: I/O tag 129 (0081) QID 1 timeout, disable controller
[   93.274391][    T8] I/O error, dev nvme0n1, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[   93.283627][    T8] Buffer I/O error on dev nvme0n1, logical block 0, async page read
[   93.291676][   T20] nvme nvme0: failed to mark controller live state
[   93.298111][   T20] nvme nvme0: Disabling device after reset failure: -19
[   93.305094][   T20] Buffer I/O error on dev nvme0n1, logical block 0, async page read
[   93.313101][   T27]  nvme0n1: unable to read partition table

I found in drivers/nvme/host/pci.c that the address written to the device is the DMA address, which differs from the physical address.
Adding some prints to nvme_pci_configure_admin_queue() shows:
[    0.825383][   T10] nvme nvme0: ----> sq_dma_addr 0xa36000 cq_dma_addr 0xa35000
[    0.832767][   T10] nvme nvme0: ----> phys sq = 0x8afd000 phys cq = 0x8af5000
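
For reference, the prints were added roughly like this (a sketch against drivers/nvme/host/pci.c in 6.10; dev->queues[0] is the admin queue, and nvmeq->sq_cmds / nvmeq->cqes are the CPU addresses of the submission/completion queues):

/* sketch: debug prints at the end of nvme_pci_configure_admin_queue() */
struct nvme_queue *nvmeq = &dev->queues[0];

dev_info(dev->ctrl.device, "----> sq_dma_addr 0x%llx cq_dma_addr 0x%llx\n",
	 (unsigned long long)nvmeq->sq_dma_addr,
	 (unsigned long long)nvmeq->cq_dma_addr);
/* virt_to_phys() assumes the coherent buffers live in the kernel linear map */
dev_info(dev->ctrl.device, "----> phys sq = 0x%llx phys cq = 0x%llx\n",
	 (unsigned long long)virt_to_phys(nvmeq->sq_cmds),
	 (unsigned long long)virt_to_phys(nvmeq->cqes));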

After reading the DMA API howto (https://docs.kernel.org/core-api/dma-api-howto.html), the roles of the bus address space and the IOMMU (or SMMU) are clear to me.
My SoC has no SMMU, and the mapping between device (bus) addresses and physical addresses is 1:1.
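
My understanding of the howto is that without an IOMMU the dma-direct path is used, so a coherent allocation should come back with a DMA handle equal to its physical address (assuming the device tree applies no dma-ranges offset on the PCIe bus). A minimal sketch of that expectation:

#include <linux/dma-mapping.h>

dma_addr_t dma_handle;
void *cpu_addr;

/* coherent buffer for the NVMe device; pdev is its struct pci_dev */
cpu_addr = dma_alloc_coherent(&pdev->dev, PAGE_SIZE, &dma_handle, GFP_KERNEL);

/*
 * With dma-direct and a 1:1 bus mapping I would expect
 *     dma_handle == virt_to_phys(cpu_addr)
 * yet the prints above show the two values differ.
 */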

So the question is:
How do I tell the kernel that there is no IOMMU, so that the DMA addresses it allocates match the physical addresses?

I tried the following (each one separately, with the exact fragments shown after the list) and none of them worked:
1. Pass the boot argument "iommu.passthrough=1".
2. Remove "IOMMU Hardware Support" from the kernel configuration.
3. Enable "IOMMU Hardware Support" but set CONFIG_IOMMU_DEFAULT_PASSTHROUGH=y.
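
For completeness, here is roughly what each attempt looked like (kernel command-line and .config fragments; "IOMMU Hardware Support" corresponds to CONFIG_IOMMU_SUPPORT):

# attempt 1: kernel command line
iommu.passthrough=1

# attempt 2: .config with IOMMU support removed
# CONFIG_IOMMU_SUPPORT is not set

# attempt 3: .config with IOMMU support enabled but defaulting to passthrough
CONFIG_IOMMU_SUPPORT=y
CONFIG_IOMMU_DEFAULT_PASSTHROUGH=y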

Thanks,
Lior.



