On Thu, Jan 7, 2021 at 2:48 AM Florian Fainelli <f.fainelli@xxxxxxxxx> wrote:
>
> Hi,
>
> First of all let me say that I am glad that someone is working on an
> upstream solution for this issue, would appreciate if you could CC me
> and Jim Quinlan on subsequent submissions.

Sure!

>
> On 1/5/21 7:41 PM, Claire Chang wrote:
> > This series implements mitigations for lack of DMA access control on
> > systems without an IOMMU, which could result in the DMA accessing the
> > system memory at unexpected times and/or unexpected addresses, possibly
> > leading to data leakage or corruption.
> >
> > For example, we plan to use the PCI-e bus for Wi-Fi and that PCI-e bus
> > is not behind an IOMMU. As PCI-e, by design, gives the device full
> > access to system memory, a vulnerability in the Wi-Fi firmware could
> > easily escalate to a full system exploit (remote wifi exploits: [1a],
> > [1b] that shows a full chain of exploits; [2], [3]).
> >
> > To mitigate the security concerns, we introduce restricted DMA.
> > Restricted DMA utilizes the existing swiotlb to bounce streaming DMA
> > in and out of a specially allocated region and does memory allocation
> > from the same region. The feature on its own provides a basic level of
> > protection against the DMA overwriting buffer contents at unexpected
> > times. However, to protect against general data leakage and system
> > memory corruption, the system needs to provide a way to restrict the
> > DMA to a predefined memory region (this is usually done at firmware
> > level, e.g. in ATF on some ARM platforms).
>
> Can you explain how ATF gets involved and to what extent it does help,
> besides enforcing a secure region from the ARM CPU's perspective? Does
> the PCIe root complex not have an IOMMU but can somehow be denied access
> to a region that is marked NS=0 in the ARM CPU's MMU? If so, that is
> still some sort of basic protection that the HW enforces, right?

We need ATF support for the memory MPU (memory protection unit).
Restricted DMA (with a reserved-memory node in the device tree) makes
sure the predefined memory region is used for PCIe DMA only, but we
still need the MPU to lock down PCIe access to that specific region.

>
> On Broadcom STB SoCs we have had something similar for a while. While
> we don't have an IOMMU for the PCIe bridge, we do have a basic
> protection mechanism whereby we can configure a region in DRAM to be
> PCIe read/write and CPU read/write which then gets used as the PCIe
> inbound region for the PCIe EP. By default the PCIe bridge is not
> allowed access to DRAM so we must call into a security agent to allow
> the PCIe bridge to access the designated DRAM region.
>
> We have done this using a private CMA area region assigned via Device
> Tree, and requiring the PCIe EP driver to use
> dma_alloc_from_contiguous() in order to allocate from this device
> private CMA area. The only drawback with that approach is that it
> requires knowing how much memory you need up front for buffers and DMA
> descriptors that the PCIe EP will need to process. The problem is that
> it requires driver modifications and that does not scale over the
> number of PCIe EP drivers, some of which we absolutely do not control,
> but there is no need to bounce buffer. Your approach scales better
> across PCIe EP drivers however it does require bounce buffering which
> could be a performance hit.

Only the streaming DMA (map/unmap) needs bounce buffering.
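To make that concrete, the driver-visible streaming path looks like the
following (an illustrative sketch only, not code from the series; the
function and buffer names are made up). With restricted DMA, dma-direct
bounces the buffer through the device's swiotlb pool inside map/unmap,
so the driver itself needs no changes:

#include <linux/dma-mapping.h>

/* Sketch: a typical streaming DMA map/unmap sequence. With restricted
 * DMA, the map below transparently copies "buf" into the predefined
 * region and returns a DMA address inside it. */
static int xmit_one_buffer(struct device *dev, void *buf, size_t len)
{
	dma_addr_t addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

	if (dma_mapping_error(dev, addr))
		return -ENOMEM;

	/* ... hand "addr" to the device and wait for completion ... */

	dma_unmap_single(dev, addr, len, DMA_TO_DEVICE);
	return 0;
}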
I also added alloc/free support in this series
(https://lore.kernel.org/patchwork/patch/1360995/), so dma_direct_alloc()
will try to allocate memory from the predefined memory region.

As for the performance hit, it should be similar to the default swiotlb.
Here are my experiment results. Both SoCs lack an IOMMU for PCIe.

PCIe wifi vht80 throughput (Mbps) -

  MTK SoC              tcp_tx   tcp_rx   udp_tx   udp_rx
  w/o Restricted DMA   244.1    134.66   312.56   350.79
  w/  Restricted DMA   246.95   136.59   363.21   351.99

  Rockchip SoC         tcp_tx   tcp_rx   udp_tx   udp_rx
  w/o Restricted DMA   237.87   133.86   288.28   361.88
  w/  Restricted DMA   256.01   130.95   292.28   353.19

The CPU usage doesn't increase much either. Although I didn't measure
the CPU usage very precisely, it's ~3% with a single big core
(Cortex-A72) and ~5% with a single small core (Cortex-A53).

Thanks!

>
> Thanks!
> --
> Florian
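P.S. For completeness, the alloc/free side from a driver's point of view
(again just a sketch with made-up names, not code from the patch): with
the alloc support above, a plain dma_alloc_coherent() on a device that
has a restricted pool is expected to return memory from the same
predefined region, so coherent buffers are confined as well:

#include <linux/dma-mapping.h>

/* Sketch: with a restricted pool attached to the device,
 * dma_direct_alloc() should satisfy this allocation from the
 * predefined (firmware-protected) region. */
static void *alloc_dma_ring(struct device *dev, size_t size,
			    dma_addr_t *dma)
{
	return dma_alloc_coherent(dev, size, dma, GFP_KERNEL);
}

static void free_dma_ring(struct device *dev, size_t size,
			  void *cpu_addr, dma_addr_t dma)
{
	dma_free_coherent(dev, size, cpu_addr, dma);
}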