On 2023-10-25 00:19, Uwe Kleine-König wrote:
> Hello,
>
> in https://bugs.debian.org/1015871 the Debian kernel team got a request
> to enable PCI_P2PDMA. Given the description of the feature and also the
> "If unsure, say N." I wonder if you consider it safe to enable this
> option.

I don't know. Not being a security expert, I'd say the attack surface exposed is fairly minimal. Most of what goes on is internal to the kernel, so the main risk is the same rough risk that comes with enabling any feature: there may be bugs.

My opinion is that 'No' is recommended because the feature is still very nascent and advanced. Right now it enables two user-visible niche features: p2p transfers in nvme-target between an NVMe device and an RDMA NIC, and transferring buffers between two NVMe devices through the CMB via O_DIRECT. Both uses require an NVMe device with CMB memory, which is rare.

Anyone using this option to do GPU P2PDMA transfers is certainly using out-of-tree (and likely proprietary) modules, as the upstream kernel does not yet appear to support anything like that at this time. Thus it's not clear how such code uses the P2PDMA subsystem or what the implications may be.

It's not commonly the case that using these features increases throughput, as CMB memory is usually much slower than system memory. Its use makes more sense in smaller/cheaper boutique systems where the system memory or the bus bandwidth to the CPU is limited, typically with a PCIe switch involved.

In addition to the above, P2PDMA transfers are only allowed by the kernel for traffic that flows through certain host bridges that are known to work. For AMD, all modern CPUs are on this list, but for Intel, the list is very patchy. When using a PCIe switch (also uncommon) this restriction does not apply, since the traffic can avoid the host bridge.

Thus, my contention is that anyone experimenting with this stuff ought to be capable of installing a custom kernel with the feature enabled.

Logan
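
P.S. For anyone curious what the second use case looks like from user space, here is a rough sketch only, assuming the sysfs p2pmem "allocate" interface from the upstream userspace-P2PDMA work can be mmap'd to obtain a CMB-backed buffer. The PCI address, device paths, and transfer size are placeholders, and the exact interface and behaviour depend on kernel version and hardware.

/*
 * Rough sketch: user-space side of an NVMe-to-NVMe copy through CMB
 * memory with O_DIRECT. All paths and sizes below are placeholders.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define LEN (2 * 1024 * 1024)  /* placeholder transfer size */

int main(void)
{
	/* CMB-backed allocation from the NVMe device that owns the CMB
	 * (the PCI address is a placeholder). */
	int p2p = open("/sys/bus/pci/devices/0000:01:00.0/p2pmem/allocate",
		       O_RDWR);
	int src = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
	int dst = open("/dev/nvme1n1", O_WRONLY | O_DIRECT);
	if (p2p < 0 || src < 0 || dst < 0) {
		perror("open");
		return 1;
	}

	void *buf = mmap(NULL, LEN, PROT_READ | PROT_WRITE, MAP_SHARED,
			 p2p, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* With a CMB-backed buffer, these O_DIRECT I/Os can be carried
	 * out as peer-to-peer DMA between the two NVMe devices,
	 * bypassing system memory, when the kernel allows it for this
	 * topology. */
	if (read(src, buf, LEN) != LEN || write(dst, buf, LEN) != LEN) {
		perror("I/O");
		return 1;
	}

	munmap(buf, LEN);
	close(p2p);
	close(src);
	close(dst);
	return 0;
}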