On 11/15/2017 5:15 PM, Jason Gunthorpe wrote:
On Wed, Nov 15, 2017 at 03:10:30PM +0200, Yishai Hadas wrote:
The man page was updated with a detailed description on the above
capability, see PR:
https://github.com/linux-rdma/rdma-core/pull/250
Yah, that is nicer
Here is some copy-editing:
Extended device capability flags (device_cap_flags_ex):
.br
.TP 7
IBV_DEVICE_PCI_WRITE_END_PADDING
Indicates the device has support for padding PCI writes to a full cache line.
Padding packets to full cache lines reduces the amount of traffic required at
the memory controller at the expense of creating more traffic on the PCI-E
port.
Workloads that have a high CPU memory load and low PCI-E utilization will
benefit from this feature, while workloads that have a high PCI-E utilization
and small packets will be harmed.
For instance, with a 128 byte cache line size, the transfer of any packet
smaller than 128 bytes will require a full 128 byte transfer on PCI, potentially
doubling the required PCI-E bandwidth.
This feature can be enabled on a QP or WQ basis via the
IBV_QP_CREATE_PCI_WRITE_END_PADDING or IBV_WQ_FLAGS_PCI_WRITE_END_PADDING
flags.
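
To make the intended usage concrete, a minimal sketch (untested, and assuming
the identifiers from PR 250 — IBV_DEVICE_PCI_WRITE_END_PADDING,
IBV_QP_CREATE_PCI_WRITE_END_PADDING — are available in the installed
rdma-core headers) could query the extended device caps and request padding at
QP creation time:

```c
/* Sketch only: requires RDMA hardware and rdma-core with the
 * PCI write end padding support described above. */
#include <infiniband/verbs.h>
#include <stddef.h>

static struct ibv_qp *create_padded_qp(struct ibv_context *ctx,
                                       struct ibv_pd *pd,
                                       struct ibv_cq *cq)
{
        struct ibv_device_attr_ex attr = {};
        struct ibv_qp_init_attr_ex qp_attr = {};

        if (ibv_query_device_ex(ctx, NULL, &attr))
                return NULL;

        qp_attr.qp_type = IBV_QPT_RC;
        qp_attr.send_cq = cq;
        qp_attr.recv_cq = cq;
        qp_attr.pd = pd;
        qp_attr.cap.max_send_wr = 16;
        qp_attr.cap.max_recv_wr = 16;
        qp_attr.cap.max_send_sge = 1;
        qp_attr.cap.max_recv_sge = 1;
        qp_attr.comp_mask = IBV_QP_INIT_ATTR_PD;

        /* Only ask for end padding if the device advertises it. */
        if (attr.device_cap_flags_ex & IBV_DEVICE_PCI_WRITE_END_PADDING) {
                qp_attr.comp_mask |= IBV_QP_INIT_ATTR_CREATE_FLAGS;
                qp_attr.create_flags = IBV_QP_CREATE_PCI_WRITE_END_PADDING;
        }

        return ibv_create_qp_ex(ctx, &qp_attr);
}
```

The same capability check applies before setting IBV_WQ_FLAGS_PCI_WRITE_END_PADDING
in the create_flags of ibv_create_wq().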
OK, PR was updated accordingly.
https://github.com/linux-rdma/rdma-core/pull/250