On Thu, 21 Feb 2019, Rik van Riel wrote:

> On Thu, 2019-02-21 at 18:15 +0000, Christopher Lameter wrote:
>
> > B) Provide fast memory in the NIC
> >
> > Since the NIC is at capacity limits when it comes to pushing data
> > from the NIC into memory, the obvious solution is not to go to main
> > memory but to provide faster on-NIC memory that can then be accessed
> > from the host as needed. Now the application creates I/O bottlenecks
> > when accessing its data, or it needs to implement complicated
> > transfer mechanisms to retrieve and store data in the NIC memory.
>
> Don't Intel and AMD both have High Bandwidth Memory available?

Well, that is another problem that I omitted from the new revision. Yes,
but that memory is special, with different performance characteristics,
and it is often also represented as another NUMA node.

> Is it possible to place your network buffer in HBM, and process the
> data from there?

Ok, but there is still the I/O bottleneck. So you can either have the HBM
on the host processor (the Xeon Phi solution) in a special NUMA node, or
you put the HBM onto the NIC and address it via PCIe from the host
processor (which means slower access for the host but fast writes from
the network).
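For the first case, where the HBM sits on the host and is just another
NUMA node, an application can already pin its receive buffers there with
libnuma. A minimal sketch of that idea follows; the node number (1) and
the buffer size are made-up placeholders, check the real topology with
numactl --hardware before relying on them:

	#include <numa.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	#define HBM_NODE   1               /* assumption: HBM exposed as NUMA node 1 */
	#define RING_BYTES (16UL << 20)    /* 16 MiB of receive buffers (example size) */

	int main(void)
	{
		if (numa_available() < 0) {
			fprintf(stderr, "NUMA support not available\n");
			return EXIT_FAILURE;
		}

		/* Allocate the buffer backed by pages on the HBM node only. */
		void *ring = numa_alloc_onnode(RING_BYTES, HBM_NODE);
		if (!ring) {
			perror("numa_alloc_onnode");
			return EXIT_FAILURE;
		}

		/* Touch the pages so they are actually faulted in on that node. */
		memset(ring, 0, RING_BYTES);

		/* ... register the buffer with the NIC (e.g. as an RDMA MR) and
		 * process incoming data directly out of HBM ... */

		numa_free(ring, RING_BYTES);
		return EXIT_SUCCESS;
	}

(Build with -lnuma.) The second case, HBM on the NIC behind PCIe, is not
covered by this: there the buffer would have to be mapped from the device
BAR instead, and every host-side access pays the PCIe latency.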