On 04/11/2016 11:37 AM, Jesper Dangaard Brouer wrote:
> On Mon, 11 Apr 2016 14:46:25 -0300
> Thadeu Lima de Souza Cascardo <cascardo@xxxxxxxxxx> wrote:
>> So, Jesper, please take into consideration that this pool design
>> should rather be per device. Otherwise, we allow one device to write
>> into another device's/driver's memory.
> Yes, that was my intended use. I want to have a page-pool per device.
> In fact, I want to go as far as a page-pool per NIC HW RX-ring queue,
> because the other use-case for the page-pool is zero-copy RX.
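
For concreteness, a minimal sketch of that layout; pp_pool and
pp_rx_ring are made-up names for illustration, not an existing kernel
API:

    #include <linux/device.h>   /* struct device */
    #include <linux/mm_types.h> /* struct page */

    /* Sketch: each HW RX ring embeds its own pool, and the pool records
     * which device its pages are DMA-mapped for, so a page recycled
     * through one pool can never be handed to another device. */
    struct pp_pool {
        struct device *dev;        /* device the pages are DMA-mapped for */
        unsigned int   queue_id;   /* HW RX-ring queue this pool feeds */
        struct page   *cache[256]; /* recycled pages ready for RX refill */
        unsigned int   count;      /* number of pages currently cached */
    };

    struct pp_rx_ring {
        struct pp_pool pool;       /* one pool per RX ring, never shared */
        /* descriptor ring, refill/recycle logic, ... */
    };
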
> The NIC HW trick is that, today, we can create a HW filter in the NIC
> (via ethtool) and place that traffic into a separate RX queue in the
> NIC, say matching NFS traffic or guest traffic. Then we can allow
> RX zero-copy of these pages into the application/guest, somehow
> binding it to the RX queue, e.g. by introducing a "cross-domain-id" in
> the page-pool page that needs to match.
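
Roughly, that binding could look like the sketch below; the struct and
function names are invented for illustration, and the ethtool rule only
assumes NFS on TCP port 2049 being steered to RX queue 2 as an example:

    /* The interesting traffic is first steered to its own HW RX queue,
     * for example NFS (TCP port 2049) to queue 2:
     *
     *     ethtool -N eth0 flow-type tcp4 dst-port 2049 action 2
     *
     * Pages in that queue's pool then carry the id of the domain
     * (application or guest) they may be mapped into, and zero-copy RX
     * is only allowed when the ids match. */
    #include <linux/types.h>

    struct pp_page_info {
        u32 cross_domain_id;       /* domain this pool page is bound to */
    };

    static bool pp_allow_zero_copy(const struct pp_page_info *pi,
                                   u32 consumer_id)
    {
        return pi->cross_domain_id == consumer_id;
    }

On an id mismatch the driver would presumably fall back to copying into
a freshly allocated skb, so misdirected traffic is still delivered, just
without the zero-copy fast path.
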
I think it is important to keep in mind that using a page pool for
zero-copy RX is specific to protocols that are based on TCP/IP.
Protocols like FC, SRP and iSER have been designed such that the side
that allocates the buffers also initiates the data transfer (the target
side). With TCP/IP, however, transferring data and allocating receive
buffers are done by opposite sides of the connection.
Bart.