Re: [PATCH 00/16] IB/hfi1: Add a page pinning cache for PSM sdma

On Wed, Mar 16, 2016 at 05:53:37PM +0200, Or Gerlitz wrote:
> > From a performance standpoint we don't want this in user space. If the
> > cache were at the user level, there would be a system call to pin the
> > buffer, another to do the SDMA transfer from that buffer, and another to
> > unpin the buffer once the kernel notifier has invalidated it. This gets
> > even more complex when you consider cache evictions due to reaching the
> > limit on pinned pages.
> >
> > These extra system calls add overhead that may be acceptable in a normal
> > server or desktop environment, but not when dealing with a high
> > performance interconnect where we want the absolute best performance.
> >
> > I should also point out that these extra calls would be made on a
> > per-buffer basis, not per context or per application.
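
To put the cost in concrete terms, here is a rough sketch of the per-buffer
round trips a user-level cache would imply. The request codes and buf_desc
layout below are placeholders invented for illustration; they are not part of
the hfi1 uAPI.

/* Sketch of the extra per-buffer system calls a user-level cache implies.
 * The request codes and buf_desc layout are illustrative only.
 */
#include <stddef.h>
#include <sys/ioctl.h>

/* Hypothetical request codes, for illustration only. */
#define PIN_BUFFER   0x1001
#define SDMA_SEND    0x1002
#define UNPIN_BUFFER 0x1003

struct buf_desc {
	void   *addr;
	size_t  len;
};

/*
 * With the cache in user space, every buffer that is not already cached
 * costs three kernel crossings; with the cache in the driver, a cached
 * buffer costs only the SDMA submission itself.
 */
static int send_uncached(int fd, struct buf_desc *b)
{
	if (ioctl(fd, PIN_BUFFER, b) < 0)	/* call 1: pin the pages */
		return -1;
	if (ioctl(fd, SDMA_SEND, b) < 0)	/* call 2: post the SDMA transfer */
		return -1;
	/* call 3: unpin once the kernel notifier invalidates the buffer,
	 * or when the cache evicts it to stay under the pinned-page limit. */
	return ioctl(fd, UNPIN_BUFFER, b);
}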

> The cache is there to amortize the cost of pin/unpin and such, correct?
> This means that over long runs, if the same process keeps using the same
> pages, you pay the register/pin cost once, later use lazy
> de-registration/unpinning, and only do that when the MMU notifier tells
> you it is a must, or under some eviction policy.
>
> Since the cost is amortized, the system call overhead should be negligible
> (also, the same system call can be used to evict X and register Y). Do you
> have performance data that shows otherwise?

I don't personally have the data but will check with some folks here.
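
To make the amortization point above concrete, here is a rough user-space
sketch of the lookup/evict/invalidate flow being discussed. Everything in it
(the list, the names, the 64 MB limit, the pin_pages/unpin_pages stand-ins)
is made up for illustration and is not the hfi1 code; in the driver the
stand-ins would correspond to the actual page-pinning calls and the
invalidation would come from the MMU notifier mentioned above.

/* Minimal sketch of the amortized pin cache being discussed. All names here
 * are illustrative, not the hfi1 implementation; pin_pages()/unpin_pages()
 * stand in for the expensive work the cache exists to avoid repeating.
 */
#include <stdint.h>
#include <stdlib.h>

struct cache_entry {
	uintptr_t addr;                  /* start of the user buffer */
	size_t len;                      /* length in bytes */
	struct cache_entry *next;
};

static struct cache_entry *cache_head;     /* newest entries at the head */
static size_t pinned_bytes;
static const size_t PIN_LIMIT = 64 << 20;  /* illustrative pinned-page budget */

/* Stand-ins for the expensive operations the cache amortizes. */
static void pin_pages(uintptr_t addr, size_t len)   { (void)addr; pinned_bytes += len; }
static void unpin_pages(uintptr_t addr, size_t len) { (void)addr; pinned_bytes -= len; }

/* Eviction policy: drop the oldest entries until the new buffer fits. */
static void evict_until_fits(size_t incoming)
{
	while (cache_head && pinned_bytes + incoming > PIN_LIMIT) {
		struct cache_entry **pp = &cache_head;

		while ((*pp)->next)          /* the oldest entry sits at the tail */
			pp = &(*pp)->next;
		unpin_pages((*pp)->addr, (*pp)->len);
		free(*pp);
		*pp = NULL;
	}
}

/* Called per SDMA request: a hit reuses already-pinned pages, a miss pays
 * the pin cost once and caches the result. */
static struct cache_entry *cache_lookup_or_pin(uintptr_t addr, size_t len)
{
	struct cache_entry *e;

	for (e = cache_head; e; e = e->next)
		if (e->addr == addr && e->len == len)
			return e;            /* amortized path: no pin cost */

	evict_until_fits(len);
	e = calloc(1, sizeof(*e));
	if (!e)
		return NULL;
	e->addr = addr;
	e->len = len;
	pin_pages(addr, len);
	e->next = cache_head;
	cache_head = e;
	return e;
}

/* What an MMU-notifier-style invalidation does: unpin and drop any cached
 * entry that overlaps the invalidated virtual address range. */
static void cache_invalidate_range(uintptr_t start, uintptr_t end)
{
	struct cache_entry **pp = &cache_head;

	while (*pp) {
		struct cache_entry *e = *pp;

		if (e->addr < end && e->addr + e->len > start) {
			unpin_pages(e->addr, e->len);
			*pp = e->next;
			free(e);
		} else {
			pp = &e->next;
		}
	}
}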

-Denny
