On Thu, Apr 23, 2015 at 09:12:38AM -0500, Christoph Lameter wrote:
> On Wed, 22 Apr 2015, Paul E. McKenney wrote:
>
> > Agreed, the use case that Jerome is thinking of differs from yours.
> > You would not (and should not) tolerate things like page faults because
> > it would destroy your worst-case response times.  I believe that Jerome
> > is more interested in throughput with minimal change to existing code.
>
> As far as I know Jerome is talking about HPC loads and high performance
> GPU processing. This is the same use case.

The difference is sensitivity to latency.  You have latency-sensitive
HPC workloads, and Jerome is talking about HPC workloads that need
high throughput, but are insensitive to latency.

> > Let's suppose that you and Jerome were using GPGPU hardware that had
> > 32,768 hardware threads.  You would want very close to 100% of the full
> > throughput out of the hardware with pretty much zero unnecessary latency.
> > In contrast, Jerome might be OK with (say) 20,000 threads worth of
> > throughput with the occasional latency hiccup.
> >
> > And yes, support for both use cases is needed.
>
> What you are proposing for High Performance Computing is reducing the
> performance these guys are trying to get. You cannot sell someone a
> Volkswagen if he needs the Ferrari.

You do need the low-latency Ferrari.  But others are best served by a
high-throughput freight train.

							Thanx, Paul