On Fri, Apr 24, 2015 at 09:01:47AM -0500, Christoph Lameter wrote:
> On Thu, 23 Apr 2015, Paul E. McKenney wrote:
>
> > > As far as I know Jerome is talking about HPC loads and high performance
> > > GPU processing. This is the same use case.
> >
> > The difference is sensitivity to latency. You have latency-sensitive
> > HPC workloads, and Jerome is talking about HPC workloads that need
> > high throughput, but are insensitive to latency.
>
> Those are correlated.

In some cases, yes.  But are you -really- claiming that -all- HPC
workloads are highly sensitive to latency?  That would be quite a claim!

> > > What you are proposing for High Performance Computing is reducing the
> > > performance these guys are trying to get. You cannot sell someone a
> > > Volkswagen if he needs the Ferrari.
> >
> > You do need the low-latency Ferrari. But others are best served by a
> > high-throughput freight train.
>
> The problem is that they want to run 2000 trains at the same time
> and they all must arrive at the destination before they can be sent on
> their next trip. 1999 trains will be sitting idle because they need
> to wait for the one train that was delayed. This reduces the throughput.
> People really would like all 2000 trains to arrive on schedule so that
> they get more performance.

Yes, there is some portion of the market that needs both high throughput
and highly predictable latencies.  But you are claiming that the -entire-
HPC market has this sort of requirement?  Again, that would be quite
a claim!

							Thanx, Paul
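
To make the train analogy concrete, here is a minimal sketch of the
bulk-synchronous model it describes: every "trip" ends at a barrier, so
a step takes as long as the slowest of the 2000 workers, and one delayed
worker idles the other 1999.  This sketch is not from the thread; the
worker count, nominal step time, and hiccup value are illustrative
assumptions only.

	/* Sketch: barrier-synchronized step gated by the slowest worker. */
	#include <stdio.h>

	#define NWORKERS 2000

	int main(void)
	{
		double step[NWORKERS];
		double nominal = 1.0;	/* scheduled step time, arbitrary units */
		double hiccup = 0.5;	/* assumed delay hitting one worker */
		double barrier = 0.0;
		int i;

		for (i = 0; i < NWORKERS; i++)
			step[i] = nominal;
		step[0] += hiccup;	/* a single straggler */

		for (i = 0; i < NWORKERS; i++)
			if (step[i] > barrier)
				barrier = step[i];	/* barrier waits for the slowest */

		printf("ideal step time:  %.2f\n", nominal);
		printf("actual step time: %.2f\n", barrier);
		printf("throughput lost:  %.0f%%\n",
		       100.0 * (1.0 - nominal / barrier));
		return 0;
	}

With these illustrative numbers, the one straggler costs every worker a
third of its throughput, which is the effect being described: latency
variance, not average latency, is what hurts a barrier-synchronized job.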