On Tue, 18 Mar 2014, Peter Zijlstra wrote:

> > My gut reaction was that we'd probably be better served by putting
> > resources in to systems with higher core counts rather than lots of RAM.
> > I have encountered the occasional boot bug on my 1TB system, but it's
> > far from a frequent occurrence, and even more infrequent to encounter
> > things at runtime.
> >
> > Would folks agree with that? What kinds of tests, benchmarks, stress
> > tests, etc... do folks run that are both valuable and can only be run on
> > a system with a large amount of actual RAM?
>
> We had a sched-numa + kvm fail on really large systems the other day,
> but yeah in general such problems tend to be rare. Then again, without
> test coverage they will always be rare, for even if there were problems,
> nobody would notice :-)

SGI had systems out there with up to a few PB of RAM. There were a couple
of tricks to get this going. Bootup time was pretty long, and I/O had to
be done carefully. The MM subsystem used to work at these sizes (I have
not had a chance to verify that recently).

This was Itanium with a 64K page size, so you had a factor of 16 fewer
page structs to process. What I saw there is one of the reasons why I
would like to see larger page support in the kernel: managing massive
numbers of 4K pages creates far too much overhead.
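Just to make that factor of 16 concrete, here is a quick back-of-the-envelope
sketch. It assumes sizeof(struct page) is about 64 bytes, which is only an
approximation and depends on the kernel configuration:

/* Rough struct page overhead for 1 PB of RAM at 4K vs. 64K page
 * sizes. Assumes sizeof(struct page) ~= 64 bytes (config-dependent). */
#include <stdio.h>

int main(void)
{
	unsigned long long ram = 1ULL << 50;	/* 1 PB of RAM */
	unsigned long long sp = 64;		/* assumed sizeof(struct page) */
	unsigned long long p4k = ram >> 12;	/* number of 4K pages */
	unsigned long long p64k = ram >> 16;	/* number of 64K pages */

	printf("4K:  %llu page structs, %llu GB of metadata\n",
	       p4k, p4k * sp >> 30);
	printf("64K: %llu page structs, %llu GB of metadata\n",
	       p64k, p64k * sp >> 30);
	return 0;
}

Under those assumptions that works out to roughly 16 TB of page structs
at 4K versus 1 TB at 64K for the same 1 PB of memory, which is the kind
of difference that makes larger base pages attractive.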