On Wed, 30 Jul 2014 18:26:39 -0500, Xin Tong said:
> I am planning to use this only for workloads with very large memory
> footprints, e.g. hadoop, tpcc, etc.

You might want to look at how your system gets booted.  I think you'll find
that you burn through 800 to 2000 or so processes, all of which are currently
tiny, but if you make every 4K allocation grab 2M instead, you're quite likely
to find yourself tripping the OOM killer before hadoop ever gets launched.

You're probably *much* better off letting the current code do its work, since
you'll only pay the coalesce cost once for each 2M that hadoop uses.  And
let's face it, that's only going to sum up to fractions of a second, and then
hadoop is going to be banging on the TLB for hours or days.  Don't spend time
optimizing the wrong thing....
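For what it's worth, a minimal sketch of the "let the current code do its
work" route (not from the original mail, and the 1 GiB size is just an
illustrative value): the large-footprint process can opt in to transparent
huge pages on its own big mappings with madvise(MADV_HUGEPAGE), so the tiny
boot-time processes keep their 4K allocations and khugepaged only coalesces
where it actually pays off.

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 1UL << 30;		/* 1 GiB, illustrative only */

	/* Anonymous mapping; the kernel backs it with 4K pages at first. */
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Ask for THP on this region only; nothing else on the system
	 * is affected. */
	if (madvise(buf, len, MADV_HUGEPAGE) != 0)
		perror("madvise(MADV_HUGEPAGE)");

	/* ... touch the memory as usual; khugepaged coalesces the region
	 * into 2M pages behind the scenes. */

	munmap(buf, len);
	return 0;
}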