Matthew Wakeling <matthew@xxxxxxxxxxx> writes:

> On Tue, 9 Dec 2008, Scott Marlowe wrote:
>> I wonder how many hard drives it would take to be CPU bound on random
>> access patterns? About 40 to 60? And probably 15k / SAS drives to
>> boot. Cause that's what we're looking at in the next few years where
>> I work.
>
> There's a problem with that thinking. That is, in order to exercise many
> spindles, you will need to have just as many (if not more) concurrent
> requests. And if you have many concurrent requests, then you can spread
> them over multiple CPUs. So it's more a case of "How many hard drives
> PER CPU". It also becomes a matter of whether Postgres can scale that
> well.

Well:

    $ units
    2445 units, 71 prefixes, 33 nonlinear units
    You have: 8192 byte/5ms
    You want: MB/s
            * 1.6384
            / 0.61035156

At 1.6MB/s per drive, if Postgres is cpu-bound doing sequential scans at
1GB/s, you'll need about 640 drives to keep one cpu satisfied doing random
I/O -- assuming you have perfect read-ahead and the read-ahead itself
doesn't add cpu overhead. Both of which are false of course, but at least
in theory that's what it'll take.

-- 
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's On-Demand Production Tuning

-- 
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
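[Editor's note: a quick back-of-the-envelope check of the figures in the post above, as a hypothetical Python sketch (not part of the original message). It assumes an 8 KB Postgres page fetched per 5 ms random seek, and a CPU that is saturated at roughly 1 GB/s of sequential scanning, as stated above.]

```python
# One drive doing 8 KB random reads at a 5 ms average access time
# delivers about 1.6 MB/s -- the same number the `units` session shows.
block_bytes = 8192        # one Postgres page (assumption: default 8 KB)
seek_seconds = 0.005      # 5 ms average random-access time per read

per_drive_mb_s = block_bytes / seek_seconds / 1e6   # bytes/s -> MB/s

# Assumed CPU-bound sequential-scan rate from the post: ~1 GB/s.
cpu_scan_mb_s = 1000.0

drives_needed = cpu_scan_mb_s / per_drive_mb_s

print(f"{per_drive_mb_s:.4f} MB/s per drive")
print(f"~{drives_needed:.0f} drives to keep one CPU busy on random I/O")
```

With decimal MB/GB this comes out near 610 drives, in the same ballpark as the "about 640" quoted above (which corresponds to 1024 MB / 1.6 MB/s).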