On Sun, Mar 16, 2008 at 1:02 PM, Karl Denninger <karl@xxxxxxxxxxxxx> wrote:
>
> The key issue on RAM is not whether the database will fit into RAM (for
> all but the most trivial applications, it will not)

I would argue that many applications where the data fits into memory
are not trivial, especially if we're talking about the working set.
If you operate on 1 Gig sets out of a terabyte range for a reporting
database, then your data fits into (or damned well should :) ) memory.

Also, many applications with small datasets can be quite complex, like
control systems.  The actual amount of data might be 100 Meg, but the
throughput might be very high, and it may require a battery-backed
cache because of all the writes going in.

So there are plenty of times your data will fit in memory.

> It is whether the key INDICES will fit into RAM.  If they will, then
> you get a HUGE win in performance.

When they don't, you often need to start looking at some form of
partitioning if you want to keep good performance.  By partitioning I
don't just mean inherited tables; it could also include things like
horizontal partitioning of data across different pg servers.

Note that I'm not disagreeing with everything you said, just offering
a slight clarification on data sets that do / don't fit into memory.
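
To make the working-set point a bit more concrete, here's a rough
sketch of one way to gauge how often table reads are being served from
shared buffers rather than disk.  Keep in mind this only sees the
shared_buffers cache, not the OS page cache, so treat it as an
approximation rather than a definitive answer:

    -- Approximate buffer-cache hit ratio per table.  A high ratio
    -- suggests the working set is largely being served from memory;
    -- note this ignores the OS page cache entirely.
    SELECT relname,
           heap_blks_hit,
           heap_blks_read,
           round(heap_blks_hit::numeric
                 / nullif(heap_blks_hit + heap_blks_read, 0), 2) AS hit_ratio
    FROM   pg_statio_user_tables
    ORDER  BY heap_blks_read DESC;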
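
On the index side, a quick sanity check on whether your key indices
could plausibly fit in RAM is to total up their on-disk sizes and
compare against what you have ('orders' below is just a made-up table
name for illustration):

    -- Per-index on-disk size for one table; compare the total
    -- against shared_buffers / available RAM.
    SELECT indexrelname,
           pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
    FROM   pg_stat_user_indexes
    WHERE  relname = 'orders';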
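
And for the inherited-tables style of partitioning I mentioned, a
minimal sketch of the classic 8.x approach (table and column names
invented for illustration; a real setup also needs a trigger or rule
to route inserts into the right child):

    -- Parent table plus one child partition carrying a CHECK
    -- constraint that describes the rows it holds.
    CREATE TABLE measurements (
        logdate  date NOT NULL,
        reading  numeric
    );

    CREATE TABLE measurements_2008_03 (
        CHECK (logdate >= DATE '2008-03-01'
           AND logdate <  DATE '2008-04-01')
    ) INHERITS (measurements);

    -- With constraint exclusion on, the planner skips children whose
    -- CHECK constraints rule them out for this query.
    SET constraint_exclusion = on;
    SELECT * FROM measurements WHERE logdate = DATE '2008-03-16';

The win is that queries constrained on logdate only touch the
partitions that can actually match, which keeps the relevant indices
small enough to have a fighting chance of staying cached.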