> Yes. What's pretty large? We've had to redefine large recently, now
> we're talking about systems with between 100TB and 1,000TB.
>
> - Luke

Well, I said large, not gargantuan :) The largest would probably be around
a few TB, but the problem I'm dealing with at the moment is large numbers
(potentially > 1 billion) of small records (hopefully I can get each one
down to a few int4's and an int2 or so) in a single table.

Currently we're testing for and targeting the 500M-record range, but the
design needs to scale to at least 2-3 times that. I read one of your
presentations on very large databases in PG, and saw mention of some
tables over a billion rows, so that was encouraging. The new table
partitioning in 8.x will be very useful.

What's the largest DB you've seen to date on PG (in terms of total disk
storage, and records in the largest table(s))?

My question is: at what point do I have to get fancy with those big
tables? From your presentation, it looks like PG can handle 1.2 billion
records or so as long as you write intelligent queries. (And stock PG
should be able to handle that, correct?)

Also, does anyone know if/when any of the MPP stuff will be ported to
Postgres, or is the plan to keep that separate?

Thanks,
Bucky
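
For the record, the 8.x partitioning I mentioned is done with table
inheritance plus CHECK constraints, and 8.1's constraint_exclusion setting
lets the planner skip partitions that can't match the WHERE clause. A
minimal sketch (table and column names are just made up for illustration,
and the narrow int4/int2 layout is an assumption from my description
above):

```sql
-- Hypothetical narrow-record parent table.
CREATE TABLE events (
    user_id int4 NOT NULL,
    item_id int4 NOT NULL,
    score   int2 NOT NULL
);

-- Child tables partition by user_id range via inheritance + CHECK.
CREATE TABLE events_p0
    (CHECK (user_id >= 0         AND user_id < 500000000))
    INHERITS (events);
CREATE TABLE events_p1
    (CHECK (user_id >= 500000000 AND user_id < 1000000000))
    INHERITS (events);

-- With constraint exclusion on (new in 8.1), a query against the parent
-- only scans children whose CHECK constraint can satisfy the predicate.
SET constraint_exclusion = on;
SELECT * FROM events WHERE user_id = 123456;  -- scans events_p0 only
```

Inserts still have to be routed to the right child, typically with a rule
or trigger on the parent, so this is more plumbing than true built-in
partitioning, but it keeps individual tables and indexes to a manageable
size.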