Re: Large tables (was: RAID 0 not as fast as expected)


On 9/18/06, Bucky Jordan <bjordan@xxxxxxxxxx> wrote:
> My question is at what point do I have to get fancy with those big
> tables? From your presentation, it looks like PG can handle 1.2 billion
> records or so as long as you write intelligent queries. (And normal PG
> should be able to handle that, correct?)

I would rephrase that: large databases are less forgiving of
unintelligent queries, particularly the kind generated by your average
stupid database-abstracting middleware :-).  Seek times on a 1 GB
database are effectively zero all the time, because the whole thing sits
in cache; not so on a 1 TB+ database.
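To make that concrete, here is a minimal sketch (the orders table and its
columns are hypothetical, purely for illustration) of the kind of difference
that is invisible at 1 GB but painful at 1 TB: a predicate the planner can
drive through an index versus one that forces a full scan.

    -- Hypothetical table; names are illustrative only.
    CREATE TABLE orders (
        order_id    bigserial PRIMARY KEY,
        customer_id integer   NOT NULL,
        ordered_at  timestamp NOT NULL,
        total       numeric   NOT NULL
    );

    CREATE INDEX orders_ordered_at_idx ON orders (ordered_at);

    -- Index-friendly: the planner can range-scan orders_ordered_at_idx.
    SELECT count(*) FROM orders
    WHERE ordered_at >= '2006-09-01' AND ordered_at < '2006-10-01';

    -- Not index-friendly: wrapping the column in a function hides it from
    -- the plain index above, so this tends to fall back to a sequential
    -- scan; fine when the table fits in RAM, brutal on a terabyte.
    SELECT count(*) FROM orders
    WHERE date_trunc('month', ordered_at) = '2006-09-01';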

Good normalization skills are really important for large databases,
along with materialization strategies for 'denormalized sets', i.e.
precomputed summary tables that you refresh on whatever schedule you can
tolerate.
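A rough sketch of what I mean by materialization, reusing the hypothetical
orders table from above (order_daily_totals is likewise just an
illustrative name): keep the normalized detail table as the source of
truth and maintain a small denormalized rollup next to it. Newer
PostgreSQL releases have CREATE MATERIALIZED VIEW / REFRESH MATERIALIZED
VIEW for exactly this; rolling your own summary table works on any
version.

    -- Build the denormalized rollup once.
    CREATE TABLE order_daily_totals AS
    SELECT customer_id,
           date_trunc('day', ordered_at) AS order_day,
           count(*)   AS order_count,
           sum(total) AS order_total
    FROM orders
    GROUP BY customer_id, date_trunc('day', ordered_at);

    -- Periodic refresh (cron or similar); reports hit the small rollup
    -- instead of scanning the billion-row detail table.
    TRUNCATE order_daily_totals;
    INSERT INTO order_daily_totals
    SELECT customer_id,
           date_trunc('day', ordered_at),
           count(*),
           sum(total)
    FROM orders
    GROUP BY customer_id, date_trunc('day', ordered_at);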

Regarding the number of rows, there is no hard limit to how much PG can
handle per se, just some practical limitations, especially vacuum and
reindex times.  These matter because they are what keeps a handle on
MVCC bloat, and it's very nice to be able to vacuum bits of your
database at a time rather than the whole thing in one pass.
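Partitioning is the usual way to get there. As a sketch (the declarative
partitioning syntax below is PostgreSQL 10+; at the time of this thread
you would build the same thing with inheritance and constraint exclusion,
and the table names are again hypothetical):

    -- Range-partitioned version of the detail table.
    CREATE TABLE orders_part (
        order_id    bigint    NOT NULL,
        customer_id integer   NOT NULL,
        ordered_at  timestamp NOT NULL,
        total       numeric   NOT NULL
    ) PARTITION BY RANGE (ordered_at);

    CREATE TABLE orders_2006_09 PARTITION OF orders_part
        FOR VALUES FROM ('2006-09-01') TO ('2006-10-01');

    -- Maintenance can now target one slice of the data instead of
    -- grinding through the whole billion-row table in a single pass.
    VACUUM ANALYZE orders_2006_09;
    REINDEX TABLE orders_2006_09;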

Just another FYI: if you have a really big database, you can forget
about doing pg_dump for backups (unless you really don't care about
being a day or more behind)...you simply have to use some type of
replication/failover strategy.  I would start with PITR.
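The gist of PITR, as a minimal sketch (the paths, the backup label, and
the exact settings are placeholders and vary by release): archive WAL
continuously, and take filesystem-level base backups bracketed by the
backup functions. A restore is then the base backup plus archived WAL
replayed up to the point you choose, so you lose at most the WAL that
was not yet archived, instead of everything since the last pg_dump.

    -- In postgresql.conf (illustrative values):
    --   archive_mode    = on
    --   archive_command = 'cp %p /archive/%f'

    -- Take a base backup: mark the start, copy the data directory at the
    -- filesystem level (tar, rsync, ...) while the server keeps running,
    -- then mark the end.
    SELECT pg_start_backup('nightly_base');
    --   ... copy $PGDATA to backup storage here ...
    SELECT pg_stop_backup();

    -- Newer releases wrap all of this in the pg_basebackup tool, and
    -- recent versions rename the functions to pg_backup_start/stop.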

merlin

