Re: Large tables (was: RAID 0 not as fast as expected)

Bucky,

On 9/18/06 7:37 AM, "Bucky Jordan" <bjordan@xxxxxxxxxx> wrote:

> My question is at what point do I have to get fancy with those big
> tables? From your presentation, it looks like PG can handle 1.2 billion
> records or so as long as you write intelligent queries. (And normal PG
> should be able to handle that, correct?)

PG has limitations that will confront you once a table grows beyond a couple
hundred GB, as do Oracle and others.

You should be careful to deploy very good disk hardware and leverage
Postgres 8.1 partitioning and indexes intelligently as you go beyond 100GB
per instance.  Also be sure to set the random_page_cost parameter in
postgresql.conf to 100 or even higher when you use indexes, as the actual
cost of a random page fetch relative to a sequential one ranges between 50
and 300 on modern disk hardware.  If this parameter is left at the default
of 4, indexes will often be used inappropriately.
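
As a rough sketch (the table and partition names below are invented for
illustration), the relevant pieces on 8.1 look something like this:

    # postgresql.conf
    random_page_cost = 100        # reflect true random-access cost on big tables
    constraint_exclusion = on     # let the planner skip non-matching partitions

    -- 8.1 partitioning is inheritance plus CHECK constraints; each child
    -- table holds one date range, and constraint exclusion prunes the rest.
    CREATE TABLE measurements (
        logdate  date NOT NULL,
        value    integer
    );

    CREATE TABLE measurements_2006m09 (
        CHECK (logdate >= DATE '2006-09-01' AND logdate < DATE '2006-10-01')
    ) INHERITS (measurements);

    CREATE INDEX measurements_2006m09_logdate
        ON measurements_2006m09 (logdate);

Queries that filter on logdate will then touch only the children whose
CHECK constraints match.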
   
> Also, does anyone know if/when any of the MPP stuff will be ported to
> Postgres, or is the plan to keep that separate?

The plan is to keep that separate for now, though we're contributing
technology like partitioning, faster sorting, bitmap indexes, adaptive
nested loops, and hybrid hash aggregation to make big databases work better
in Postgres.

- Luke



