Re: Postgres and really huge tables

On Thu, 2007-01-18 at 14:31, Brian Hurt wrote:
> Is there any experience with Postgresql and really huge tables?  I'm 
> talking about terabytes (plural) here in a single table.  Obviously the 
> table will be partitioned, and probably spread among several different 
> file systems.  Any other tricks I should know about?
> 
> We have a problem of that form here.  When I asked why postgres wasn't 
> being used, the opinion that postgres would "just <expletive> die" was 
> given.  Personally, I'd bet money postgres could handle the problem (and 
> better than the ad-hoc solution we're currently using).  But I'd like a 
> couple of replies of the form "yeah, we do that here- no problem" to 
> wave around.

It really depends on what you're doing.

Is a single user updating every row once an hour, or are hundreds of
users updating dozens of rows at the same time?

PostgreSQL probably wouldn't die, but it may well be that for certain
batch processing operations it's a poorer choice than awk/sed or perl.

If you do want to tackle it with PostgreSQL, you'll likely want to build
a truly fast drive subsystem: something like dozens to hundreds of
drives in a RAID-10 setup behind a battery-backed cache, and a main
server with lots of memory on board.
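
If you go the partitioning route mentioned in the original post, here's
a rough sketch of how that looks in 8.x (inheritance plus CHECK
constraints; the table, paths, and tablespace names here are all made
up for illustration):

-- One tablespace per file system / drive array (hypothetical paths):
CREATE TABLESPACE fast_a LOCATION '/mnt/array_a/pgdata';
CREATE TABLESPACE fast_b LOCATION '/mnt/array_b/pgdata';

-- The parent table holds no data itself:
CREATE TABLE events (
    event_time  timestamptz NOT NULL,
    payload     text
);

-- Child partitions carry CHECK constraints so the planner can skip
-- partitions that can't match the query:
CREATE TABLE events_2007_01 (
    CHECK (event_time >= '2007-01-01' AND event_time < '2007-02-01')
) INHERITS (events) TABLESPACE fast_a;

CREATE TABLE events_2007_02 (
    CHECK (event_time >= '2007-02-01' AND event_time < '2007-03-01')
) INHERITS (events) TABLESPACE fast_b;

-- Let the planner use the CHECK constraints to prune partitions:
SET constraint_exclusion = on;

Inserts then get routed to the right child with a trigger or a rule,
and queries that filter on event_time only touch the matching
partitions. Since each tablespace can live on its own array, that's
also how you spread one logical table across several file systems.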

But, really, it depends on what you're doing to the data.

