Re: [pgsql-advocacy] Postgres and really huge tables

Is there any experience with PostgreSQL and really huge tables? I'm talking about terabytes (plural) here in a single table. Obviously the table will be partitioned, and probably spread among several different file systems. Any other tricks I should know about?

We have a problem of that form here. When I asked why Postgres wasn't being used, the opinion given was that Postgres would "just <expletive> die". Personally, I'd bet money Postgres could handle the problem (and better than the ad-hoc solution we're currently using). But I'd like a couple of replies of the form "yeah, we do that here, no problem" to wave around.
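
For reference, a minimal sketch of what that setup might look like on the 8.x releases current at the time, where partitioning meant table inheritance plus CHECK constraints, with tablespaces to spread the children over different file systems. The table, column, and path names here are made up for illustration:

    CREATE TABLESPACE disk1 LOCATION '/mnt/disk1/pgdata';
    CREATE TABLESPACE disk2 LOCATION '/mnt/disk2/pgdata';

    CREATE TABLE events (
        id      bigint,
        logdate date,
        payload bytea
    );

    -- One child table per month; the CHECK constraint lets the planner
    -- skip irrelevant partitions once constraint_exclusion is enabled.
    CREATE TABLE events_2007m01 (
        CHECK (logdate >= DATE '2007-01-01' AND logdate < DATE '2007-02-01')
    ) INHERITS (events) TABLESPACE disk1;

    CREATE TABLE events_2007m02 (
        CHECK (logdate >= DATE '2007-02-01' AND logdate < DATE '2007-03-01')
    ) INHERITS (events) TABLESPACE disk2;

    CREATE INDEX events_2007m01_logdate ON events_2007m01 (logdate);
    CREATE INDEX events_2007m02_logdate ON events_2007m02 (logdate);

    SET constraint_exclusion = on;

Inserts still need a trigger or rule to route rows into the right child table; the partitioning chapter of the manual covers that.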

I've done a project using 8.1 on Solaris that had a table close to 2TB. The funny thing is that it just worked fine, even without partitioning.

But, then again: the size of a single record was huge too: ~50K. So there were not insanely many records: "just" something on the order of tens of millions.

The queries were only done on some int fields, so the index of the whole thing fit into RAM.
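
Back of the envelope, taking those numbers at face value: ~2TB at ~50K per record is on the order of 40 million rows, and a btree entry on an int key costs maybe 20-30 bytes with overhead, so the whole index lands somewhere around 1GB. That fits comfortably in RAM on that class of machine.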

A lot of data, but not a lot of records... I don't know if that counts. I guess the people at Greenplum and/or Sun have more exciting stories ;)


Bye, Chris.




