Big tables in Postgres and sub-records

Hi,

We are working on a project that will have very big
tables (at least, very big in my opinion). Initially we
estimate that three of the tables in our database will
each hold between 30 and 60 million records, and we
expect each of them to pass 100 million records very
soon. Each record will also have, on average, 120
sub-records. To keep the tables from growing even
bigger, we plan to store these sub-records inside the
main record. The sub-records are simple key/value
pairs. We are not sure which representation is better:
an array, a serialized object (we are using Java on
the server), or some other solution we are not aware
of. Performance is not a concern for these
sub-records, since they will never participate in any
query; the only time we need them is after we have
found a record and are extracting its complete details
(or on insert/update).
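
To make the serialized-object option concrete, here is a minimal
sketch of what I have in mind (the main_record table, its columns, the
connection URL, and the credentials are all placeholders, not our real
schema). It writes the key/value pairs into a bytea column with
standard Java serialization and reads them back through JDBC:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.HashMap;

public class SubRecordCodec {

    // Serialize the key/value sub-records into a byte[] suitable
    // for a bytea column.
    static byte[] toBytes(HashMap<String, String> subRecords) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(buf);
        out.writeObject(subRecords);
        out.close();
        return buf.toByteArray();
    }

    // Restore the sub-records when the main record is read back.
    @SuppressWarnings("unchecked")
    static HashMap<String, String> fromBytes(byte[] raw)
            throws IOException, ClassNotFoundException {
        ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(raw));
        return (HashMap<String, String>) in.readObject();
    }

    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");  // Postgres JDBC driver
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/testdb", "user", "secret");

        HashMap<String, String> subRecords = new HashMap<String, String>();
        subRecords.put("color", "red");
        subRecords.put("size", "42");

        // Hypothetical schema: main_record(id bigint primary key, details bytea).
        PreparedStatement ins = conn.prepareStatement(
                "UPDATE main_record SET details = ? WHERE id = ?");
        ins.setBytes(1, toBytes(subRecords));
        ins.setLong(2, 1L);
        ins.executeUpdate();

        PreparedStatement sel = conn.prepareStatement(
                "SELECT details FROM main_record WHERE id = ?");
        sel.setLong(1, 1L);
        ResultSet rs = sel.executeQuery();
        if (rs.next()) {
            System.out.println(fromBytes(rs.getBytes(1)));
        }
        conn.close();
    }
}

One trade-off we are aware of: bytes written by ObjectOutputStream are
opaque to SQL and tied to Java, whereas a text[] array or a hand-rolled
delimited string would stay readable from psql.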

So my questions are:
As a database admin, what would you do with this kind
of database and its big tables? Is it a good idea to
break those tables into sub-tables (partitioning by
inheritance on, say, the first letter of the primary
key; a sketch of that idea follows below), or is
clustering enough, or is there some other solution?
And what is your opinion on the sub-records?
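
For reference, this is the kind of inheritance layout I mean. It is
only a sketch, again with placeholder table names and credentials, and
it assumes a text primary key:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreatePartitions {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/testdb", "user", "secret");
        Statement st = conn.createStatement();

        // Parent table: holds no rows itself, the children carry the data.
        st.executeUpdate(
                "CREATE TABLE main_record (id text PRIMARY KEY, details bytea)");

        // One child per leading letter of the key. The CHECK constraint
        // lets the planner skip children that cannot match the query
        // (requires constraint_exclusion = on).
        for (char c = 'a'; c <= 'z'; c++) {
            st.executeUpdate(
                    "CREATE TABLE main_record_" + c +
                    " (CHECK (substr(id, 1, 1) = '" + c + "'))" +
                    " INHERITS (main_record)");
            // Indexes are not inherited, so each child needs its own.
            st.executeUpdate(
                    "CREATE INDEX main_record_" + c + "_id_idx" +
                    " ON main_record_" + c + " (id)");
        }
        st.close();
        conn.close();
    }
}

As far as I know, the parent's PRIMARY KEY is not inherited either, so
uniqueness would only be enforced per child, and queries need a
predicate matching the CHECK constraint for constraint exclusion to
kick in.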

Thanks in advance.
David.

