Many tables vs. large tables

What is the general consensus around here: is it better to have many smaller tables, or a few large ones?

I'm looking at a database model where a client has around 5,500 spatial (PostGIS) tables, and the size of each one varies greatly, from a few hundred rows to over 250,000.

Speed is of the utmost importance. I'm investigating various options, such as grouping the tables by a common attribute or spatial type (POINT, LINESTRING, etc.) into a handful of multi-million-tuple tables, roughly along the lines of the sketch below.
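For illustration, here is a rough sketch of the grouping approach, assuming one consolidated table per geometry type. The table and column names are made up; the real schema would carry each layer's actual attributes:

-- One consolidated table per geometry type, with a layer id that
-- identifies which of the original ~5,500 tables a row came from.
CREATE TABLE all_points (
    layer_id integer NOT NULL,  -- which original table/layer
    name     text               -- placeholder for real attributes
);
SELECT AddGeometryColumn('all_points', 'geom', 4326, 'POINT', 2);

CREATE INDEX all_points_layer_idx ON all_points (layer_id);
CREATE INDEX all_points_geom_idx  ON all_points USING GIST (geom);

-- Typical query: filter on the layer first, then spatially.
SELECT * FROM all_points
 WHERE layer_id = 42
   AND geom && ST_MakeEnvelope(0, 0, 10, 10, 4326);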

Or table inheritance could be my friend here, in terms of performance. That is, using inheritance and constraint exclusion, the query planner could quickly isolate the tables of interest. It's orders of magnitude faster to perform a sequential scan over a relatively small table than it is to do an index scan on a large, likely unclustered one. The questions are: can the underlying OS handle thousands of tables in a tablespace? Would it overwhelm the planner to perform constraint exclusion across 50-100 tables? Can it be done relatively quickly? A rough sketch of what I mean is below.
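To make the inheritance idea concrete (again, all names are invented): one child table per original layer, each with a CHECK constraint the planner can use for constraint exclusion:

-- Parent table holds the common structure; children inherit it.
CREATE TABLE features (
    layer_id integer NOT NULL,
    name     text
);
SELECT AddGeometryColumn('features', 'geom', 4326, 'POINT', 2);

-- One child per original layer, constrained so the planner can
-- exclude it when the WHERE clause doesn't match.
CREATE TABLE features_layer_42 (
    CHECK (layer_id = 42)
) INHERITS (features);

CREATE INDEX features_layer_42_geom_idx
    ON features_layer_42 USING GIST (geom);

-- Make sure constraint exclusion is enabled (the default of
-- 'partition' on 8.4+ already covers inheritance children).
SET constraint_exclusion = on;

-- The plan should only touch features_layer_42:
EXPLAIN SELECT * FROM features WHERE layer_id = 42;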

Clearly, serious testing is in order, but I just wanted to get a feel for things before I dive in.

Cheers,
Kevin

