
Operational performance: one big table versus many smaller tables

If I have various "one up" record types that are structurally similar (same columns) and are mostly retrieved one at a time by their primary keys, is there any performance or operational benefit to splitting millions of such records across multiple tables (say, by their application-level purpose) rather than keeping them all in one big table? I am thinking of both PG query performance (handling queries against multiple tables, each with hundreds of thousands of rows, versus queries against a single table with millions of rows) and operational performance (number of WAL files created, pg_dump, vacuum, etc.).
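For concreteness, here is roughly what I mean (the table and column names below are made up for illustration; the real schemas just share the same column layout):

    -- Option A: one big table, with a column for the application-level purpose
    CREATE TABLE records (
        id      bigserial PRIMARY KEY,
        purpose text  NOT NULL,   -- e.g. 'invoice', 'receipt', ...
        payload jsonb NOT NULL
    );

    -- Option B: one structurally identical table per purpose
    CREATE TABLE invoice_records (
        id      bigserial PRIMARY KEY,
        payload jsonb NOT NULL
    );
    CREATE TABLE receipt_records (
        id      bigserial PRIMARY KEY,
        payload jsonb NOT NULL
    );

    -- Either way, the typical access is a single-row primary-key lookup:
    -- SELECT * FROM records WHERE id = $1;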

If anybody has any tips, I'd much appreciate it.

Thanks,
David

