Re: Maximum number of tables

Manos Karpathiotakis <mk@xxxxxxxxx> writes:
> Hi all,
> I am using postgres 9.0 under CentOS 5.5 (Dual Xeon Quad Core @ 2.44 GHz,
> 64GB RAM, 2TB RAID 5). In my case, postgres is used as a backend for the RDF
> store Sesame.
> I am trying to store a dataset that consists of approximately 900,000,000
> insertions (organized either in 10 tables, or in an arbitrary number of
> tables exceeding 2000). Loading the data into 10 tables takes about two
> days, while in the second case an org.postgresql.util.PSQLException: ERROR:
> out of shared memory error is thrown.

I'm guessing that you're seeing this through some client-side code that
helpfully suppresses the HINT about raising max_locks_per_transaction
:-(

If your application tries to touch all 2000 tables in one transaction,
you will need to raise that parameter to avoid running out of locktable
space for the AccessShareLocks it needs on all those tables.
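For context, the shared lock table has room for roughly
max_locks_per_transaction * (max_connections + max_prepared_transactions)
object locks, and max_locks_per_transaction defaults to 64, so a single
transaction touching 2000+ tables easily exhausts it.  A minimal sketch of
the change in postgresql.conf (the value here is only illustrative -- size
it to how many relations one transaction actually touches; it takes effect
only after a server restart):

	max_locks_per_transaction = 2048	# default is 64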

However ... generally speaking, I'd guess that whatever you did to
refactor 10 tables into 2000 was a bad idea --- one table with an extra
key column is almost always a better design than N basically-identical
tables.  The latter will be a *lot* harder to use, and probably won't
fix your performance problem anyway.  You should attack the performance
problem in a different way.  Have you read
http://www.postgresql.org/docs/9.0/static/populate.html ?
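As a sketch of the single-table shape (the names here are hypothetical;
the actual Sesame schema will differ):

	CREATE TABLE triples (
	    context_id integer NOT NULL,  -- the extra key column replacing N tables
	    subject    text    NOT NULL,
	    predicate  text    NOT NULL,
	    object     text    NOT NULL
	);

	-- queries then filter on the key instead of picking among 2000 tables:
	SELECT subject, predicate, object FROM triples WHERE context_id = 42;

The advice on that page mostly comes down to using COPY instead of
individual INSERTs, loading the data before creating indexes, and raising
maintenance_work_mem for the index builds.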

			regards, tom lane
