Re: Thousands of schemas and ANALYZE goes out of memory

On Tue, Oct 2, 2012 at 5:09 PM, Jeff Janes <jeff.janes@xxxxxxxxx> wrote:

> I don't know how the transactionality of analyze works.  I was
> surprised to find that I even could run it in an explicit transaction
> block, I thought it would behave like vacuum and create index
> concurrently in that regard.
>
> However, I think that that would not solve your problem.  When I run
> analyze on each of 220,000 tiny tables by name within one session
> (using autocommit, so each in a transaction), it does run about 4
> times faster than just doing a database-wide vacuum which covers those
> same tables.  (Maybe this is the lock/resource manager issue that has
> been fixed for 9.3?)
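
The per-table approach described in the quote above (one ANALYZE per
table, each in its own autocommit transaction, instead of a single
database-wide "analyze;") can be scripted. Below is a minimal sketch
using psycopg2; the connection string and the use of
pg_stat_user_tables to enumerate tables are illustrative assumptions,
not something from the original posts.

    import psycopg2

    # Placeholder connection string; adjust for your setup.
    conn = psycopg2.connect("dbname=mydb")
    conn.autocommit = True   # each statement below commits on its own

    with conn.cursor() as cur:
        # List every user table as a properly quoted schema.table name.
        cur.execute("""
            SELECT format('%I.%I', schemaname, relname)
            FROM pg_stat_user_tables
        """)
        tables = [row[0] for row in cur.fetchall()]

    with conn.cursor() as cur:
        for t in tables:
            # One ANALYZE command per table, each committed separately.
            cur.execute("ANALYZE " + t)

    conn.close()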

For the record, the culprit that causes "analyze;" of a database with
a large number of small objects to be quadratic in time is
"get_tabstat_entry" (in the backend's statistics-collection code,
pgstat.c), and it is not fixed for 9.3.

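To illustrate the shape of that problem (a toy model only, not
PostgreSQL's actual code): if each relation a command touches is looked
up with a linear scan over the table-stat entries the backend has
already accumulated, the total work grows quadratically with the number
of tables touched by a single command.

    # Toy model: linear-scan lookup over an ever-growing list of
    # per-backend table-stat entries.  Not PostgreSQL's code.
    entries = []

    def get_tabstat_entry(rel_oid):
        for e in entries:            # scan everything accumulated so far
            if e == rel_oid:
                return e
        entries.append(rel_oid)      # not seen before: add a new entry
        return rel_oid

    # With n tables this does roughly n*(n+1)/2 comparisons in total; at
    # the 220,000 tables mentioned above that is on the order of 2e10.
    # The demo uses a much smaller n so it finishes quickly.
    for oid in range(5000):
        get_tabstat_entry(oid)
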
Cheers,

Jeff



