Good morning,
We run Postgres 9.4. We have a query that has been running for more than 6 hours. I have attached the query, the EXPLAIN (ANALYZE, BUFFERS) output, the sizes of the principal tables (june29.tocommunity.op), the table descriptions of the principal tables, the postgresql.conf file, and the plpgsql function where the bottleneck is occurring (number_of_sends.plpgsql).
One observation I would like specific comments on: the "Buffers: shared hit" figure in the EXPLAIN output shows 532 million hit buffers. Could that be a clue to the 6-hour runtime? Why so many? The largest table (queuenodes) is only 6,500,000 pages, and the next largest (emailrcpts) is only 3,700,000 pages. If this is a valid observation, is there a memory setting I could use to alleviate it? Or am I interpreting it wrong? Another data point: when I run this query again, with the cache warm, the query completes in less than a minute and "Buffers: shared hit" drops to only 232,000.
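For reference, the numbers above come from commands along these lines (the full query is in the attachments; the table names are the ones discussed here):

```sql
-- Per-node buffer counts (shared hit / read) for the slow query;
-- the complete query text is attached as qaq.june28.2nd.query
EXPLAIN (ANALYZE, BUFFERS) SELECT ... ;

-- Table sizes in 8 kB pages, from the planner statistics
SELECT relname, relpages
FROM pg_class
WHERE relname IN ('queuenodes', 'emailrcpts');
```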
Any observations, and/or an explanation of the "Buffers" line in the EXPLAIN output, would be appreciated.
--
Mark Steben
Database Administrator
@utoRevenue | Autobase
CRM division of Dominion Dealer Solutions
95D Ashley Ave.
West Springfield, MA 01089
t: 413.327-3045
f: 413.383-9567
www.fb.com/DominionDealerSolutions
www.twitter.com/DominionDealer
www.drivedominion.com
Attachment:
qaq.june28.2nd.query.expl.op
Description: Binary data
Attachment:
tabledescriptions.op
Description: Binary data
Attachment:
postgresql.conf.cpy
Description: Binary data
Attachment:
number_of_sends.plpgsql
Description: Binary data
Attachment:
june29.tocommunity.op
Description: Binary data
Attachment:
qaq.june28.2nd.query
Description: Binary data