Re: Data Set Growth causing 26+hour runtime, on what we believe to be very simple SQL

On Thu, Jan 21, 2010 at 7:46 PM, Craig Ringer <craig@xxxxxxxxxxxxxxxxxxxxx> wrote:

>> Any assistance would be appreciated; don't worry about slapping me
>> around, I need to figure this out. Otherwise I'm buying new hardware
>> where it may not be required.

> What is the reporting query that takes 26 hours? You didn't seem to
> include it, or any query plan information for it (EXPLAIN or EXPLAIN
> ANALYZE results).
 
It's this query, run 6,000 times, each time with a different makeid:

    SELECT COUNT(DISTINCT uid) AS active_users
    FROM pixelpool.userstats
    WHERE makeid = 'bmw-ferman' AND tagged = true;

Plan:

    Aggregate  (cost=49467.00..49467.01 rows=1 width=8)
      ->  Bitmap Heap Scan on userstats  (cost=363.49..49434.06 rows=13175 width=8)
            Recheck Cond: (makeid = 'b1mw-ferman'::text)
            Filter: tagged
            ->  Bitmap Index Scan on idx_retargetuserstats_makeidtag  (cost=0.00..360.20 rows=13175 width=0)
                  Index Cond: ((makeid = 'b1mw-ferman'::text) AND (tagged = true))



> What sort of activity is happening on the db concurrently with your
> tests? What's your max connection limit?

50 max, and there is basically nothing else going on: usually one person connected, if that. Otherwise it's a cron job that bulk inserts, and then jobs run later that generate the reports off the static data. No deletes or updates happening.
 

> What're your shared_buffers and effective_cache_size settings?

shared_buffers = 1028MB  (had this set at 128MB and 256MB and just recently bumped it higher; it didn't buy me anything)
maintenance_work_mem = 128MB
fsync = on
random_page_cost = 4.0
effective_cache_size = 7GB
default vacuum settings
 

> Could sorts be spilling to disk? Check work_mem size and enable logging
> of tempfiles (see the manual).

work_mem = 100MB                                # min 64kB

Will do. I guess it's possible, but while the report queries are running I don't see a ton of writes, mostly reads.
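For reference, temp-file logging is controlled by log_temp_files in postgresql.conf: 0 logs every temporary file created, while a positive value logs only files larger than that many kilobytes.

    log_temp_files = 0        # log creation and size of all temporary files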

> Does an explicit ANALYZE of the problem table(s) help?

It didn't.
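(That was a plain per-table statistics refresh, i.e. something along the lines of:

    ANALYZE pixelpool.userstats;
)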

Thanks
Tory

