Re: Large table performance


 



Have you run VACUUM and ANALYZE on the table?  What version of Postgres are you running?  What OS are you using?
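If you haven't, something along these lines (using the table name from your query below) will reclaim dead space and refresh the planner's statistics:

```sql
-- Run in psql as a user with sufficient privileges.
-- VACUUM reclaims dead row space; ANALYZE updates the
-- statistics the planner uses to choose between plans.
VACUUM ANALYZE large_table;
```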
 
This looks like a straightforward query.  With any database, the first time you run the query it's going to be slower because it actually has to read off disk.  The second time it's faster because some or all of the data/indexes will be cached.  However, 10 seconds sounds like a long time for pulling 10,000 records out of a table of 3 million.  If you post an EXPLAIN ANALYZE, it might give us a clue.
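For example, against the query you posted (the teacher_id value here is just a placeholder; an un-cached run is most informative, so pick an id you haven't fetched recently):

```sql
-- Shows the actual plan and per-node timings.
EXPLAIN ANALYZE
SELECT grades FROM large_table WHERE teacher_id = 42;
```

If the output shows a Seq Scan on large_table rather than an Index Scan using your teacher_id index, that alone could account for the slow un-cached fetches.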
 
Dave
 
-----Original Message-----
From: pgsql-performance-owner@xxxxxxxxxxxxxx [mailto:pgsql-performance-owner@xxxxxxxxxxxxxx] On Behalf Of Mark Dobbrow
Sent: Friday, January 12, 2007 6:31 PM
To: pgsql-performance@xxxxxxxxxxxxxx
Subject: [PERFORM] Large table performance

Hello -

I have a fairly large table (3 million records), and am fetching 10,000 non-contiguous records doing a simple select on an indexed column, e.g.

select grades from large_table where teacher_id = X

This is a test database, so the number of records is always 10,000 and I have 300 different teacher ids.

The problem is, sometimes fetching un-cached records takes 0.5 secs and sometimes (more often) it takes more like 10.0 seconds.

(fetching the same records for a given teacher_id a second time takes about 0.25 secs)

Has anyone seen similar behavior or know what the solution might be?

any help much appreciated,
Mark



ps. My shared_buffers is set at 5000 (kernel max), and work_mem=8192


