Re: PostgreSQL is slow with a larger table even though it is in RAM

On Tue, Mar 25, 2008 at 3:35 AM, sathiya psql <sathiya.psql@xxxxxxxxx> wrote:
> Dear Friends,
>      I have a table with 32 lakh (3.2 million) records in it. The table
> is nearly 700 MB, and my machine has 1 GB + 256 MB of RAM. I created a
> tablespace in RAM and then created this table in it.
>
>     So now everything is in RAM. If I do a count(*) on this table, it
> returns 327600 in 3 seconds. Why does it take 3 seconds? I am sure that
> no disk I/O is happening. (Using vmstat I confirmed there is no disk
> I/O, and swap is not used either.)
>
> Any idea on this?
>
> I searched a lot in newsgroups but can't find anything relevant
> (everyone there is talking about disk access speed, and here I don't
> need to worry about disk access).
>
> If required, I will give more information on this.
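
For reference, here is roughly how I assume the setup above was built;
the mount point, tablespace name, and table definition below are my
guesses, not details from your post:

  -- assuming a tmpfs mount already exists at /mnt/pgram, owned by
  -- the postgres user (e.g. mount -t tmpfs -o size=800M tmpfs /mnt/pgram)
  CREATE TABLESPACE ramspace LOCATION '/mnt/pgram';
  -- the real table has other columns; a single column stands in here
  CREATE TABLE big_table (id integer) TABLESPACE ramspace;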

Two things:

- Are you VACUUMing regularly? You could have a lot of dead rows, with
the table spread out over many pages of mostly dead space. That would
make sequential scans *very* slow. (A quick check is sketched after
this list.)

- What is your shared_buffers set to? If it's very low, Postgres will
keep copying pages from the RAM-disk tablespace into shared buffers;
little would stay cached, and performance would suffer. (See the second
sketch below.)
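
On the first point: a quick check is to compare the table's page count
against its live row count, and then vacuum it. A sketch, with
'your_table' standing in for the real table name:

  -- how many 8 kB pages the table occupies, and the planner's row estimate
  SELECT relpages, reltuples FROM pg_class WHERE relname = 'your_table';

  -- reclaim dead space; VERBOSE reports how many dead row versions it found
  VACUUM VERBOSE your_table;

About 700 MB of live data should be on the order of 90,000 pages; if
relpages is far above that, the table is bloated.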
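
On the second point: you can check the current value from psql and raise
it in postgresql.conf. The 256MB figure below is only an assumption for
a machine with 1.25 GB of RAM, and a server restart is needed for it to
take effect:

  -- current setting
  SHOW shared_buffers;

  # in postgresql.conf (unit suffixes work on 8.2 and later; older
  # releases take a count of 8 kB buffers instead):
  shared_buffers = 256MB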

FWIW, I did a SELECT count(*) on a table with just over 300,000 rows,
and it took only 0.28 seconds.
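
If you want comparable numbers, psql's \timing shows the elapsed time of
each statement, and EXPLAIN ANALYZE shows the plan with its actual
runtime ('your_table' is again a placeholder):

  \timing
  SELECT count(*) FROM your_table;

  -- or, to see where the time goes:
  EXPLAIN ANALYZE SELECT count(*) FROM your_table;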

Peter

