not using indexes on large table

Hi all,

I'm a bit new to PostgreSQL and database design in general so forgive me
for asking stupid questions. ;-)

I've set up a PostgreSQL database on a Linux machine (2 processors, 1GB
memory), and while the database itself resides on a NetApp filer mounted
via NFS, this doesn't seem to impact performance too drastically.

I basically use it for indexed tables without any relations between them,
and so far this has worked perfectly.

For statistics I've created the following table:

  volume varchar(30),
  qtree varchar(255),
  file varchar(512),
  ctime timestamp,
  mtime timestamp,
  atime timestamp

and created separate indexes on the volume and qtree columns.
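In SQL, that setup would look roughly like this (the table name comes from the queries further down; the index names are my own placeholders):

```sql
-- Sketch of the schema described above; the index names are assumed.
create table project_access_times (
    volume varchar(30),
    qtree  varchar(255),
    file   varchar(512),
    ctime  timestamp,
    mtime  timestamp,
    atime  timestamp
);

create index project_access_times_volume_idx on project_access_times (volume);
create index project_access_times_qtree_idx  on project_access_times (qtree);
```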

This table gets filled with the COPY command, and about 2 hours and
some 40 million records later I issue a REINDEX command to make sure the
indexes are accurate. (For what it's worth, there are some 35 distinct
values for volume and some 1450 for qtree.)
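For reference, the load-and-reindex step is roughly the following (the source path here is just a placeholder):

```sql
-- Bulk load, then rebuild the indexes afterwards.
-- The input path is hypothetical.
copy project_access_times from '/path/to/statistics.dat';
reindex table project_access_times;
```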

While filling this table, my database grows to an (expected) 11.5GB.

The problem comes when I try to do a query without a where clause,
because then it completely ignores the indexes and does a full
table scan, which takes over half an hour! (40,710,725 rows, 1,110,258
pages, 1715 seconds)

I've tried several things, but a query like:

  select distinct volume from project_access_times

or

  select distinct qtree from project_access_times

always results in a full sequential table scan, even after a 'vacuum'
and a 'vacuum analyze'.

I even tried 'set enable_seqscan = no', but it still does a full
table scan instead of using the indexes.
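For completeness, here's roughly how I've been checking what the planner decides:

```sql
-- Discourage sequential scans (this is a planner hint, not a hard ban)
-- and then ask the planner which plan it would choose.
set enable_seqscan = off;
explain select distinct volume from project_access_times;
```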

Can anyone tell me whether this is normal behaviour (half an hour seems
over the top to me) and, if not, what I can do about it?

Regards,

Jeroen Kleijer

