Indexes are on the partitions, my bad.
If you need to insert lots of data, it is faster to create the indexes
afterwards (and then you can also create them in parallel, since you have
lots of RAM and cores).
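A rough sketch of that workflow (the file path is purely illustrative; the index name is taken from the plan below):

DROP INDEX IF EXISTS bigTable_time_index;          -- drop before the bulk load
COPY bigTable FROM '/path/to/data.csv' WITH CSV;   -- load without index maintenance
SET maintenance_work_mem TO '2GB';                 -- big sort buffer for the index build
CREATE INDEX bigTable_time_index ON bigTable ("time");
-- To build several indexes in parallel, run one CREATE INDEX per session;
-- each build gets its own maintenance_work_mem.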
The explain plan looks like this:
EXPLAIN SELECT * FROM bigTable
WHERE "time" >= extract('epoch' FROM timestamp '2009-08-31 00:00:00')::int4
  AND "time" <= extract('epoch' FROM timestamp '2009-08-31 23:59:59')::int4;
                                           QUERY PLAN
------------------------------------------------------------------------------------------------
 Index Scan using bigTable_time_index on bigTable  (cost=0.00..184.04 rows=1 width=129)
   Index Cond: (("time" >= 1251676800) AND ("time" <= 1251763199))
(2 rows)
What is slow, then: the INSERT or the SELECT?
Can you EXPLAIN ANALYZE the SELECT?
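That is, the same query as above, but with actual run times:

EXPLAIN ANALYZE
SELECT * FROM bigTable
WHERE "time" >= extract('epoch' FROM timestamp '2009-08-31 00:00:00')::int4
  AND "time" <= extract('epoch' FROM timestamp '2009-08-31 23:59:59')::int4;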
If "bigTable" is not clustered on "time" you'll get lots of random
accesses, it'll be slow.
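If you'd rather not rebuild the table by hand, CLUSTER does the physical reordering for you; a sketch, reusing the index name from the plan above (note it takes an exclusive lock and rewrites the whole table, and this USING syntax needs a reasonably recent PostgreSQL):

CLUSTER bigTable USING bigTable_time_index;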
If you want to partition your huge data set by "time", and the data isn't
already ordered by "time" on disk, you could do this:
-- Check your shared_buffers etc. first; with 32GB of RAM you can afford a
-- very large sort buffer for this session:
SET work_mem TO '10GB';

-- Huge sort, will take some time:
CREATE TABLE tmp AS SELECT * FROM bigTable ORDER BY "time";

SET maintenance_work_mem TO '10GB';
CREATE INDEX tmp_time ON tmp ("time");

-- Carve out each partition (beginning/end are placeholders for your
-- range bounds):
CREATE TABLE partition1 AS
    SELECT * FROM tmp WHERE "time" BETWEEN beginning AND end;
-- (repeat for each partition...)
Since tmp is clustered on "time", you'll get a nice fast bitmap scan, and
you won't need to seq-scan bigTable N times (or index-scan it randomly).
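And, per the first point above, the indexes then go on the partitions themselves; something like this, with illustrative names:

CREATE INDEX partition1_time ON partition1 ("time");
ANALYZE partition1;
-- (repeat for each partition...)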