It would be interesting to know what the bottleneck is for temp tables
for you. They do not go via the buffer cache, they are stored in
private memory in the backend, they are not xlogged, nor are they
flushed to disk on backend exit. They're about as close to in-memory tables as
you're going to get...
Hum...
Timings are the mean over 100 runs of each query, including the
roundtrip to localhost, measured from a Python script.
0.038 ms BEGIN
0.057 ms SELECT 1
0.061 ms COMMIT
0.041 ms BEGIN
0.321 ms SELECT count(*) FROM bookmarks
0.080 ms COMMIT
this test table contains about 250 rows
0.038 ms BEGIN
0.378 ms SELECT * FROM bookmarks ORDER BY annonce_id DESC LIMIT 20
0.082 ms COMMIT
the ORDER BY uses an index
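(For reference, the index used here would be something along these lines;
its actual definition isn't shown in this post, so the name is made up:

CREATE INDEX bookmarks_annonce_id_idx ON bookmarks (annonce_id);

A plain btree on annonce_id can be scanned backwards, so it serves the
ORDER BY annonce_id DESC just fine.)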
0.042 ms BEGIN
0.153 ms DECLARE tmp SCROLL CURSOR WITHOUT HOLD FOR SELECT * FROM
bookmarks ORDER BY annonce_id DESC LIMIT 20
0.246 ms FETCH ALL FROM tmp
0.048 ms MOVE FIRST IN tmp
0.246 ms FETCH ALL FROM tmp
0.048 ms CLOSE tmp
0.084 ms COMMIT
the CURSOR is about as fast as a simple query
0.101 ms BEGIN
1.451 ms CREATE TEMPORARY TABLE tmp ( a INTEGER NOT NULL, b INTEGER NOT
NULL, c TIMESTAMP NOT NULL, d INTEGER NOT NULL ) ON COMMIT DROP
0.450 ms INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id DESC
LIMIT 20
0.443 ms ANALYZE tmp
0.365 ms SELECT * FROM tmp
0.310 ms DROP TABLE tmp
32.918 ms COMMIT
CREATING the table is OK, but what happens on COMMIT? I hear the disk
seeking frantically.
With fsync=off, I get this:
0.090 ms BEGIN
1.103 ms CREATE TEMPORARY TABLE tmp ( a INTEGER NOT NULL, b INTEGER NOT
NULL, c TIMESTAMP NOT NULL, d INTEGER NOT NULL ) ON COMMIT DROP
0.439 ms INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id DESC
LIMIT 20
0.528 ms ANALYZE tmp
0.364 ms SELECT * FROM tmp
0.313 ms DROP TABLE tmp
0.688 ms COMMIT
Getting closer?
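(If you want to reproduce this, note that fsync is a server-wide setting:
it has to be changed in postgresql.conf and the configuration reloaded.
You can check what is currently in effect from any session with:

SHOW fsync;
)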
I'm betting on system catalog updates: I get the same timings with
ROLLBACK instead of COMMIT, and temp tables have a row in pg_class...
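You can look at the catalog entry yourself; something like this (purely
illustrative, run inside the transaction since the table is ON COMMIT DROP)
shows the temp table getting its own pg_class row just like a permanent table:

SELECT relname, relnamespace, relpages, reltuples FROM pg_class WHERE relname = 'tmp';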
Another temporary table wart:
BEGIN;
CREATE TEMPORARY TABLE tmp ( a INTEGER NOT NULL, b INTEGER NOT NULL, c
TIMESTAMP NOT NULL, d INTEGER NOT NULL ) ON COMMIT DROP;
INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id DESC LIMIT 20;
EXPLAIN ANALYZE SELECT * FROM tmp;
QUERY PLAN
---------------------------------------------------------------------------------------------------
Seq Scan on tmp (cost=0.00..25.10 rows=1510 width=20) (actual time=0.003..0.006 rows=20 loops=1)
Total runtime: 0.030 ms
(2 rows)
ANALYZE tmp;
EXPLAIN ANALYZE SELECT * FROM tmp;
QUERY PLAN
------------------------------------------------------------------------------------------------
Seq Scan on tmp (cost=0.00..1.20 rows=20 width=20) (actual time=0.003..0.008 rows=20 loops=1)
Total runtime: 0.031 ms
We see that the temp table has a badly wrong estimated rowcount (1510
instead of 20) until it has been ANALYZEd.
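Presumably the 1510 comes from the planner having no statistics at all for
the fresh table and falling back on default size assumptions. You can watch
the stored estimates change around the ANALYZE with something like this
(illustrative):

SELECT relpages, reltuples FROM pg_class WHERE relname = 'tmp';

which shows 0 / 0 right after the CREATE and the real values once ANALYZE
has run.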
However, temporary tables do not support concurrent access (obviously);
and in the case of on-commit-drop tables, inserts can't be rolled back
(obviously), so an accurate rowcount could be maintained via a simple
counter...