Hi, following up on my earlier questions about spatial searches on a 10M
row table: most of my queries now return within 2 minutes. Generally this
is not too bad, though faster is always better.
Interestingly, the CUBE index on the table in question is about 3GB (the
table itself is about 14GB). Not knowing the details of the Postgres
internals, I assume that when a query uses the index it will need to read
a 3GB file from disk. Is that a correct assumption?
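(For anyone who wants to reproduce the numbers, something along these
lines reports the relation sizes on 8.1 or later; the relation names
below are just placeholders:

  SELECT pg_size_pretty(pg_relation_size('points')) AS table_size,
         pg_size_pretty(pg_relation_size('points_coords_gist_idx')) AS index_size;
)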
In such a situation, is there a way to keep the index in memory? My
machine has 8GB of RAM installed and currently has about 7.4GB free
(64-bit Linux, kernel 2.6.9).
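To be concrete about what I mean by "keep it in memory": is raising the
usual postgresql.conf settings the right approach, or is there something
more direct? The numbers below are only guesses for an 8GB box:

  shared_buffers = 2GB          # or the equivalent number of 8kB pages on older releases
  effective_cache_size = 6GB    # planner hint about the OS cache, not an allocation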
A side effect of the index being this big is that a query that ends up
doing a seq scan (say, one filtering on cube_distance) takes longer than
one that uses the index, but not significantly longer. And this is on a
10M row table.
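For concreteness, the two shapes of query I'm comparing look roughly like
the following (table, column and cube values are just placeholders;
EXPLAIN ANALYZE shows which plan each one gets):

  -- usually a seq scan: the GiST index can't be used for a cube_distance() filter
  EXPLAIN ANALYZE
  SELECT id FROM points
  WHERE cube_distance(coords, '(0.1, 0.2, 0.3)'::cube) < 0.5;

  -- can use the GiST index: containment in a bounding cube
  -- (<@ is the contained-in operator; older cube releases spell it ~)
  EXPLAIN ANALYZE
  SELECT id FROM points
  WHERE coords <@ cube(array[0.0, 0.0, 0.0], array[1.0, 1.0, 1.0]);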
What strategies do people follow when the index becomes very big?
Thanks,
-------------------------------------------------------------------
Rajarshi Guha <rguha@xxxxxxxxxxx>
GPG Fingerprint: 0CCA 8EE2 2EEB 25E2 AB04 06F7 1BB9 E634 9B87 56EE
-------------------------------------------------------------------
Does Ramanujan know Polish?
-- E.B. Ross