I'm curious why you have so many partial indexes. Are you trying to create a custom index per query? You might want to consider making the indexes general, and removing the redundant ones (those that share the same prefix of indexed columns).
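As a sketch of what I mean (the table and column names here are hypothetical, since the actual schema isn't shown): several partial indexes that share the same key prefix can often be replaced by a single general index, which the planner can then use for any value of the filtered column:

```sql
-- Hypothetical: partial indexes like these, one per "what" value...
-- CREATE INDEX idx_events_visit ON events (created_at) WHERE what = 'visit';
-- CREATE INDEX idx_events_click ON events (created_at) WHERE what = 'click';
DROP INDEX IF EXISTS idx_events_visit;
DROP INDEX IF EXISTS idx_events_click;

-- ...can usually be replaced by one general index. Putting "what" first
-- lets the planner satisfy WHERE what = '...' predicates for every value.
CREATE INDEX idx_events_on_what_created_at ON events (what, created_at);
```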
Secondly, your table is 102 GB, so clearly there's a lot of data here. How many rows is that? I would further suggest partitioning this table so that no partition holds more than about 10 million rows (I've done this using an id % 100 computation). Maybe in your case it makes sense to partition on the "what" column, because it appears you are already trying to do that with your partial indexes.
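A sketch of the id % 100 idea (table and column names are again hypothetical): on newer PostgreSQL this can be expressed as declarative hash partitioning, which routes rows by a hash of id rather than literally id % 100, but achieves the same even spread; on older versions you'd use inheritance with CHECK constraints and an insert trigger instead:

```sql
-- Declarative hash partitioning (PostgreSQL 11+): rows are distributed
-- across 100 partitions by a hash of id, analogous to id % 100.
CREATE TABLE events (
    id         bigint      NOT NULL,
    what       text        NOT NULL,
    created_at timestamptz NOT NULL
) PARTITION BY HASH (id);

-- One partition per remainder; in practice generate all 100 with a script.
CREATE TABLE events_p0 PARTITION OF events
    FOR VALUES WITH (MODULUS 100, REMAINDER 0);
CREATE TABLE events_p1 PARTITION OF events
    FOR VALUES WITH (MODULUS 100, REMAINDER 1);
-- ... events_p2 through events_p99 ...
```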
On Wed, Apr 19, 2017 at 10:11 PM, Samuel Williams <space.ship.traveller@xxxxxxxxx> wrote:
Okay, so after changing longitude/latitude to float4, and
re-organizing the table a bit, I got the query down from about 8
minutes to 40 seconds.
The details are in the gist comments:
https://gist.github.com/ioquatix/bddda36d9e4ffaceb7a62d7b62259121
Now I just need to get performance another two orders of magnitude
better. Can we make the index a bit more coarse-grained, perhaps
convert long/lat to integers or something, use a Hilbert index, use a
PostGIS index... ideas?
--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general