Scott, Vick, the vast majority of the data is generic, but there are some
specific events we need to look up quickly, probably fewer than a few
hundred thousand records. We did evaluate partial indexes vs. full indexes.
The partial index speeds up our specific queries significantly while taking
very little space (often under 100 MB, compared with a full index on the
table, which is currently around 20 GB).

On 21 April 2017 at 03:01, Scott Marlowe <scott.marlowe@xxxxxxxxx> wrote:
> On Thu, Apr 20, 2017 at 6:30 AM, Vick Khera <vivek@xxxxxxxxx> wrote:
>> I'm curious why you have so many partial indexes. Are you trying to make
>> custom indexes per query? It seems to me you might want to consider making
>> the indexes general and removing the redundant ones (those that share the
>> same prefix of indexed fields).
>>
>> Secondly, your table is 102 GB, so clearly there's a lot of data here. How
>> many rows does that take? I would further suggest that you partition this
>> table so that there are no more than about 10 million rows per partition
>> (I've done this with an id % 100 computation). Maybe in your case it makes
>> sense to partition on the "what" field, since it appears you are already
>> trying to do that with your partial indexes.
>
> I would think a two-field index might be just as effective and not
> require a lot of maintenance.
>
> --
> To understand recursion, one must first understand recursion.
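
For anyone following along, here is a rough sketch of the trade-off being
discussed. The table and column names are made up for illustration, not our
actual schema:

    -- full composite index, along the lines of Scott's two-field suggestion:
    -- it covers every value of "what", so it grows with the whole table
    CREATE INDEX events_what_created_idx
        ON events (what, created_at);

    -- partial index: only rows for one specific event type are indexed,
    -- so it stays small regardless of how large the table gets
    CREATE INDEX events_special_created_idx
        ON events (created_at)
        WHERE what = 'special_event';

    -- the planner only considers the partial index when the query's WHERE
    -- clause implies the index predicate, e.g.:
    SELECT *
    FROM events
    WHERE what = 'special_event'
      AND created_at > now() - interval '1 day';

That is essentially why we ended up with one partial index per hot event
type: each one is tiny, while a general index over the whole table is not.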
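
And for completeness, a minimal sketch of the kind of partitioning Vick
describes. On the 9.x releases current today this means table inheritance
(declarative partitioning only arrives in PostgreSQL 10); the names and the
modulus here are again purely illustrative:

    -- parent table stays empty; each child holds the rows for one bucket
    CREATE TABLE events_part_0 (
        CHECK (id % 100 = 0)
    ) INHERITS (events);

    CREATE TABLE events_part_1 (
        CHECK (id % 100 = 1)
    ) INHERITS (events);
    -- ... and so on up to events_part_99

    -- a trigger (or the application) has to route inserts to the right
    -- child, and the planner only skips partitions when the query repeats
    -- the id % 100 expression and constraint exclusion is enabled.

Partitioning on the "what" field instead would look the same, just with
CHECK (what = '...') constraints on each child.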