On Jun 01, armand pirvu modulated:
> Overall could it be that the optimizer blatantly ignores a scan index
> which is cheaper than a table scan, or jsonb implementation still has
> a long way to come up or the way it is used in my case is not the one
> designed for ?
>

If I remember correctly, isn't a compound index always just a btree? In
general, I have had better luck with several smaller btree indexes than
with one large compound one.

Unless your entire query can be answered from an index-only lookup, the
extra columns just bloat the btree index. So you might as well use a
simpler compound index for the regular scalar row keys; that index will
be much smaller without the baggage of the jsonb values at its leaves.
The planner can read the jsonb from the actual candidate rows if it is
going to have to visit them anyway for other WHERE or SELECT clauses.

If the sparseness of your query comes from the content of the jsonb
values rather than from the other scalar row keys, I think you'd need
some kind of GIN index over the contents of the jsonb documents to find
the small subset of candidate rows by those sparse criteria. Trigram is
just one example of a GIN indexing scheme.

If your jsonb documents are "flat", i.e. just a bag of key-value pairs
and not arbitrarily nested jsonb structures, you might also explode them
into arrays of keys or values as separate indexed expressions. Then you
could GIN-index the arrays and quickly find the subset of rows with
certain unusual keys or unusual values, though you would still have to
follow up with a more exact check for the combination of key and value.

Karl

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
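To make the "lean scalar btree plus GIN over the jsonb" suggestion concrete, here is a minimal sketch. The table and column names are assumptions for illustration only; the original post does not show the schema.

```sql
-- Hypothetical table: names are illustrative, not from the original thread.
CREATE TABLE events (
    id         bigserial   PRIMARY KEY,
    account_id integer     NOT NULL,
    created_at timestamptz NOT NULL,
    doc        jsonb       NOT NULL
);

-- A lean compound btree over the scalar keys only; keeping the jsonb
-- out of the index keeps the btree small.
CREATE INDEX events_account_created_idx
    ON events (account_id, created_at);

-- A GIN index over the jsonb itself, so sparse containment queries can
-- locate candidate rows directly.  jsonb_path_ops is smaller and faster
-- than the default jsonb_ops, but supports only the @> operator.
CREATE INDEX events_doc_gin_idx
    ON events USING gin (doc jsonb_path_ops);

-- Example: the GIN index can serve the containment test, while the
-- btree columns remain cheap to check on the visited rows.
SELECT *
FROM events
WHERE doc @> '{"status": "failed"}'
  AND account_id = 42;
```

The trade-off this illustrates: the btree answers queries driven by the scalar keys, the GIN index answers queries driven by jsonb content, and neither index carries the other's baggage.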
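The "explode flat documents into arrays of keys" idea can also be sketched. Because PostgreSQL does not allow subqueries in index expressions, the array extraction has to go through an IMMUTABLE helper function; the function and index names here are assumptions for illustration.

```sql
-- Illustrative helper (name assumed): collect the top-level keys of a
-- flat jsonb document into a text array.  Must be IMMUTABLE to be
-- usable in an index expression.
CREATE FUNCTION jsonb_top_keys(j jsonb) RETURNS text[]
    LANGUAGE sql IMMUTABLE AS
$$ SELECT coalesce(array_agg(k), '{}') FROM jsonb_object_keys(j) AS k $$;

-- GIN over the key array: finds rows containing unusual keys quickly.
CREATE INDEX events_doc_keys_idx
    ON events USING gin (jsonb_top_keys(doc));

-- Find candidates by the rare key via the GIN index, then follow up
-- with the exact key/value check, as described above.
SELECT *
FROM events
WHERE jsonb_top_keys(doc) @> ARRAY['rare_key']
  AND doc->>'rare_key' = 'rare_value';
```

For keys alone, the default jsonb_ops GIN opclass already supports the `?` existence operator; the array approach mainly pays off when you also want a parallel array of values, or array semantics the jsonb operators don't provide.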