Bill Moran <wmoran@xxxxxxxxxxxxxxxxx> writes:
> Tom Smith <tomsmith1989sk@xxxxxxxxx> wrote:
>> Is there a plan for 9.6 to resolve the issue of very slow
>> query/retrieval of jsonb fields when there is a large number (maybe
>> several thousand) of top-level keys? Currently, if I save a large
>> json document with thousands of top-level keys and query/retrieve
>> field values, the whole document has to be decompressed and loaded
>> into memory before searching for the specific field key/value.

> I could be off-base here, but have you tried:
> ALTER TABLE $table ALTER COLUMN $json_column SET STORAGE EXTERNAL;

There is just about zero chance we'll ever worry about this for
compressed columns. However, even the uncompressed case does currently
involve loading the whole column value, as Tom says. We did consider
the possibility of such an optimization when designing the JSONB
storage format, but I don't know of anyone actively working on it. In
any case, it's unlikely that it'd ever be super fast, since it's
certainly going to involve at least a couple of TOAST fetches.

Personally I'd be looking for a different representation. If there are
specific fields that are known to be needed a lot, maybe functional
indexes would help?

			regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
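
[A minimal sketch of the two suggestions in the thread, assuming a
hypothetical table `docs(id int, body jsonb)` and a frequently queried
key `status` (both names are invented for illustration):]

```sql
-- Bill's suggestion: store the jsonb column out-of-line but uncompressed,
-- so fetches skip decompression (the whole value is still read, though).
ALTER TABLE docs ALTER COLUMN body SET STORAGE EXTERNAL;

-- Tom's suggestion: an expression (functional) index on a hot key,
-- so lookups on that key can avoid detoasting the full document.
CREATE INDEX docs_body_status_idx ON docs ((body ->> 'status'));

-- A query of this shape can then use the index:
SELECT id FROM docs WHERE body ->> 'status' = 'active';
```

[Note that SET STORAGE only affects newly stored rows; existing rows
keep their current storage until they are rewritten.]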