Hi, thanks for everyone's responses.
The issue is not just compression, but the lack of "indexing" or "segmentation" within a single document. Right now, if I query for one key, the whole doc
has to be uncompressed and loaded first, and only then searched for the single key.
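For illustration (the table and column names here are just my assumptions), a single-key lookup like

    SELECT doc ->> 'A1' FROM docs;   -- "doc" is the jsonb column

still has to detoast and decompress every stored document in full, even though only one value per row is returned.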
Compared to the traditional way of storing each top-level key in a separate column, this is a huge overhead when a table scan is required. Some kind of "keyed/slotted" storage for the doc could
help. For illustration, all keys starting with 'A' would have their own storage unit, and so on,
so that when I search for key "A1", only that unit would be unpacked and traversed to get the "A1" value. It is as if PostgreSQL predefined 26 columns/slots for the whole doc: an internal index
within each doc for fast retrieval of individual field values.
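A rough sketch of emulating that by hand today (a purely hypothetical schema; the split-by-first-letter rule and all names are my own assumptions, not anything PostgreSQL provides):

    -- one jsonb "slot" per leading letter of the key
    CREATE TABLE docs_slotted (
        id     bigint PRIMARY KEY,
        slot_a jsonb,  -- holds only keys starting with 'A'
        slot_b jsonb   -- ... and so on through slot_z
    );

    -- fetching key "A1" then only detoasts/decompresses slot_a,
    -- not the other 25 slots:
    SELECT slot_a ->> 'A1' FROM docs_slotted WHERE id = 42;

Of course it would be much nicer if jsonb did this kind of segmentation internally.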
Someone mentioned a plan on the roadmap for this route, but I'd like to know if it is part of the 9.6 plan.
The URL below mentions a similar issue; I am not sure whether it has been completely resolved:
http://stackoverflow.com/questions/26259374/jsonb-performance-degrades-as-number-of-keys-increase
And this URL discusses the potential issue:
https://www.reddit.com/r/PostgreSQL/comments/36rdlr/improving_select_performance_with_jsonb_vs_hstore/
Thanks
On Sun, Nov 29, 2015 at 7:35 AM, Thomas Kellerer <spam_eater@xxxxxxx> wrote:
If you are concerned about the compression overhead, then why don't you use (or try) JSON instead?

Tom Smith wrote on 29.11.2015 at 03:27:
Hello:
Is there a plan for 9.6 to resolve the issue of very slow query/retrieval of jsonb fields
when there are a large number (maybe several thousand) of top-level keys?
Currently, if I save a large JSON document with thousands of top-level keys and then query/retrieve
field values, the whole document has to be decompressed and loaded into memory
before the specific field key/value can be searched for.
Thanks in Advance