
Re: json datatype and table bloat?


 



While I agree that an equality operator doesn't really make sense for json operationally, there are certain maintenance reasons why it may come in handy. Removing duplicate records comes to mind.

Other than adding basic stats to necessary columns, I would say that an equality operator is one of the most basic tenets of a relational database and should probably exist for every data type, regardless of its perceived usefulness.
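As a sketch of the maintenance case above: on 9.3, where the json type has no equality operator, a common workaround for de-duplication is to compare the text representation instead (the table name t is hypothetical, and note that text comparison is sensitive to key order and whitespace, so it is stricter than semantic JSON equality):

DELETE FROM t a
 USING t b
 WHERE a.ctid < b.ctid
   AND a.details::text = b.details::text;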

Greg Haase

On Nov 4, 2013 6:31 PM, "ajelinek@xxxxxxxxx" <ajelinek@xxxxxxxxx> wrote:
>>> Along the lines of the equality operator; I have run into issues trying to
>>> pivot a table/result set with a json type, due to what seemed to be no
>>> equality operator.
>>
>> For the curious, and also use-case considerations for development, would
>> you be able to share what it is you are doing (and how) that combines
>> full json documents with pivoting?
>>
>> Compound types holding source data for a pivot seem problematic, since
>> generally all the pivot components are single-valued and, for data, often
>> numerical.

>would also like to see this. json type has completely displaced
>crosstab in my usage. I don't typically pivot json though: I pivot the
>raw data then transform to json.  With limited exceptions I consider
>storing json in actual table rows to be an anti-pattern (but it should
>still work if you do it).
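The pivot-then-transform approach described above might look like the following sketch (table and column names are hypothetical; row_to_json converts each pivoted row to a json document after the aggregation is done on plain scalar values):

SELECT row_to_json(t)
  FROM (SELECT id
              ,MAX(CASE WHEN detail_type = 'a' THEN val END) AS a
              ,MAX(CASE WHEN detail_type = 'b' THEN val END) AS b
          FROM raw_details
         GROUP BY id) t;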

I could not figure out what I was doing last month to reproduce this. So I did a small pivot proof of concept, and it errors on the MAX function, so it is probably not the same issue. My guess is that I tried using the GREATEST function as a Hail Mary (which would not have worked), got the following message: ERROR: could not identify a comparison function for type json, and then thought/hoped it was the same thing when reading the emails.

CREATE TABLE bad_table_json(id int, detail_type text, details json);
INSERT INTO bad_table_json values(1, 'a', '{"a":1}'::json);
INSERT INTO bad_table_json values(1, 'b', '{"a":1}'::json);
INSERT INTO bad_table_json values(1, 'c', '{"a":1}'::json);

SELECT id
      ,MAX(CASE WHEN detail_type = 'a' THEN details END) AS a
      ,MAX(CASE WHEN detail_type = 'b' THEN details END) AS b
      ,MAX(CASE WHEN detail_type = 'c' THEN details END) AS c
  FROM bad_table_json
 GROUP BY id
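One workaround for the error above, assuming a textual comparison of the payloads is acceptable for picking the aggregate (it is sensitive to key order and whitespace), is to aggregate the text representation and cast the result back to json:

SELECT id
      ,MAX(CASE WHEN detail_type = 'a' THEN details::text END)::json AS a
      ,MAX(CASE WHEN detail_type = 'b' THEN details::text END)::json AS b
      ,MAX(CASE WHEN detail_type = 'c' THEN details::text END)::json AS c
  FROM bad_table_json
 GROUP BY id;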



--
View this message in context: http://postgresql.1045698.n5.nabble.com/json-datatype-and-table-bloat-tp5776182p5776947.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
