On 02/24/2016 09:41 AM, Tom Lane wrote:
However, it looks to me like row_to_json already does pretty much the right thing with nested array/record types:

regression=# select row_to_json(row(1,array[2,3],'(0,1)'::int8_tbl,array[(1,2),(3,4)]::int8_tbl[]));
                                   row_to_json
---------------------------------------------------------------------------------
 {"f1":1,"f2":[2,3],"f3":{"q1":0,"q2":1},"f4":[{"q1":1,"q2":2},{"q1":3,"q2":4}]}
(1 row)

So the complaint here is that json_populate_record fails to be an inverse of row_to_json.
Right.
I'm not sure about Andrew's estimate that it'd be a large amount of work to fix this. It would definitely require some restructuring of the code to make populate_record_worker (or some portion of it) recursive, plus probably some entirely new code for array conversion; and making json_populate_recordset behave similarly might take refactoring too.
One possible shortcut, if we were just handling arrays and not nested composites, would be to mangle the JSON array into a Postgres array literal. But since we're handling nested composites as well, that probably won't pass muster; we would need to decompose all the objects fully and reassemble them into Postgres objects. Maybe it won't take as long as I suspect. If anyone actually does it, I'll be interested to find out how long it took them :-)
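As a rough illustration of the "mangle the JSON into a Postgres literal" shortcut, here is a client-side sketch in Python (nothing from the server code; json_to_pg_literal and quote_array_elem are hypothetical names, and real composite-literal quoting has more corner cases than shown, e.g. fields inside composites that themselves contain commas):

```python
import json

def quote_array_elem(s):
    # Inside a Postgres array literal, elements containing commas, braces,
    # parens, quotes, backslashes, or spaces must be double-quoted and escaped.
    if s == "" or any(c in s for c in ',{}()"\\ '):
        return '"' + s.replace('\\', '\\\\').replace('"', '\\"') + '"'
    return s

def json_to_pg_literal(value):
    # Recursively render a parsed JSON value as a Postgres array/composite
    # literal string (a sketch only; composite-field quoting is glossed over).
    if isinstance(value, list):
        # JSON array -> array literal: {elem,elem,...}
        return "{" + ",".join(quote_array_elem(json_to_pg_literal(v))
                              for v in value) + "}"
    if isinstance(value, dict):
        # JSON object -> composite literal: (val,val,...)
        # assumes the object's key order matches the record's column order
        return "(" + ",".join(json_to_pg_literal(v)
                              for v in value.values()) + ")"
    if value is None:
        return ""  # an empty element reads as NULL inside these literals
    return str(value)

src = json.loads('[{"q1":1,"q2":2},{"q1":3,"q2":4}]')
print(json_to_pg_literal(src))  # {"(1,2)","(3,4)"}
```

The output string could then be fed to the int8_tbl[] input routine, which is exactly the kind of shortcut that breaks down once nested composites need per-field type lookups rather than purely textual mangling.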
cheers

andrew

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general