On 02/23/2016 02:54 PM, Tom Lane wrote:
Dan S <strd911@xxxxxxxxx> writes:
I have this table, data and query:
create table test
(
id int,
txt text,
txt_arr text[],
f float
);
insert into test values
  (1, 'jkl', '{abc,def,fgh}', 3.14159),
  (2, 'hij', '{abc,def,fgh}', 3.14159),
  (2, null, null, null),
  (3, 'def', null, 0);
select j, json_populate_record(null::test, j)
from (select to_json(t) as j from test t) r;
ERROR: malformed array literal: "["abc","def","fgh"]"
DETAIL: "[" must introduce explicitly-specified array dimensions.
Is this a bug, or how am I supposed to use the populate function?
AFAICS, json_populate_record has no intelligence about nested container
situations. It'll basically just push the JSON text representation of any
field of the top-level object at the input converter for the corresponding
composite-type column. That doesn't work if you're trying to convert a
JSON array to a Postgres array, and it wouldn't work for converting a
JSON sub-object to a composite column either, because of syntax discrepancies.
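As a stopgap you can unpack the problem field by hand. A sketch against
the example table above, using json_typeof and json_array_elements_text
(both 9.4+); untested beyond this shape of data:

-- unpack scalar fields with ->> plus casts; convert the json array
-- explicitly, leaving NULL when the value isn't a json array
select (j->>'id')::int   as id,
       j->>'txt'         as txt,
       case when json_typeof(j->'txt_arr') = 'array'
            then array(select json_array_elements_text(j->'txt_arr'))
       end               as txt_arr,
       (j->>'f')::float  as f
from (select to_json(t) as j from test t) r;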
Ideally this would work for arbitrarily-deeply-nested array+record
structures, but it looks like a less than trivial amount of work to make
that happen.
If I try an equivalent example with hstore it works well.
hstore hasn't got any concept of substructure in its field values, so
it's hard to see how you'd create an "equivalent" situation.
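For reference, this is roughly what happens there: hstore flattens every
field to its text representation, so the array column arrives as a
Postgres array literal and the round trip just works:

create extension if not exists hstore;
-- hstore(t) renders each field with its text output function, e.g.
-- "f"=>"3.14159", "id"=>"1", "txt"=>"jkl", "txt_arr"=>"{abc,def,fgh}"
select populate_record(null::test, hstore(t)) from test t;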
One problem with fixing this is avoiding backwards-compatibility breakage,
but I think we could do that by saying that we only change behavior when
(a) json sub-value is an array and target Postgres type is an array type,
or (b) json sub-value is an object and target Postgres type is a composite
type. In both cases, current code would fail outright, so there's no
existing use-cases to protect. For other target Postgres types, we'd
continue to do it as today, so for example conversion to a JSON column
type would continue to work as it does now.
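Concretely, these are the two currently-failing shapes that would start
working (the composite type and table below are hypothetical, purely for
illustration):

-- case (a): json array into a Postgres array column
select json_populate_record(null::test, '{"txt_arr":["abc","def"]}');
-- currently: ERROR: malformed array literal: "["abc","def"]"

-- case (b): json object into a composite-type column
create type pair as (x int, y int);
create table t2 (p pair);
select json_populate_record(null::t2, '{"p":{"x":1,"y":2}}');
-- currently fails with a malformed record literal error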
I'm not sure if anything besides json[b]_populate_record needs to change
similarly, but we ought to look at all those conversion functions with
the thought of nested containers in mind.
regards, tom lane
PS: I'm not volunteering to do the work here, but it seems like a good
change to make.
Historically, we had row_to_json before we had json_populate_record, and
a complete round trip wasn't part of the design anyway, AFAIR. Handling
nested composites and arrays would be a fairly large piece of work, and
I'm not available to do it either.
A much simpler way to get some round-trip-ability would be to have a row
to json converter that would stringify instead of decomposing nested
complex objects, much as hstore does. That would be fairly simple to do,
and the results should be able to be fed straight back into
json(b)_populate_record. I'm not volunteering to do that either, but the
work involved would probably be measured in hours rather than days or
weeks. Of course, the json produced by this would be ugly and the
stringified complex objects would be opaque to other json processors.
OTOH, many round-trip applications don't need to process the serialized
object on the way around. So this wouldn't be a cure-all but it might
meet some needs.
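In fact you can approximate that today by casting the complex columns to
text before serializing; a sketch with the example table:

-- serialize with the array column pre-stringified...
select to_json(r) as j
from (select id, txt, txt_arr::text, f from test) r;
-- j comes out like {"id":1,"txt":"jkl","txt_arr":"{abc,def,fgh}","f":3.14159}

-- ...and that feeds straight back into json_populate_record, since
-- "{abc,def,fgh}" is a valid text[] input literal
select json_populate_record(null::test, j)
from (select to_json(r) as j
      from (select id, txt, txt_arr::text, f from test) r) s;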
Having json(b)_populate_record recursively process nested complex
objects would be a large undertaking. One thing to consider is that json
arrays are quite different from Postgres arrays: they are essentially
one-dimensional heterogeneous lists, not multi-dimensional homogeneous
matrices. So while a Postgres array that's been converted to a json
array should in principle be convertible back, an arbitrary json array
could easily not be.
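For example, no Postgres array type could hold this:

-- a json array with mixed element types; no single Postgres element
-- type fits, so there is no array type it could populate
select json_typeof(value) as elem_type, value
from json_array_elements('[1, "two", {"three":3}]') as value;
--  elem_type |    value
--  number    | 1
--  string    | "two"
--  object    | {"three":3}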
cheers
andrew