Folks,
Composite types look so seductive for mapping application-level non-scalar objects to columns:
o) Directly mappable to a client-side datatype via OID, without any heavyweight O/R code.
o) That mapping persists through any use of views, joins, etc. in ad-hoc queries.
o) Using domains as composite type members seems to work nicely, with the domain checks firing (see the sketch below).
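For instance, a minimal sketch of the domain checks firing (the posint domain and the point2d / shapes names are hypothetical, made up for illustration):
---
create domain posint as int check (value > 0);
create type point2d as ( x posint, y posint );
create table shapes ( id int primary key, p point2d );
insert into shapes values (1, (3, 4));    -- fine
insert into shapes values (2, (3, -4));   -- errors out: the posint check fires on coercion
---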
I can see a few drawbacks to their use, however:
o) A bit of a pain to extend / manage. You cannot just add a column to the type: you have to create a new type, write a procedure to cross all instances in all tables over from the old type to the new type, and then, finally, drop the old type. Not insurmountable, but not nearly as easy as 'alter table add column ...; update table set newcolumn=value; alter table alter column newcolumn set not null;'. A sketch of that dance follows.
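For concreteness, a minimal sketch of that migration, assuming a hypothetical oldtype ( v1 int, v2 int ) and a table t with a not-null column val of that type:
---
create type newtype as ( v1 int, v2 int, v3 int );
alter table t add column val2 newtype;
update t set val2 = ((val).v1, (val).v2, 0)::newtype;  -- carry old fields across, default the new one
alter table t alter column val2 set not null;
alter table t drop column val;
alter table t rename column val2 to val;
drop type oldtype;
---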
Hmm -- I see that if the composite type was created via a table definition, then you _can_ actually add and drop columns, and the tables using the composite type seem to get updated, as in http://archives.postgresql.org/pgsql-hackers/2005-03/msg00621.php . That's sweet, and if this is supportable functionality rather than an odd side effect, then the 'ALTER TYPE' and 'CREATE TYPE' doc pages may well want to reference this feature. A sketch follows.
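A minimal sketch of the behavior described in that message (table names hypothetical; this mirrors what that thread reports on the version under discussion, so treat it as an observation rather than documented, guaranteed functionality):
---
create table pointbase ( v1 int, v2 int );                 -- implicitly creates composite type pointbase
create table uses_point ( id int primary key, val pointbase );
alter table pointbase add column v3 int;                   -- dependent val column reportedly picks up v3
select (val).v3 from uses_point;                           -- new field visible in the using table
---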
o) Query syntax is a bit odd (the column reference must be parenthesized, as in (val).v2), but contained fields can still be used in where clauses.
o) As currently implemented, constraints are not carried forward from the type definition to column usage. This can be corrected manually through constraints on the tables which use 'em:
alter table testtable add check ((val).v2 in (1,2,3));
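Using the testtable defined a bit further down, a violating insert then fails, e.g.:
insert into testtable (id, val) values (3, (1,9));  -- rejected by the check: (val).v2 not in (1,2,3)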
o) Convincing the system to create an index on a scalar member of a composite type is not quite intuitive. I was going to write that it was not possible, but it seems that you can do it as an expression index:
---
create type testtype as ( v1 int, v2 int );
create table testtable ( id int not null primary key, val testtype not null );
create index t on testtable(((val).v2));
insert into testtable (id, val) values (1, (1,1));
insert into testtable (id, val) values (2, (1,2));
set enable_seqscan=f;
explain analyze select * from testtable where (val).v2 = 2;

 Index Scan using t on testtable  (cost=0.00..3.35 rows=1 width=36) (actual time=0.370..0.375 rows=1 loops=1)
   Index Cond: ((val).v2 = 2)
---
Heck, even multi-column indexes work:
test=# create index t2 on testtable((((val).v1)), ((val).v2));
CREATE INDEX
test=# \d t2
     Index "public.t2"
     Column      |  Type
-----------------+---------
 pg_expression_1 | integer
 pg_expression_2 | integer
btree, for table "public.testtable"

test=# explain select * from testtable where (val).v1=1 and (val).v2=1;
                              QUERY PLAN
---------------------------------------------------------------------
 Index Scan using t2 on testtable  (cost=0.00..3.35 rows=1 width=36)
   Index Cond: (((val).v1 = 1) AND ((val).v2 = 1))
(2 rows)
----
Anyway, it seems that a good bit of work has happened over the past two releases to bring composite types up to snuff, but are folks actually using 'em in production databases? Any known show-stopping drawbacks or creepy feelings about 'em? I think I remember seeing posts that they just aren't true members of the relational model, but, beyond constraint inheritance and the plethora of parentheses they inflict, what is known to be lacking?
----
James Robinson
Socialserve.com