I have a relation such as
create table order_item ( id uuid not null primary key, order_id numeric not null, item_code text, make text, model text, reason text, size text, expiration_date timestamp );
where the combination of the columns order_id, item_code, make, model, reason, size must be unique (unless there's an expiration date).
I'm inclined to use a partial unique index:
create unique index unique_index_order_item_1 on order_item (order_id, item_code, make, model, reason, size)
where expiration_date is null;
This works as expected, and the duplicate row is rejected:
insert into order_item values (gen_random_uuid(), 88, 'CODE', 'MAKE', 'MODEL', 'REASON', 'SIZE', null); -- first row is accepted
insert into order_item values (gen_random_uuid(), 88, 'CODE', 'MAKE', 'MODEL', 'REASON', 'SIZE', null); -- rejected, as expected
However, nulls are allowed in all of the indexed columns except order_id, so when I insert a row containing a null, the index no longer behaves the way I expect:
insert into order_item values (gen_random_uuid(), 88, 'CODE', 'MAKE', null, 'REASON', 'SIZE', null); -- first row is accepted
insert into order_item values (gen_random_uuid(), 88, 'CODE', 'MAKE', null, 'REASON', 'SIZE', null); -- also accepted, but should be rejected
This leads me to think I need to create 2^5 = 32 partial unique indexes to cover the various combinations of nulls that I can store (one of them is sketched below). Until now this integrity has been enforced in the application code, but that breaks down when the application is multi-threaded and the rules are not applied at the database level.
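For illustration, one of those 32 indexes, covering the combination where only model is null, might look something like this (the index name is just a placeholder I made up):

create unique index unique_index_order_item_model_null on order_item (order_id, item_code, make, reason, size)
where expiration_date is null and model is null;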
Another solution I can think of is to just use a trigger to prevent the duplicate rows.
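A rough sketch of what I have in mind, using is not distinct from for the null-safe comparisons (the function and trigger names are placeholders, and I haven't tested this under concurrent inserts):

create function reject_duplicate_order_item() returns trigger as $$
begin
    if exists (
        select 1
          from order_item
         where order_id = new.order_id
           and item_code is not distinct from new.item_code
           and make is not distinct from new.make
           and model is not distinct from new.model
           and reason is not distinct from new.reason
           and size is not distinct from new.size
           and expiration_date is null
    ) then
        raise exception 'duplicate order_item row';
    end if;
    return new;
end;
$$ language plpgsql;

create trigger order_item_no_duplicates
    before insert on order_item
    for each row
    when (new.expiration_date is null)
    execute procedure reject_duplicate_order_item();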
Any thoughts are certainly appreciated. I can't do much about the data model itself right now, but I need to protect the integrity of the data.
Thanks!
-mark-