Howdy all! Just checking whether this is the expected behaviour. I am transferring data from multiple databases into a single one, and I want to ensure that I only have unique rows for some tables. Unfortunately, some of the rows have nulls in various columns, and I want to compare them for exact equality.

=> create table tmp (a bigint, b bigint, primary key (a, b));

To test for existence, I would naively use:

=> select count(1) from tmp where a = <value> and b = <value>;

What I should use instead is:

=> select count(1) from tmp
   where ((a = <value>) or (a is null and <value> is null))
     and ((b = <value>) or (b is null and <value> is null));

Looking in the manual, I see I can get what I want by running:

=> set transform_null_equals to on;

With that, I can go back to using my naive script and everything works. However, since the <value>s are integers, I need to cast them to bigint so that the index can be used (PostgreSQL 7.4.2 doesn't apply the cast automatically, unless this has been fixed). So I wrote my script to do the following:

=> select count(1) from tmp where a = <value>::bigint and b = <value>::bigint;

And now the nulls don't match! As a further test, I did:

=> select null = null, null = null::bigint, null::bigint = null::bigint;
 ?column? | ?column? | ?column?
----------+----------+----------
 t        | t        |
(1 row)

So, is there a way to do the casts such that this works? Other alternatives? I did a search but couldn't find an answer in the archives.

Regards!

Ed
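
P.S. In case it helps anyone trying to reproduce this, here is a self-contained test case (the table name tmp2 is just illustrative, and I've left the primary key off so that the all-null row can actually be stored):

=> set transform_null_equals to on;
=> create table tmp2 (a bigint, b bigint);
=> insert into tmp2 values (null, null);
-- Counts the row: the bare "= null" is rewritten to "is null".
=> select count(1) from tmp2 where a = null and b = null;
-- Counts nothing: the cast apparently suppresses the rewrite,
-- so the comparison really is against null::bigint and yields null.
=> select count(1) from tmp2 where a = null::bigint and b = null::bigint;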
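
For completeness, I assume the verbose form still works with the casts added, since it never relies on the transform; it's just what I was hoping to avoid:

=> select count(1) from tmp2
   where ((a = <value>::bigint) or (a is null and <value> is null))
     and ((b = <value>::bigint) or (b is null and <value> is null));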