"Jim C. Nasby" <jnasby@xxxxxxxxxxxxx> writes:
> Haven't seen this in previous discussions of OID wrap in the archives.
> The issue is that DDL statements don't make any attempt to skip past
> used ranges of OIDs.

> duplicate key violates unique constraint "pg_attrdef_oid_index"

If you can make that happen in 8.1, I'd be fascinated to look at the
test case.

2005-08-11 21:35  tgl

	* doc/src/sgml/ddl.sgml, src/backend/access/heap/heapam.c,
	src/backend/access/heap/tuptoaster.c,
	src/backend/access/transam/varsup.c,
	src/backend/catalog/catalog.c, src/backend/catalog/heap.c,
	src/backend/catalog/index.c, src/backend/catalog/pg_type.c,
	src/backend/commands/dbcommands.c,
	src/backend/commands/trigger.c,
	src/backend/commands/typecmds.c,
	src/backend/storage/large_object/inv_api.c,
	src/backend/utils/cache/relcache.c,
	src/bin/pg_dump/pg_backup_archiver.c,
	src/bin/pg_dump/pg_dump.c, src/include/access/transam.h,
	src/include/catalog/catalog.h, src/include/catalog/pg_type.h,
	src/include/utils/rel.h, src/include/utils/relcache.h: Solve
	the problem of OID collisions by probing for duplicate OIDs
	whenever we generate a new OID.  This prevents occasional
	duplicate-OID errors that can otherwise occur once the OID
	counter has wrapped around.  Duplicate relfilenode values are
	also checked for when creating new physical files.  Per my
	recent proposal.

			regards, tom lane