On Wed, 2006-04-12 at 09:49, Rodrigo Sakai wrote:
>   Thanks for all the help!! But my problem is with performance. I agree with
> all of you, the RI must be maintained by the database, for a bunch of
> reasons that everyone knows!
>   But I'm dealing with a very huge database that serves more than 200
> clients at the same time, and because of it, each manipulation (delete,
> insert, update, select) on the database has poor performance. So, if we
> deal with RI in each client station, we take this work off the database!
>   The application is an ERP developed with DELPHI + (postgresql or oracle
> or sql server)!!

These are separate issues.

One is the performance of PostgreSQL handling FK->PK relationships.
PostgreSQL, in my experience, is quite fast at this.  However, there are
ways you can set up FK->PK relationships that are non-optimal and will
result in poor performance.  FK->PK relationships are generally fastest
when they are 1-1 and based on integer types.  If there's a type mismatch,
or you use slower types, like large text fields or numerics, you may get
poor performance (there's a small schema sketch at the end of this mail).
Give us a sample of the schema where you're having problems, and let us
help you troubleshoot your performance.

High parallel load is another issue.  No matter where you put your FK->PK
relationship handling, having 200+ users connected at the same time and
manipulating your database is a heavy load.

Handling FK->PK relationships in software is often vulnerable to race
conditions, like so (T1 and T2 are different "threads"):

T1: select id from mastertable where id=99;  -- check for row
T2: delete from mastertable where id=99;     -- delete a row
T1: insert into slavetable values (....);    -- whoops!  No master

If we change T1 to a select for update, we now have the overhead that most
FK->PK relationships have anyway (also sketched at the end of this mail).

What version of PostgreSQL are you running?  Older versions had much
poorer performance than newer versions when updating FK->PK relationships.

Don't assume that application-level FK->PK relationships will be faster
AND as good as the ones at the database level.  It's quite possible that
they're faster for you because you're cutting corners, referentially
speaking, and your data will wind up incoherent over time.

Also, you may be dealing with a database that is IO bound, in which case
moving the FK checks into software is only a short stop gap: as the
machine hits the IO performance ceiling, you'll have the same problem
again, need a bigger machine, and have incoherent data.  I.e. the same
problem, plus a few more, and you'll have spent a lot of time spinning
your wheels to go a short distance.
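
A minimal, hypothetical sketch of the type-matching point (the table and
column names here are made up for the example, not taken from your ERP
schema):

  -- integer FK referencing an integer PK: the fast case
  create table mastertable (
      id integer primary key
  );

  create table slavetable (
      id serial primary key,
      master_id integer not null references mastertable (id)
  );

  -- by contrast, declaring master_id as numeric(20) or text while
  -- mastertable.id stays integer forces a conversion on every FK check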
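
And a sketch of the select for update variant of the race above, using the
same hypothetical tables (master_id is an invented column name):

  begin;
  -- locking the master row blocks a concurrent delete of id 99 until we
  -- commit, closing the race window -- at roughly the cost of a real FK check
  select id from mastertable where id = 99 for update;
  insert into slavetable (master_id) values (99);
  commit;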