> - If something goes wrong during the execution of the script I can be
> practically sure that the database wasn't 'corrupted'.
> From a database point of view it should be like the script never
> started at all.

Not 100%. I'm not sure about InnoDB, but in PostgreSQL a "serial"
column (which works the same as an auto_increment field in MySQL)
doesn't get rolled back if a transaction aborts: the sequence keeps
whatever values it handed out, so you can end up with gaps in the ids.
This is by design (first sketch at the end of this message).

> I'm quite positive about the above points, the main concern that I'm
> having is about the possible locking issues that I might have when
> multiple instances of the script are running simultaneously.
> Could it be possible that a running script instance blocks the
> database until the commit or something like that?

If you're using InnoDB (or Falcon) in MySQL, or PostgreSQL, that's the
whole point of them being transactional database engines: each script
instance is completely independent of the others, in that none of them
will see anything that happens inside the others' transactions until
those transactions commit (second sketch below).

One big problem with an extremely big transaction (depending on what
you're doing, of course) is that it may take a while to commit or roll
back the whole thing. If you're processing 10,000 records it should be
no problem at all; if you're processing a few million, it can become a
problem.

MySQL/InnoDB will also issue commits implicitly for some statements
(e.g. ALTER TABLE or DROP TABLE type commands), whereas PostgreSQL
supports these inside transactions (third sketch below). See
http://dev.mysql.com/doc/refman/5.0/en/implicit-commit.html
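To illustrate the serial/sequence point, a rough psql-style sketch
(the "items" table is made up for the example):

    CREATE TABLE items (id serial PRIMARY KEY, name text);

    BEGIN;
    INSERT INTO items (name) VALUES ('first');   -- allocates id 1
    ROLLBACK;                                    -- the row is gone...

    BEGIN;
    INSERT INTO items (name) VALUES ('second');  -- ...but this one
    COMMIT;                                      -- gets id 2

    SELECT id, name FROM items;
    -- returns (2, 'second'): the rolled-back transaction still
    -- consumed id 1, because sequence increments are never undone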
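And the isolation point, as two concurrent sessions (same made-up
table; session B runs each statement in its own transaction, i.e.
autocommit):

    -- Session A                      -- Session B
    BEGIN;
    INSERT INTO items (name)
        VALUES ('pending');
                                      SELECT count(*) FROM items;
                                      -- A's uncommitted row is not
                                      -- visible here
    COMMIT;
                                      SELECT count(*) FROM items;
                                      -- now it is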
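Finally, the implicit-commit difference (same made-up table):

    -- MySQL/InnoDB:
    BEGIN;
    INSERT INTO items (name) VALUES ('a');
    ALTER TABLE items ADD COLUMN note TEXT;  -- implicitly commits
                                             -- the INSERT above
    ROLLBACK;                                -- too late, the row stays

    -- PostgreSQL:
    BEGIN;
    INSERT INTO items (name) VALUES ('a');
    ALTER TABLE items ADD COLUMN note text;
    ROLLBACK;  -- undoes both the row and the new column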