On Fri, Jul 22, 2011 at 12:15 PM, Karl Nack <karlnack@xxxxxxxxxxxxxxx> wrote:

> I've been following a few blogs
> (http://database-programmer.blogspot.com/,
> http://thehelsinkideclaration.blogspot.com/) that make a very compelling
> argument, in my opinion, to move as much business/transactional logic as
> possible into the database, so that client applications become little
> more than moving data into and out of the database using a well-defined
> API, most commonly (but not necessarily) through the use of stored
> procedures.

There are costs and benefits associated with this. The major argument against is that if you have business processes whose requirements change frequently or quickly, a heavier-weight process might not work so well. On the other hand, if your database is used by more than one application, or if you want it to be, then this approach provides a way to consistently enforce business logic and security across all of those applications.

Another benefit is that, depending on your architecture, you might be able to route everything through a single API that calls such procedures by generating parameterized query strings and passing them through, reducing the possibility of one type of SQL injection. Note, however, that you still have to worry about other forms of SQL injection inside your stored procedures, so this isn't a magic bullet even if it helps.

So I think it boils down to how much consistency and stability you want, and where you want it.

> Although there seems to be a lot of discussion out there of
> the reasons why one might want to do this, I'm really at a loss for
> finding good, concrete examples of how to do it. Consequently, I'm
> hoping that somebody can share their experience(s), or point me to some
> examples, of doing this with PostgreSQL. I'd consider myself fairly
> well-versed in using the various features of PostgreSQL to enforce data
> integrity, but much less so for implementing transactional logic.
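(Coming back to the injection point for a moment: static SQL inside a PL/pgSQL function is planned with parameters and is safe, but dynamic SQL built with EXECUTE is not. A minimal sketch, using a hypothetical find_customers function -- the table and column names are made up:)

```sql
CREATE OR REPLACE FUNCTION find_customers(in_name text)
RETURNS SETOF customer LANGUAGE plpgsql AS $$
BEGIN
    -- UNSAFE: concatenating in_name straight into the query string means
    -- a value like  ' OR '1'='1  changes the meaning of the query:
    --
    -- RETURN QUERY EXECUTE
    --     'SELECT * FROM customer WHERE name = ''' || in_name || '''';

    -- SAFER: quote_literal() escapes the value properly before it is
    -- spliced into the dynamic statement.
    RETURN QUERY EXECUTE
        'SELECT * FROM customer WHERE name = ' || quote_literal(in_name);
END;
$$;
```

So even with all access going through stored procedures, any function that builds query strings dynamically still needs the same care you'd take in application code.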
In LedgerSMB, we take this a step further by making the procedures into discoverable interfaces, so the application logic itself is a sort of thin glue between a UI layer and the database procedure layer. One thing I would suggest is to try to keep API calls as atomic as possible: you want to enforce consistency, so you need all relevant inputs passed to the function in a single call. See below for a suggested change to your API.

> To focus on a more concrete example, let's consider adding a financial
> transaction to the database. The "traditional" way to do this, with the
> business logic in the application layer, leaves us with two steps:
> insert the transaction "header", then insert the line items:
>
> BEGIN;
>
> INSERT INTO transaction (id, date, description)
> VALUES (1, CURRENT_DATE, 'Transaction 1');
>
> INSERT INTO line_item (transaction_id, account_id, amount)
> VALUES (1, 1, 50), (1, 2, -50);
>
> END;
>
> Now if we start moving this logic to the database, we'd have something
> like:
>
> BEGIN;
> SELECT create_transaction(1, current_date, 'Transaction 1');
> SELECT create_line_item(1, 1, 50);
> SELECT create_line_item(1, 1, -50);
> END;

Now, if you are doing double-entry bookkeeping, this doesn't provide enough consistency, IMO. With the line items added one call at a time, no single function ever sees the whole transaction, so you can't check inside the function that the transaction is balanced. It would be better to do:

BEGIN;
SELECT create_transaction(1, current_date, 'Transaction 1',
                          '{{1, 1, 50},{1, 2, -50}}');
COMMIT;

Now the application can call an API that is semantically clearer, and the function has everything it needs to verify the transaction before touching the tables. One caveat: PostgreSQL doesn't provide an easy way of calling procedures of this sort other than from within SELECT/UPDATE/INSERT statements, and SELECT is the only one that fits here.

Best Wishes,
Chris Travers

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
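P.S. In case a concrete sketch helps: here is roughly what such a create_transaction could look like in PL/pgSQL, using the transaction and line_item tables from your example and taking the line items as an array whose rows are {transaction_id, account_id, amount}, matching the call above. This is illustrative only, not how LedgerSMB actually implements it:

```sql
CREATE OR REPLACE FUNCTION create_transaction(
    in_id          int,
    in_date        date,
    in_description text,
    in_line_items  numeric[]   -- rows of {transaction_id, account_id, amount}
) RETURNS int LANGUAGE plpgsql AS $$
DECLARE
    t_sum numeric := 0;
    i     int;
BEGIN
    -- First pass: check that the debits and credits balance.
    FOR i IN array_lower(in_line_items, 1) .. array_upper(in_line_items, 1)
    LOOP
        t_sum := t_sum + in_line_items[i][3];
    END LOOP;

    IF t_sum <> 0 THEN
        RAISE EXCEPTION 'Transaction % does not balance (sum = %)',
            in_id, t_sum;
    END IF;

    -- Second pass: insert the header and the line items together.
    INSERT INTO transaction (id, date, description)
    VALUES (in_id, in_date, in_description);

    FOR i IN array_lower(in_line_items, 1) .. array_upper(in_line_items, 1)
    LOOP
        INSERT INTO line_item (transaction_id, account_id, amount)
        VALUES (in_id, in_line_items[i][2], in_line_items[i][3]);
    END LOOP;

    RETURN in_id;
END;
$$;
```

Because the balance check and the inserts happen inside one function call, an unbalanced transaction raises an exception and nothing is committed, no matter what the calling application does.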