
Re: Implementing "thick"/"fat" databases

On Wed, Jul 27, 2011 at 1:41 AM, Chris Travers <chris.travers@xxxxxxxxx> wrote:
> On Tue, Jul 26, 2011 at 8:13 PM, Karl Nack <karlnack@xxxxxxxxxxxxxxx> wrote:
>>> The current svn trunk (to be 1.3) does.
>>
>> So how far do you take this? I've been playing around with plpgsql a bit
>> and am pretty encouraged by what I'm discovering; now I'm at a point
>> where I'm thinking, "how far do/can I go with this?"
>
> Here are the limitations I have discovered:
>
> 1)  Localization of exception strings is a bit of a problem.  Hence
> exceptions need to be aimed at communicating to the application rather
> than the user.
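
One way to work within that limitation is to raise exceptions with a stable SQLSTATE and keep the message text application-facing, so the client can map the code to a localized message. A minimal plpgsql sketch (the function and message are hypothetical, not LedgerSMB code):

```sql
-- Hypothetical example: raise with a stable error code rather than
-- relying on the English message text reaching the user.
CREATE OR REPLACE FUNCTION check_required(in_value text)
RETURNS void LANGUAGE plpgsql AS $$
BEGIN
    IF in_value IS NULL OR in_value = '' THEN
        -- The application traps SQLSTATE 23502 (not_null_violation)
        -- and substitutes its own translated message.
        RAISE EXCEPTION 'Required input not provided'
            USING ERRCODE = 'not_null_violation';
    END IF;
END;
$$;
```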
>
> 2)  Difficulties passing complex data structures back and forth and
> properly parsing them in the application.  Currently we do a lot with
> two dimensional arrays but will probably shift to more arrays of
> complex types as we drop support for older versions of PostgreSQL and
> DBD::Pg.
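
The shift from two-dimensional arrays to arrays of composite types might be sketched like this (type and function names are illustrative assumptions):

```sql
-- Hypothetical example of an array of a composite type replacing a
-- two-dimensional numeric array.
CREATE TYPE journal_line AS (
    account_id int,
    amount     numeric
);

CREATE OR REPLACE FUNCTION line_total(in_lines journal_line[])
RETURNS numeric LANGUAGE sql AS $$
    SELECT sum(amount) FROM unnest(in_lines);
$$;

-- The client sends the whole array as a single typed literal;
-- note that each composite element must be quoted:
-- SELECT line_total('{"(1,50)","(2,-50)"}');
```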
>
> There are queries which do a lot of things in the db in a single SQL
> statement.  The longest single SQL statement I have found thus far is
> a bit over 100 lines long (due to complex requirements and some
> shortcomings in the db schema we have inherited that we are working on
> replacing).  It's still pretty easy to read and understand at that
> length, at least when compared to a function in a more general purpose
> language.
>
> Menu data is also stored in the database (application settings and
> menu argument data are the two areas where key/value modelling is
> used).
>
> The result is that the Perl codebase is shrinking in absolute terms,
> being replaced in part by SQL.  However, a rigorous separation of
> named queries and lightweight application logic has allowed us to
> shrink the total amount of code in the project while significantly
> adding functionality.
>
>>
>> Probably the best example is input validation. Constraints and triggers
>> on the database will (or at least should) prevent bad data from being
>> added to the database, but the UI generally needs to provide more
>> informative messages than errors thrown by the database, and provide
>> error messages for every invalid field, whereas the database will
>> fail/stop on the first error. Consequently, I find that much of the data
>> logic ends up being duplicated outside of the database to enhance the
>> user experience. Might there be a way to move these validation routines
>> into the database as well, and unify all the data logic into one place?
>
> The best option is to use exceptions to communicate to the application
> what went wrong and then allow the application to handle those
> exceptions in many cases.  In other cases, the application may need to
> know which inputs are mandatory.
>
> In general what we do is side with the exception trapping and
> handling.  This means that if the query fails, we take the sql state,
> detect the type of error, and display an appropriate message.  In some
> cases ("Access denied") we are terse.  In other cases we are adding
> the full SQL error message to the message simply because the
> combination of an easy to read description of what happened "Required
> input not provided" and the sql message mentioning the field is enough
> for many users to figure out what they did wrong.  It's still not
> ideal.
>
>>
>>> > Yes, but I'd implement the constraint "all transactions must balance" as
>>> > a trigger that fires when the transaction is complete. This would
>>> > enforce data integrity regardless of whether or not the database API is
>>> > used, which I think is also important.
>>>
>>> That's problematic to do in PostgreSQL because statement-level
>>> triggers don't have access to statement args, and I don't believe they
>>> can be deferred.
>>
>> In another email I did provide a simple example of how I might implement
>> this; I don't know if you saw it. Of course, that assumes your database
>> isn't designed by .... :-)
>
> I didn't see it even looking back (I saw the initial attempt and the
> PHP sample).
>
> The ideal interface at the moment is something like
>
> SELECT create_transaction(1, current_date, 'Transaction 1', '{(1, 50),
> (2, -50)}');
>
> This would allow you to do something like:
> SELECT create_transaction(1, current_date, 'Transaction 1', '{(1, 50),
> (2, -30), (3, -20)}'); as well, since we are now talking about arrays
> of records.
>
> But this is a pain to code to/from SQL in a robust way.  Good db
> drivers sometimes handle this automatically though.
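
A rough sketch of what such a function could look like on the server side, including a balance check; the type, names, and check are illustrative assumptions, not the project's actual schema:

```sql
-- Illustrative only: type and function names are assumptions.
CREATE TYPE tx_line AS (account_id int, amount numeric);

CREATE OR REPLACE FUNCTION create_transaction(
    in_reference   int,
    in_date        date,
    in_description text,
    in_lines       tx_line[]
) RETURNS int LANGUAGE plpgsql AS $$
DECLARE
    total numeric;
BEGIN
    SELECT sum(amount) INTO total FROM unnest(in_lines);
    IF total IS DISTINCT FROM 0 THEN
        RAISE EXCEPTION 'Transaction does not balance (sum = %)', total
            USING ERRCODE = 'check_violation';
    END IF;
    -- INSERT the header row and the line rows here ...
    RETURN in_reference;
END;
$$;

-- Composite-array literals need each record quoted:
-- SELECT create_transaction(1, current_date, 'Transaction 1',
--                           '{"(1,50)","(2,-30)","(3,-20)"}');
```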

Note, we wrote libpqtypes (http://libpqtypes.esilo.com/) precisely to
deal with this problem -- first class handling of arrays and
composites in the client.  It's not much help for a Perl client, but I
think similar methodologies can be made for most languages. Sending
rich data structures directly to procedures in the database transforms
the way the application/database communications work for the better.
It's new and weird to many developers, especially those trained on ORM
usage patterns, but is also entirely effective.

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


