Sameer Mahajan wrote:

> [Sameer Mahajan] I will investigate how the unix domain sockets help in
> my case. Why isn't it the default for postgres installations? Or is it
> not believed to be generic enough / straightforward enough to configure?

It is the default. Just connect without a host parameter (the default
with most client library interfaces) to use a unix domain socket.

>> trans_begin(connection_object)
>>    -> examines transaction state of connection object
>>       and if no active transaction issues BEGIN, otherwise
>>       generates and stores a savepoint name.
>>
>> etc.
>>
>> You'd just use a simple wrapper structure around the libpq connection
>> object to track your state.
>
> [Sameer Mahajan] all this is a little bit of baggage to carry around in
> the application...

I do agree that it'd be nice to have the ability to nest transactions,
or just have the DB translate nested BEGINs into savepoint usage, to
simplify these cases. I don't think it's much baggage at all on the app
side, though; I'd expect something like this to be trivial to implement
in most apps.
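Just to be concrete about what I mean by a wrapper, here's a rough,
untested sketch against libpq. The WrappedConn struct and the
trans_begin() / trans_commit() names are made up for illustration -
they're not part of libpq or any existing library; only PQexec(),
PQtransactionStatus() and friends are real calls - and a real version
would also want a trans_rollback() and proper error reporting.

  #include <stdio.h>
  #include <libpq-fe.h>

  /* Illustrative wrapper: just the shape of the idea, not a real API. */
  typedef struct
  {
      PGconn *conn;
      int     depth;   /* number of trans_begin() calls currently open */
  } WrappedConn;

  static int
  run_command(WrappedConn *wc, const char *sql)
  {
      PGresult *res = PQexec(wc->conn, sql);
      int ok = (PQresultStatus(res) == PGRES_COMMAND_OK);

      if (!ok)
          fprintf(stderr, "%s failed: %s", sql, PQerrorMessage(wc->conn));
      PQclear(res);
      return ok;
  }

  /* "Begin": a real BEGIN at the top level, otherwise a savepoint named
   * after the current nesting depth. */
  static int
  trans_begin(WrappedConn *wc)
  {
      char sql[64];

      if (PQtransactionStatus(wc->conn) == PQTRANS_IDLE)
          snprintf(sql, sizeof(sql), "BEGIN");
      else
          snprintf(sql, sizeof(sql), "SAVEPOINT sp_%d", wc->depth);

      if (!run_command(wc, sql))
          return 0;
      wc->depth++;
      return 1;
  }

  /* "Commit": release the matching savepoint, or really COMMIT when the
   * outermost level ends. */
  static int
  trans_commit(WrappedConn *wc)
  {
      char sql[64];

      if (wc->depth <= 1)
          snprintf(sql, sizeof(sql), "COMMIT");
      else
          snprintf(sql, sizeof(sql), "RELEASE SAVEPOINT sp_%d", wc->depth - 1);

      if (!run_command(wc, sql))
          return 0;
      if (wc->depth > 0)
          wc->depth--;
      return 1;
  }

Application code then calls trans_begin() / trans_commit() wherever it
"begins" and "commits", and the wrapper decides whether that means a
real transaction or a savepoint.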
> [Sameer Mahajan] hmm.... I guess I have to stick to an earlier version
> for some other reasons and it doesn't have it...

"Some other reasons"? If you don't know why you're using an old
version, and the new one does things you need...

> [Sameer Mahajan] the most useful one that I found was DB_SET_RANGE,
> which internally can use the efficient btree indexing. Of course I can
> perform a serial scan in my application to achieve the same result, but
> it would be a little inefficient.

I don't speak Berkeley DB, and I doubt that many others here do either.
How would you explain what that does, and why you use it, to somebody
who doesn't use Berkeley DB?

The documentation:

  http://www.oracle.com/technology/documentation/berkeley-db/db/api_c/dbc_get.html

suggests that it's for a very different DB access approach where
everything goes through a cursor. As far as I can tell the closest
equivalent for Pg would be something like:

  SELECT key, value FROM table WHERE key >= 'something' ORDER BY key LIMIT 1;

(possibly used in a DECLARE ... CURSOR statement).

Pg doesn't have cursors that can be scrolled using criteria like "the
first record with field `x' having a value equal to or greater than
`y'". It might be interesting - in the case of updatable cursors that
operate directly on tables - to support "seeking" a cursor this way.

I ported an app that used a very old shared-access ISAM table structure
to a Pg backend some time ago, and I faced issues somewhat similar to
yours. I ended up having to implement the required cursor behaviour at
the application level, mapping it onto ordinary SQL operations. Pg
didn't support scrollable read/write cursors at the time, but even if
it had, they wouldn't have done the job, because it wasn't possible to
"seek" the cursor based on WHERE-style matching criteria. In all
honesty, it was pretty trivial to do.

Given that the main use of all this is the fairly unusual case of
porting apps with "legacy" database access methods and APIs, I'm not
sure it'd be worth putting much effort into it.

> [Sameer Mahajan] I guess I would buy this argument since the places I
> need it justify the amount of logic I can / need to add in the
> application layer.

My whole point was that you shouldn't *need* logic at the application
layer. Many of the cases where your code currently tries to do
something and handles an error if it fails, you should be able to do
with SQL that'll do the right thing in the first place, without any
special error handling at the application layer at all.

--
Craig Ringer