Neal Clark wrote:
> my $sth = $dbh->prepare(qq{SOME_QUERY});
> $sth->execute;
> while (my $href = $sth->fetchrow_hashref) {
>     # do stuff
> }
> [...]
>
> So with mysql, I can just say $dbh->{'mysql_use_result'} = 1, and
> then it switches so that the fetchrow_hashref calls are actually
> fetching from the database, and I only have one row in memory at a
> time, unless I copy the reference and let it wander off somewhere
> else, or something.
>
> So all I'm really asking is, how does postgres approach the use result/
> store result issue? Can I easily process result sets that are larger
> than memory? And if it handles it similarly to mysql, does it also
> cause the same table locking behaviour?

The man page of DBD::Pg says, and my experiments with tcpdump confirm:

  RowCacheSize (integer)
      Implemented by DBI, not used by this driver.

And more:

  Cursors
      Although PostgreSQL has a cursor concept, it has not been used in
      the current implementation. Cursors in PostgreSQL can only be used
      inside a transaction block. Because only one transaction block at
      a time is allowed, this would have implied the restriction not to
      use any nested "SELECT" statements. Hence the "execute" method
      fetches all data at once into data structures located in the
      front-end application. This approach must be considered when
      selecting large amounts of data!

So there is no automatic way of handling it. You will probably have to
consider it in your code and use SELECT statements with a LIMIT clause.

Yours,
Laurenz Albe
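For what it's worth, a minimal sketch of the LIMIT approach in Perl/DBI
might look like the following. The connection parameters, table name,
and column names are placeholders, not anything from the original post;
it assumes a live PostgreSQL database, so treat it as an illustration
rather than a drop-in solution:

    use strict;
    use warnings;
    use DBI;

    # Hypothetical connection string and credentials
    my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'user', 'password',
                           { RaiseError => 1 });

    my $batch  = 1000;   # rows held in memory per round trip
    my $offset = 0;

    # An ORDER BY on a unique column is needed for stable paging
    my $sth = $dbh->prepare(
        'SELECT id, payload FROM big_table ORDER BY id LIMIT ? OFFSET ?');

    while (1) {
        $sth->execute($batch, $offset);

        my $rows = 0;
        while (my $href = $sth->fetchrow_hashref) {
            # do stuff with $href
            $rows++;
        }
        last if $rows < $batch;   # short batch means we reached the end
        $offset += $batch;
    }

    $dbh->disconnect;

Only one batch of rows is in the front-end's memory at a time, at the
cost of one query per batch.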