I tend to rule out reason 1: I've dumped the whole array with a debugger and it really contains exactly what I get when looping through it.
As far as I can see it's rather reason 2: execute_for_fetch seems to fill the array incorrectly. That is, it's a valid array, but the last value added to it appears to overwrite the values added before it.
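For reference, this is roughly how I loop over the status array (only a sketch; the elog calls and loop shape are illustrative rather than my exact code, but the structure of the entries follows the DBI documentation: a row count for a successful execute, or a reference to [err, errstr, state] for a failed one):

for my $i (0 .. $#tuple_status) {
    my $status = $tuple_status[$i];
    if (ref $status eq 'ARRAY') {
        # failed execute: [ err, errstr, state ]
        elog(NOTICE, "row $i failed: $status->[1]");
    }
    else {
        # successful execute: row count (may be -1 if unknown)
        elog(NOTICE, "row $i ok ($status)");
    }
}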
I seem to have found a workaround, but my Perl knowledge is too limited to judge whether it's a good one.
When I replace
my $fetch_tuple_sub = sub { $sel->fetchrow_arrayref };
by
my $fetch_tuple_sub = sub {
    my $ary_ref = $sel->fetchrow_arrayref;
    print "my method: ".$dbh_pg->errstr."\n" if $dbh_pg->err;
    return $ary_ref;
};
then the expected exception messages get printed.
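For completeness, this is roughly how that sub is then passed to execute_for_fetch, following the DBI documentation ($ins here is just a placeholder name for my insert statement handle):

my @tuple_status;
# $fetch_tuple_sub is the modified sub shown above
$ins->execute_for_fetch($fetch_tuple_sub, \@tuple_status);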
Is this an acceptable way to do it, in your opinion?
>>> Richard Huxton <dev@xxxxxxxxxxxx> 2007-06-06 16:04 >>>
Bart Degryse wrote:
> Using DBI->err was a leftover from earlier testing. $dbh_pg->err is of course better. But it doesn't solve the problem.
>
> I'm not sure what you mean with your second remark.
> The call to my function ( SELECT dbi_insert3(); ) is one transaction I suppose.
> According to the documentation on execute_for_fetch (http://search.cpan.org/~timb/DBI-1.48/DBI.pm#execute_for_fetch) however
> an execute is done for every fetched record and @tuple_status should contain the error message associated with each failed execute.

I was wondering if there was a hidden BEGIN...COMMIT sneaking into the process somewhere - either from execute_for_fetch() or in the context of using DBI from within plperl. Reading back through, you say that the "good" rows get inserted, so that can't be the case.

The only other reasons that spring to mind are:
1. A bug in your looping through tuple-status
2. A bug in execute_for_fetch() filling the tuple-status array.

What happens if you elog the whole array (just to get the ref numbers) - that should show whether DBI is filling the array incorrectly.

--
  Richard Huxton
  Archonet Ltd