On Tue, Jan 31, 2006 at 10:23:54PM +1100, James Harper wrote:
> For the libpq interface:
> 
> I need to be able to know if a column in a result from a query is
> nullable or not. From reading the documentation it seems that I can
> obtain the following information:
> . scan all the rows in the result and see if there exists a null value
> for each column...

Be careful what you infer from such a scan: not finding any NULLs
doesn't necessarily mean a column isn't nullable; it just means the
result set didn't contain any NULLs.

> . backtrack the column to the source table (assuming a non-calculated
> field) and check the nullable status there
> 
> Neither of the above is particularly cheap to do...

If you know the table and column names, then checking which columns
have a NOT NULL constraint is a simple query against pg_attribute.

> Which leads me to my next question... If I executed a select against a
> table with a million rows, and the query returned all of the rows, what
> happens? Are all the rows read into memory on the client before
> returning the result? Or are rows only fetched from the server as
> required?

libpq fetches all rows before returning any to the client; if you want
to fetch rows in smaller chunks, use a cursor.  The developers' TODO
list has an item to address that problem:

* Allow statement results to be automatically batched to the client

http://www.postgresql.org/docs/faqs.TODO.html

-- 
Michael Fuhr
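
The pg_attribute query suggested above might be sketched as follows; the
table name "mytable" is a placeholder, and the join through pg_class is
one way (of several) to resolve the table name to its attributes. The
attnotnull column is true for columns carrying a NOT NULL constraint:

```sql
-- Sketch: list each column of a hypothetical table "mytable" together
-- with whether it has a NOT NULL constraint.
SELECT a.attname, a.attnotnull
FROM pg_attribute a
JOIN pg_class c ON a.attrelid = c.oid
WHERE c.relname = 'mytable'
  AND a.attnum > 0         -- skip system columns
  AND NOT a.attisdropped;  -- skip dropped columns
```

For result columns of a plain table query, libpq's PQftable() and
PQftablecol() can supply the attrelid and attnum to match against,
rather than parsing the query text yourself.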
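
Fetching in smaller chunks with a cursor, as suggested above, could look
like the sketch below; the cursor name and batch size are arbitrary.
Each FETCH is issued as a separate statement (e.g. via PQexec), so the
client only ever holds one batch of rows in memory at a time:

```sql
BEGIN;                      -- cursors must live inside a transaction
DECLARE bigscan CURSOR FOR SELECT * FROM mytable;
FETCH 1000 FROM bigscan;    -- repeat until fewer than 1000 rows return
CLOSE bigscan;
COMMIT;
```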