I'm seeing what seems like slow retrieval times over the network. I am retrieving a single field of about 100-120 characters per record, and getting about 3 seconds per 1000 records: it takes 30 seconds to retrieve 10,000 records. That's only about 36 KBytes/sec. This is a 100BaseT switched network (not sure if it is VLAN'd or through a router), and echo time averages 3 ms. The back end is pretty much idle; it shows 'idle in transaction'.

    05-08-2004.23:54:43 Records read: 10000
    05-08-2004.23:55:17 Records read: 20000
    05-08-2004.23:55:50 Records read: 30000
    05-08-2004.23:56:22 Records read: 40000
    05-08-2004.23:56:55 Records read: 50000
    05-08-2004.23:57:32 Records read: 60000
    05-08-2004.23:58:07 Records read: 70000
    ...

The code is an ecpg program like:

    EXEC SQL WHENEVER SQLERROR GOTO sql_error;
    EXEC SQL WHENEVER NOT FOUND DO break;

    EXEC SQL DECLARE message_cursor CURSOR FOR
        SELECT file_name FROM messages
        WHERE system_key = (SELECT system_key FROM systems
                            WHERE system_name = :systemName);

    EXEC SQL OPEN message_cursor;

    count = 0;
    while (1)
    {
        EXEC SQL FETCH message_cursor INTO :fileNameDB;
        memcpy(tempstr, fileNameDB.arr, fileNameDB.len);
        tempstr[fileNameDB.len] = '\0';
        [Action with tempstr removed for testing]
        count++;
        if ((count % 10000) == 0)
            logmsg("Records read: %d", count);
    }

How can I speed this thing up?

Wes

---------------------------(end of broadcast)---------------------------
TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org
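For what it's worth, the arithmetic points at round-trip latency rather than bandwidth: each single-row FETCH is its own client/server round trip, and at ~3 ms echo time, 1000 fetches already account for roughly the 3 seconds per 1000 records observed. ecpg can fetch many rows per round trip into array host variables. A rough sketch of that approach (the batch size of 1000, the 128-byte width, and the manual count checking are illustrative assumptions, not the poster's code; the exact NOT FOUND behavior on a partial last batch is worth verifying against the ecpg documentation):

```
    EXEC SQL BEGIN DECLARE SECTION;
    char fileNames[1000][128];          /* one batch: 1000 rows per round trip */
    EXEC SQL END DECLARE SECTION;
    int  i, got;
    long count = 0;

    /* Check row counts by hand instead of relying on NOT FOUND,
       since a partial final batch still returns usable rows. */
    EXEC SQL WHENEVER NOT FOUND CONTINUE;

    EXEC SQL OPEN message_cursor;
    while (1)
    {
        EXEC SQL FETCH 1000 FROM message_cursor INTO :fileNames;
        got = sqlca.sqlerrd[2];         /* rows actually returned */
        for (i = 0; i < got; i++)
        {
            /* process fileNames[i] here */
        }
        count += got;
        if (got < 1000)                 /* short batch => cursor exhausted */
            break;
    }
    EXEC SQL CLOSE message_cursor;
```

This trades one round trip per row for one per 1000 rows, which should matter far more here than the ~110 bytes of payload per record.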