<wespvp@syntegra.com> writes:
> On 5/9/04 9:32 AM, "Tom Lane" <tgl@sss.pgh.pa.us> wrote:
>> Are you sure it is a network problem?

> Yes, it is definitely due to the network latency even though that latency is
> very small.  Here it is running locally:
> [ about 20000 records/sec ]

Okay, I just wanted to verify that we weren't overlooking any other
sort of bottleneck.  But the numbers you quote make sense as a network
issue: 33 seconds for 10000 records is 3.3 msec per record, and since
you say the measured ping time is 3 msec, it appears that FETCH has just
about the same response time as a ping ;-).  So you can't really
complain about it.  The only way to do better will be to batch multiple
fetches into one network round trip.

> A Pro*C program I recently ported from Oracle to PostgreSQL showed this
> difference.  In Pro*C you can load an array with rows to insert, then issue
> a single INSERT request passing it the array.  As far as I can tell, in
> PostgreSQL ecpg (or other) you have to execute one request per record.

The usual way to batch multiple insertions is with COPY IN.  The usual
way to batch a fetch is just to SELECT the whole thing; or if that is
too much data to snarf at once, use a cursor with "FETCH n" requests.
I am not sure how either of these techniques maps into ecpg, though.
If you want to use ecpg then I'd suggest bringing up the question on
pgsql-interfaces --- the ecpg gurus are more likely to be paying
attention over there.

> ... It appears that COPY works like this, but you can't
> control what is returned and you have to know the column order.

True, COPY OUT is only designed to return all the rows of a table.
However, in recent versions you can specify which columns you want in a
COPY.  It's still no substitute for SELECT...

			regards, tom lane
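
[Editor's note: as a concrete illustration of the batched-fetch approach Tom
describes, here is a minimal SQL sketch using a cursor with "FETCH n".  The
table and column names (my_table, id, payload) and the batch size are invented
for the example, not taken from the original thread.]

    -- Each FETCH pulls a batch of rows in a single network round trip,
    -- instead of paying one round trip per record.
    BEGIN;
    DECLARE rec_cur CURSOR FOR SELECT id, payload FROM my_table ORDER BY id;
    FETCH 1000 FROM rec_cur;   -- first 1000 rows
    FETCH 1000 FROM rec_cur;   -- next 1000 rows; repeat until no rows come back
    CLOSE rec_cur;
    COMMIT;

With a 3 msec round-trip time, fetching 1000 rows per FETCH instead of one
turns roughly 3 seconds of network waiting per 1000 records into a single
round trip plus transfer time.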
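[Editor's note: and a similarly hypothetical sketch of batching insertions
with COPY IN, using the explicit column list Tom mentions is available in
newer versions; the table and columns are again made up.]

    -- COPY streams many rows in one request; the column list controls
    -- which table columns the incoming data maps to.
    -- Data lines are tab-separated by default and terminated by \.
    COPY my_table (id, payload) FROM STDIN;
    1	first row
    2	second row
    \.

The same column-list syntax applies to COPY ... TO STDOUT, which is the
COPY OUT case Tom refers to, though as he notes it is still no substitute
for an arbitrary SELECT.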