"Merlin Moncure" <merlin.moncure@xxxxxxxxxxxxx> writes: > ok, I generated a test case which was 250k inserts to simple two column > table all in single transaction. Every 50k inserts, time is recorded > via timeofday(). You mean something like the attached? > Running from remote, Time progression is: > First 50k: 20 sec > Second : 29 sec > [...] > final: : 66 sec On Unix I get a dead flat line (within measurement noise), both local loopback and across my LAN. after 50000 30.20 sec after 100000 31.67 sec after 150000 30.98 sec after 200000 29.64 sec after 250000 29.83 sec "top" shows nearly constant CPU usage over the run, too. With a local connection it's pretty well pegged, with LAN connection the server's about 20% idle and the client about 90% (client machine is much faster than server which may affect this, but I'm too lazy to try it in the other direction). I think it's highly likely that you are looking at some strange behavior of the Windows TCP stack. regards, tom lane
Attachment: timeit.c
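[The attached timeit.c is not reproduced in the archive.  What follows is a
minimal sketch, not Tom's actual code, of what such a timing harness might
look like using libpq: 250k single-row inserts in one transaction, printing
elapsed time every 50k rows.  The connection string and table name "t" are
assumptions.]

/*
 * Hypothetical timing harness (sketch only; not the original timeit.c).
 * Inserts 250,000 rows into a two-column table inside a single
 * transaction and reports cumulative elapsed time every 50,000 rows.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <libpq-fe.h>

static void
die(PGconn *conn, const char *msg)
{
	fprintf(stderr, "%s: %s", msg, PQerrorMessage(conn));
	PQfinish(conn);
	exit(1);
}

int
main(void)
{
	PGconn	   *conn = PQconnectdb("dbname=test");	/* assumed conninfo */
	PGresult   *res;
	struct timeval start, now;
	char		sql[256];
	int			i;

	if (PQstatus(conn) != CONNECTION_OK)
		die(conn, "connection failed");

	res = PQexec(conn, "BEGIN");
	if (PQresultStatus(res) != PGRES_COMMAND_OK)
		die(conn, "BEGIN failed");
	PQclear(res);

	gettimeofday(&start, NULL);

	for (i = 1; i <= 250000; i++)
	{
		snprintf(sql, sizeof(sql),
				 "INSERT INTO t (a, b) VALUES (%d, %d)", i, i);
		res = PQexec(conn, sql);
		if (PQresultStatus(res) != PGRES_COMMAND_OK)
			die(conn, "INSERT failed");
		PQclear(res);

		if (i % 50000 == 0)
		{
			gettimeofday(&now, NULL);
			printf("after %d\t%.2f sec\n", i,
				   (now.tv_sec - start.tv_sec) +
				   (now.tv_usec - start.tv_usec) / 1000000.0);
		}
	}

	res = PQexec(conn, "COMMIT");
	PQclear(res);
	PQfinish(conn);
	return 0;
}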