I was testing it on an SSD :D
Regards,
Strahinja
On Fri, Nov 22, 2013 at 10:31 PM, Robert Burgholzer <rburghol@xxxxxx> wrote:
By the way, my machine clearly sucks compared to yours! I was pretty stoked to get 8,000 in 7 seconds. :)

On Fri, Nov 22, 2013 at 3:28 PM, Strahinja Kustudić <strahinjak@xxxxxxxxxxx> wrote:
So you got better insert performance by turning on synchronous_commit? How is that possible? Shouldn't synchronous_commit=off increase performance? Is this only the case with 8.3?

I tried inserting 10k rows into a table with more than 50 columns, with and without synchronous_commit, and the results were (Postgres 9.1):

off: 1.989s
on: 2.928s

So off is roughly 1.5 times faster.

Regards,
Strahinja
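For what it's worth, this kind of comparison can be reproduced per session from psql; a minimal sketch (the table and column names here are placeholders, not the 50-column table above):

    \timing on
    SET synchronous_commit = off;  -- commit returns before the WAL is flushed to disk
    INSERT INTO wide_table (col1) SELECT g FROM generate_series(1, 10000) g;
    SET synchronous_commit = on;   -- commit waits for the WAL flush (the default)
    INSERT INTO wide_table (col1) SELECT g FROM generate_series(1, 10000) g;

Since SET only changes the setting for the current session, this doesn't require touching postgresql.conf or restarting the server.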
On Thu, Nov 21, 2013 at 5:03 PM, Robert Burgholzer <rburghol@xxxxxx> wrote:

Thanks for the response, Simon. This is a perfect application of that function: I have a distributed environmental modeling system that generates gigs and gigs of time series data, most of which is "write-once, read-seldom" and thus not worth the overhead of perpetual storage in the database or in a remote modeling node (also not worth the network or storage traffic for syncing nodes). Similarly, since the tables all come from text files, there is virtually no penalty to accepting the risk of a pg failure during table loading.

Thanks again,
/r/b
----
Robert W. Burgholzer
'Making the simple complicated is commonplace; making the complicated simple, awesomely simple, that's creativity.' - Charles Mingus
Athletics: http://athleticalgorithm.wordpress.com/
Science: http://robertwb.wordpress.com/
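For the bulk-loading scenario Robert describes above, a minimal sketch (assuming the approach under discussion is turning synchronous_commit off for the loading session; the table name and file path are hypothetical):

    SET synchronous_commit = off;  -- only this session skips the commit-time WAL flush
    COPY model_timeseries FROM '/data/model_output/node42.txt';  -- server-side load of one text file
    -- A crash can lose the most recently committed rows, but the source
    -- text files are still on disk, so the load can simply be rerun.

From psql on a client machine, \copy does the same load client-side, which avoids needing file-read privileges on the server.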