On Sun, Jan 25, 2009 at 1:14 AM, Thomas Finneid <tfinneid@xxxxxxx> wrote:
> Scott Marlowe wrote:
>>
>> I wrote a simple test case for this and on a table with 100,000
>> entries already in it, then inserting 10,000 in a transaction and
>> 10,000 outside of a transaction, I get insert rates of 0.1 ms and
>> 0.5 ms respectively. With a table with 1,000,000 rows already in
>> place, the insert times with all the fields in an index were 1.5 ms
>> and 4.3 ms respectively.
>>
>> With only i1, i3, i4, val1 in the index, the numbers for a table
>> starting with 100,000 entries were 0.1 ms and 0.5 ms, just like the
>> above with the larger index. With a 1,000,000-row initial table,
>> inserts take 2.1 and 3.0 ms respectively.
>
> How do you run that setup? Those times are amazing; my inserts take
> about 220 ms, constant from the first row in the table to the 1.2
> billionth row. The client I was using for the inserts is a bare-bones
> use-case simulation tool I have written in Java, to test different
> insert and query strategies for this application. It uses JDBC COPY
> to do the inserts.

The setup was a simple PHP script. I've attached it to this email.

> There is one thing you should know, though: the real table I am using
> has 20 value fields, where the first 6 fields contain a value, but
> that does not affect the difference in the execution time of the two
> indexes.

Yes, but it will increase the insert time to the table, depending very
much on the size of those other fields.

>> So I don't think you've found the cause of your problem with the
>> smaller index.
>
> I don't quite understand what you are saying here, but I assume you
> mean that the smaller index is not the cause of the increased insert
> time?

Yes, that's what I was saying.

> If so, I did the test with both indexes on exactly the same db and
> setup. And when the index uses all four ids the insert time is larger
> than if I only use id1,3,4.
I thought it was the other way around for you, that the smaller index
was slower.

> What concerns me about your test is that you don't seem to get
> constant insert times, so there is a difference between the two
> tests, which might be why you don't see the problem I am seeing with
> my index.

Yes, you need to look at things like increasing the number of WAL
segments and checkpointing. If a checkpoint is kicking in, it's going
to slow everything down.

What version of pgsql are you running? My tests were on 8.3.x on a
Core 2 Duo laptop with a stock slow 80 GB hard drive, but most likely
it's lying about fsync, so that could explain some of the numbers.

I just ran it on my real server; since it's late at night, there's not
much else going on. With 1M rows created ahead of time I got similar
numbers:

0.12 ms per insert with all 10,000 inserted in a transaction
0.24 ms per insert with each insert being an individual transaction
(i.e. no begin; commit; wrapped around them all)

This is on a machine with a 12-disk RAID-10 array under an Areca
1680ix controller with 512M of battery-backed cache. Note that the
table had no other columns in it like yours does.
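For 8.3-era Postgres, the checkpoint/WAL knobs being referred to live in postgresql.conf. The values below are only illustrative starting points for a bulk-load workload, not recommendations from this thread:

```
# postgresql.conf (8.3.x) -- spread checkpoints out so they don't
# stall a bulk load; values are illustrative, tune for your workload.
checkpoint_segments = 32            # default 3; more WAL between checkpoints
checkpoint_completion_target = 0.9  # spread checkpoint I/O over the interval
checkpoint_timeout = 15min          # default 5min
```

Fewer, more spread-out checkpoints mean less chance of one landing in the middle of a timed insert run and skewing the numbers.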
Attachment:
test
Description: Binary data
--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance