On Thu, Mar 13, 2008 at 4:53 PM, justin <justin@xxxxxxxxxxxxxxx> wrote:
> I ran pgbench from my laptop to the new server.
>
> My laptop is dual core with 2 gigs of RAM and a 1-gig ethernet connection
> to the server, so I don't think the network is going to be a problem in
> the test.
>
> When I look at the server, it's only consuming 463 megs of memory. I have
> effective_cache_size set to 12 gigs, shared_buffers at 100 megs, and
> work_mem set to 50 megs.

You do know that effective_cache_size is the size of the OS-level cache,
i.e. it won't show up in PostgreSQL's memory usage. On a machine with (I
assume) 12 or more gigs of memory, you should have shared_buffers set to a
much higher number than 100 meg (unless you're still running 7.4, but
that's another story). pgbench will never use 50 megs of work_mem, as it's
transactional and hits single rows at a time rather than sorting huge
lists of rows.

Having PostgreSQL use up all the memory is NOT necessarily your best bet.
Letting the OS cache your data is quite likely a good choice here, so I'd
keep your shared_buffers in the 500M to 2G range (rough example values at
the end of this message).

> transaction type: TPC-B (sort of)
> scaling factor: 100
> number of clients: 1
> number of transactions per client: 10
> number of transactions actually processed: 10/10
> tps = 20.618557 (including connections establishing)
> tps = 20.618557 (excluding connections establishing)
>
> transaction type: TPC-B (sort of)
> scaling factor: 100
> number of clients: 10
> number of transactions per client: 10
> number of transactions actually processed: 100/100
> tps = 18.231541 (including connections establishing)
> tps = 18.231541 (excluding connections establishing)
>
> transaction type: TPC-B (sort of)
> scaling factor: 100
> number of clients: 10
> number of transactions per client: 100
> number of transactions actually processed: 1000/1000
> tps = 19.116073 (including connections establishing)
> tps = 19.116073 (excluding connections establishing)
>
> transaction type: TPC-B (sort of)
> scaling factor: 100
> number of clients: 40
> number of transactions per client: 1000
> number of transactions actually processed: 40000/40000
> tps = 20.368217 (including connections establishing)
> tps = 20.368217 (excluding connections establishing)

Those numbers are abysmal. Five years ago I had a P-III-750 that ran well
into the hundreds of tps on a pgbench db with a large scaling factor (1000
or so), with anywhere from 10 up to 100 or more concurrent clients; it
never dropped below 200 or so during testing. That was with a Perc3-series
LSI controller running LSI firmware and the megaraid 2.0.x driver, which I
believe is the basis for the current LSI drivers.

A few points. 10 or 100 total transactions is far too few to get a good
number; 1000 is about the minimum for a usable average, and 10000 or so is
what I shoot for (example invocation at the end of this message). So your
later tests are likely to be less noisy. Even so, they're all way too slow
for a modern server and point to non-optimal hardware: an untuned pgsql
database should be able to reach 100 tps or more, and I had a sparc-20
that could do 80 or so.

Do you know if you're I/O bound or CPU bound?
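If you're not sure, watching the box during a pgbench run will usually
tell you. A quick check, assuming a Linux box with the sysstat tools
installed (my usual tools; substitute your platform's equivalents):

    vmstat 1       # high "wa" (iowait) => I/O bound; high "us"/"sy" => CPU bound
    iostat -x 1    # %util near 100 with high await on the data disks => saturated I/O

At ~20 tps on a TPC-B-style test, my first guess would be WAL fsync
latency, e.g. a RAID controller with no (or a disabled) battery-backed
write cache.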
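For the memory settings above, an illustrative starting point on a 12 gig
machine might look like this (example values only, tune to taste; the
GB/MB unit syntax assumes 8.2 or later):

    # postgresql.conf -- example values for a ~12 GB machine
    shared_buffers = 1GB            # within the 500M-2G range suggested above
    work_mem = 8MB                  # per-sort memory; 50MB buys pgbench nothing
    effective_cache_size = 10GB     # planner hint only, never actually allocated

Note that changing shared_buffers requires a server restart, while
work_mem can be changed per session.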
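And for a less noisy benchmark, something along these lines (standard
pgbench flags; "bench" is just a placeholder database name):

    pgbench -i -s 100 bench          # initialize once at scaling factor 100
    pgbench -c 10 -t 10000 bench     # 10 clients x 10000 xacts each = 100000 total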