On Fri, Apr 3, 2009 at 1:53 PM, David Kerr <dmk@xxxxxxxxxxxxxx> wrote:
> Here is my transaction file:
> \setrandom iid 1 50000
> BEGIN;
> SELECT content FROM test WHERE item_id = :iid;
> END;
>
> and then i executed:
> pgbench -c 400 -t 50 -f trans.sql -l
>
> The results actually have surprised me, the database isn't really tuned
> and i'm not working on great hardware. But still I'm getting:
>
> scaling factor: 1
> number of clients: 400
> number of transactions per client: 50
> number of transactions actually processed: 20000/20000
> tps = 51.086001 (including connections establishing)
> tps = 51.395364 (excluding connections establishing)

Not bad. With an average record size of 1.2 MB, 51 tps works out to
roughly 60 MB per second (plus overhead) coming off of your drive(s).

> So the question is - Can anyone see a flaw in my test so far?
> (considering that i'm just focused on the performance of pulling
> the 1.2M record from the table) and if so any suggestions to further
> nail it down?

You can either get more memory (enough to hold your whole dataset in
RAM), get faster drives and aggregate them with RAID-10, or look into
something like memcached servers, which can cache db query results for
your app layer.
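
To make the memcached option concrete, here is a minimal sketch (not
from this thread) of caching the large content rows at the app layer so
repeat reads never touch the drives. It assumes Python with psycopg2 and
pymemcache installed, Postgres reachable as dbname=test, and memcached
on localhost:11211; the table/column names (test, item_id, content)
come from David's transaction file, while get_content and the key
format are made up for illustration.

    import psycopg2
    from pymemcache.client.base import Client

    memcache = Client(("localhost", 11211))
    db = psycopg2.connect("dbname=test")
    db.autocommit = True  # plain reads, no explicit transactions needed

    def get_content(item_id, ttl=300):
        """Return content for item_id, preferring the cache over the database."""
        key = "test:content:%d" % item_id

        cached = memcache.get(key)
        if cached is not None:
            # served from RAM; note memcached returns bytes, so decode
            # here if content is a text column
            return cached

        with db.cursor() as cur:
            cur.execute("SELECT content FROM test WHERE item_id = %s", (item_id,))
            row = cur.fetchone()
        if row is None:
            return None

        content = row[0]
        # memcached caps items at 1 MB by default, so a 1.2 MB record needs
        # memcached started with a larger item size (e.g. -I 2m) or
        # client-side chunking of the value.
        memcache.set(key, content, expire=ttl)
        return content

Whether this wins depends on how skewed the item_id distribution is;
with a uniform spread over 50000 rows of 1.2 MB each you'd need enough
memcached (or Postgres shared_buffers/OS cache) memory to hold most of
the ~60 GB working set before the hit rate pays off.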