Greg Smith wrote:
> I'm not sure what is going on with your system, but the advice showing
> up earlier in this thread is well worth heeding here: if you haven't
> thoroughly proven that your disk setup works as expected on simple I/O
> tests such as dd and bonnie++, you shouldn't be running pgbench yet.
> It's not a transparent benchmark unless you really understand what it's
> doing, and you can waste endless time chasing phantom database setup
> problems that way when you should be staring at hardware, driver, or OS
> level ones instead. Do you know the disks are working as they should
> here? Does the select-only pgbench give you reasonable results?

Actually, this isn't so much a 'pgbench' exercise as a source of 'real-world application' data for my Linux I/O performance visualization tools. I've done 'iozone' tests, though not recently. But what I'm building is an I/O analysis toolset, not a database application, so I am "staring at hardware, driver or OS level" issues. :) To be more precise, I'm using block I/O layer tools, which sit "beneath" the filesystem layer but "above" the driver and hardware levels.

What you might find interesting is that, when I presented the earlier (iozone) test results at the Computer Measurement Group meeting in Las Vegas in December, there were two disk drive engineers in the audience, from, IIRC, Fujitsu. When they saw my results showing all four Linux schedulers yielding essentially the same performance metrics under some fairly tight statistical significance tests, they told me it was because the drive was re-ordering operations according to its own internal scheduler! I haven't had a chance to investigate that in any detail yet, but I assume they knew what they were talking about. The drive in question is an off-the-shelf unit that I got at CompUSA as part of a system I had them build.
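For anyone wanting to repeat the scheduler comparison: the active elevator can be inspected and switched at runtime through sysfs. A minimal sketch, assuming the drive shows up as sda (substitute your own device name):

```shell
#!/bin/sh
# Show the active I/O scheduler for a block device; the one in
# [brackets] is currently selected, e.g. "noop anticipatory deadline [cfq]".
DEV=sda   # assumption: adjust for your system

if [ -r "/sys/block/$DEV/queue/scheduler" ]; then
    cat "/sys/block/$DEV/queue/scheduler"
    # Switching (needs root), e.g. to deadline:
    #   echo deadline > /sys/block/$DEV/queue/scheduler
else
    echo "no such device or sysfs entry: $DEV"
fi
```

Note that per the Fujitsu engineers' comment above, a drive with its own internal command reordering can mask whatever elevator the kernel uses, so identical results across all four schedulers are plausible.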
In any event, it's *not* a "server-grade I/O subsystem"; it's a single disk drive designed for "home desktop PC" use cases. In short, I don't expect server-grade TPS values.

I did capture some 'iostat' data after I moved the 'pgbench' database back into the main partition where the rest of the PostgreSQL database lives. As I expected, the device and partition utilizations were in the high 90 percent range. I don't have the bandwidth figures from 'iostat' handy, but if the utilization is 98.4 percent, they may be the best I can get out of the drive with the xfs filesystem and the cfq scheduler. And the choice of scheduler might not matter. And the choice of filesystem might not matter. I may be getting all the drive can do.

--
M. Edward (Ed) Borasky

I've never met a happy clam. In fact, most of them were pretty steamed.

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
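P.S. For anyone reproducing the utilization measurement: the figures above come from iostat's extended device statistics. A minimal sketch, assuming the sysstat package is installed; %util near 100 means the device was busy servicing requests nearly the whole interval, i.e. the drive itself, not the scheduler or filesystem, is the likely bottleneck:

```shell
#!/bin/sh
# Sample extended device statistics while a pgbench run is active.
# "1 2" = two samples at 1-second intervals; the %util column is the
# fraction of wall time the device had at least one request outstanding.
if command -v iostat >/dev/null 2>&1; then
    iostat -x 1 2
else
    echo "iostat not found (it ships in the sysstat package)"
fi
```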