Hello all,

Just wanted to share some results from some very basic benchmarking runs comparing three disk configurations on the same hardware:

http://morefoo.com/bench.html

Before I launch into any questions about the results (I don't see anything particularly shocking here), I'll describe the hardware and configurations in use.

Hardware:
*Tyan B7016 mainboard w/onboard LSI SAS controller
*2x4 core Xeon E5506 (2.13GHz)
*64GB ECC RAM (8GBx8, 1033MHz)
*2x250GB Seagate SATA 7200.9 (ST3250824AS) drives (yes, old and slow)
*2x160GB Intel 320 SSD drives

Software:
*FreeBSD 8.2-STABLE snapshot from 6/2011 (includes ZFS v28; this is our production snapshot)
*PostgreSQL 9.0.6 (also what we run in production)
*pgbench-tools 0.5 (to automate the test runs and make nice graphs)

I was mainly looking to compare three variations of drive combinations and verify that we don't see any glaring performance issues with Postgres running on ZFS. We mostly run 1U boxes and we're looking for ways to get better performance without having to invest in some monster box that can hold a few dozen cheap SATA drives, which in practice means either all SSDs, or SATA drives with SSDs hosting the "ZIL" (ZFS Intent Log). The ZIL is a bit of a cheat, as it lets you throw all the synchronous writes at the SSD - I was particularly curious about how this would benchmark even though we will likely not use a separate ZIL device in production (at least not on this db box).

Background thread: http://archives.postgresql.org/pgsql-performance/2011-10/msg00137.php

So the three sets of results I've linked are all pgbench-tools runs of the "tpc-b" benchmark: one using the two SATA drives in a ZFS mirror, one with the same two drives in a ZFS mirror and the two Intel 320s as the ZIL for that pool, and one with just the two Intel 320s in a ZFS mirror (rough sketches of the pool layouts are below).

Note that I also included graphs of some basic system metrics in the pgbench results. Those come from a few simple scripts that collect vmstat, iostat and "zpool iostat" output during the runs at 1 sample/second. They are a bit ugly, but they give a good enough visual representation of how swamped the drives are during the course of the tests.

Why ZFS? Well, we adopted it pretty early for other tasks and it makes a number of things easy. It's been stable for us for the most part, and our latest wave of boxes all use cheap SATA disks, which gives us two things: a ton of cheap space (in 1U) for snapshots and all the other space-consuming toys ZFS gives us, and, on this cheaper disk type, a guarantee that we're not dealing with silent data corruption (these are probably the usual fanboy talking points).

ZFS snapshots are also a big time-saver when benchmarking. For our own application testing I load the data once, shut down Postgres, snapshot pgsql plus the app homedir, and start Postgres again. After each run that changes on-disk data, I simply roll back the snapshot.

I don't have any real questions for the list, but I'd love to get some feedback, especially on the ZIL results. Those interest me because I have not settled on what sort of box we'll be using as a replication slave for this one - I was either going to go the somewhat risky route of another all-SSD box, or look at just how cheap I can go with lots of 2.5" SAS drives in a 2U.

I'm hoping that a general "call for discussion" is an acceptable request for this list, which seems to cater more often to very specific tuning questions. If not, let me know.

If you have any test requests that can be quickly run on the above hardware, let me know.
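In case it helps anyone picture the three setups, the pool layouts were along these lines (the pool name and device names below are just placeholders, not what's actually on this box, and the three layouts were built one at a time, not combined):

  # 1) SATA-only mirror
  zpool create tank mirror da0 da1

  # 2) SATA mirror with the two SSDs as a mirrored log (ZIL) device
  zpool create tank mirror da0 da1
  zpool add tank log mirror da2 da3

  # 3) SSD-only mirror
  zpool create tank mirror da2 da3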
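The metric-collection scripts are nothing fancy - roughly this sort of thing, started just before each pgbench run and killed when it finishes ("tank" and the log file names are placeholders):

  #!/bin/sh
  # crude 1-sample/second collectors for the duration of a run
  vmstat 1 > vmstat.log &
  VPID=$!
  iostat -x 1 > iostat.log &
  IPID=$!
  zpool iostat tank 1 > zpool-iostat.log &
  ZPID=$!

  # ... pgbench-tools run goes here ...

  kill $VPID $IPID $ZPID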
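And the snapshot reset between runs boils down to something like this (dataset names and the rc script path are placeholders for whatever your layout uses):

  # one-time: load the test data, then take a baseline with postgres stopped
  /usr/local/etc/rc.d/postgresql stop
  zfs snapshot tank/pgsql@loaded
  zfs snapshot tank/home/app@loaded
  /usr/local/etc/rc.d/postgresql start

  # after any run that changes on-disk data, roll back and go again
  /usr/local/etc/rc.d/postgresql stop
  zfs rollback tank/pgsql@loaded
  zfs rollback tank/home/app@loaded
  /usr/local/etc/rc.d/postgresql start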
I'll have the box easily accessible for the next few days at least (and I wouldn't mind pushing more writes through two of my four SSDs before deploying the whole mess, in case it is true that SSDs fail at the same write cycle count). I'll also be doing more tests for my own curiosity, such as making sure UFS2 doesn't wildly outperform ZFS on the SSD-only setup, testing with the expected final config of 4 Intel 320s, running lots of application-specific tests, and finally digging a bit more thoroughly into Greg's book to make sure I squeeze all I can out of this thing.

Thanks,

Charles