Thanks.
I notice that the link you provided says:
"Per best practices, my postgres data directory, xlogs and WAL archives
are on different filesystems (ZFS of course). "
Why is this a best practice? Is there a reference for that?
Greg Smith wrote:
> On Mon, 8 Sep 2008, William Garrison wrote:
>> 2) We could install PostgreSQL onto the C: drive and then configure
>> the data folder to be on the SAN volume (Z:)
> Do that. You really don't want to get into the situation where you
> can't run anything related to the PostgreSQL service just because the
> SAN isn't available. You may have internal SAN fans who will swear
> that never happens, but it does. It also lets you install a later
> PostgreSQL version on another system and test it against the SAN data
> files in a way that said system could become the new server. There
> are all kinds of systems-management reasons to separate the database
> application from the database files.
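For reference, the split Greg describes is just a matter of where initdb points; a rough sketch, assuming the binaries live on C: and Z:\pgdata is the SAN volume (the paths and service name here are examples, not from the original thread):

```
REM Initialize the cluster on the SAN volume; binaries stay on C:
initdb -D Z:\pgdata -U postgres -E UTF8

REM Register the Windows service so it points at the SAN data directory
pg_ctl register -N PostgreSQL -D Z:\pgdata
```

If Z: ever goes away, the service simply fails to start, but the installation itself stays intact for troubleshooting or for pointing at a restored copy of the data.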
>> So I am assured it is fast.
> Compared to what? The same amount spent on direct storage would be
> wildly faster.
> The thing to remember about SANs is that they are complicated, and
> there are many ways you can misconfigure them so that their database
> performance sucks. Make sure you actually benchmark the SAN and
> compare it to directly connected disks to see if it's acting sanely;
> don't just believe what people tell you.
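A minimal version of that sanity check, assuming /san_volume and /local_disk are placeholder mount points for the two storage paths:

```shell
# Crude sequential-write comparison: write ~800MB with a sync at the end,
# once per volume, and compare the MB/s figures dd reports on stderr.
dd if=/dev/zero of=/san_volume/ddtest bs=8k count=100000 conv=fdatasync
dd if=/dev/zero of=/local_disk/ddtest bs=8k count=100000 conv=fdatasync
rm -f /san_volume/ddtest /local_disk/ddtest
```

The 8k block size matches PostgreSQL's page size, but sequential throughput alone can make a mistuned SAN look healthy; a real evaluation should also measure random I/O and fsync (commit) rates.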
> I personally can't understand why anybody would spend SAN $ and then
> hobble the whole thing by running PostgreSQL on Windows. The Win32
> port is functional, but it's really not fast.
>> It is really nice because it supports instant snapshots so we can, in
>> theory, snapshot a volume and re-mount it elsewhere.
> You'll still need to set up basic PITR recovery to know you got a
> useful snapshot. See
> http://lethargy.org/~jesus/archives/92-PostgreSQL-warm-standby-on-ZFS-crack.html
> for a nice intro that uses ZFS as the snapshot implementation.
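The PITR plumbing Greg mentions boils down to WAL archiving plus marking the backup window; a minimal config sketch, assuming a /wal_archive directory (the path is an example):

```
# postgresql.conf -- ship completed WAL segments off the data volume
archive_mode = on
archive_command = 'cp %p /wal_archive/%f'
```

Take the filesystem snapshot between SELECT pg_start_backup('label') and SELECT pg_stop_backup(); a snapshot without the archived WAL to replay through is not guaranteed to be a usable backup.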
> --
> * Greg Smith gsmith@xxxxxxxxxxxxx http://www.gregsmith.com Baltimore, MD