Bill Moran wrote:
> In response to Jim Nasby <decibel@xxxxxxxxxxx>:
>> I was recently running defrag on my windows/parallels VM and noticed
>> a bunch of WAL files that defrag couldn't take care of, presumably
>> because the database was running. What's disturbing to me is that
>> these files all had ~2000 fragments. Now, this was an EnterpriseDB
>> database which means the WAL files were 64MB instead of 16MB, but
>> even having 500 fragments for a 16MB WAL file seems like it would
>> definitely impact performance.
>
> I don't know about that. I've seen marketing material that claims that
> modern NTFS doesn't suffer performance problems from fragmentation. I've
> never tested it myself, but my point is that you might want to do some
> experiments -- you might find out that it doesn't make any difference.
>
> If it does, you should be able to stop the DB, defragment the files, then
> start the DB back up. Since WAL files are recycled, they shouldn't
> fragment again -- unless I'm missing something.
>
> If that works, it may indicate that (on Windows) a good method for installing
> is to create all the necessary WAL files as empty files before launching
> the DB.
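
To actually run that experiment, the extent count of a file can be read
on NTFS through FSCTL_GET_RETRIEVAL_POINTERS. The following is only a
rough, untested sketch of such a counter (plain Win32 C, with buffer
sizing and error handling kept to a minimum), but it shows the shape of
it:

/*
 * Untested sketch: count how many extents (fragments) a file occupies
 * on NTFS, using FSCTL_GET_RETRIEVAL_POINTERS.  Run it on a WAL segment
 * before and after defragmenting to see whether the count changes.
 */
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    HANDLE  h;
    union
    {
        RETRIEVAL_POINTERS_BUFFER rpb;
        char    raw[64 * 1024];         /* room for a few thousand extents */
    } buf;
    STARTING_VCN_INPUT_BUFFER in;
    DWORD   bytes;
    DWORD   total = 0;
    BOOL    more = TRUE;

    if (argc != 2)
    {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    /* FILE_READ_ATTRIBUTES should be enough to query the cluster map */
    h = CreateFileA(argv[1], FILE_READ_ATTRIBUTES,
                    FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                    OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE)
    {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    in.StartingVcn.QuadPart = 0;
    while (more)
    {
        BOOL    ok = DeviceIoControl(h, FSCTL_GET_RETRIEVAL_POINTERS,
                                     &in, sizeof(in), &buf, sizeof(buf),
                                     &bytes, NULL);

        if (!ok && GetLastError() == ERROR_HANDLE_EOF)
            break;                      /* no clusters allocated at all */
        if (!ok && GetLastError() != ERROR_MORE_DATA)
        {
            fprintf(stderr, "DeviceIoControl failed: %lu\n", GetLastError());
            CloseHandle(h);
            return 1;
        }
        total += buf.rpb.ExtentCount;
        more = !ok;                     /* ERROR_MORE_DATA: continue from last VCN */
        if (more)
            in.StartingVcn = buf.rpb.Extents[buf.rpb.ExtentCount - 1].NextVcn;
    }

    printf("%s: %lu extent(s)\n", argv[1], (unsigned long) total);
    CloseHandle(h);
    return 0;
}

Comparing that number before and after a defrag run, alongside something
like pgbench, would show whether the fragmentation actually costs
anything.
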
If that turns out to be a problem, I wonder if it would help to expand
the WAL file to full size with ftruncate or something similar, instead
of growing it page by page.
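
For what it's worth, here's a minimal sketch of the two approaches
(illustrative only, not PostgreSQL's actual WAL-creation code; the file
names and the 8 kB write size are just placeholders):

/*
 * Two ways to create a 16 MB WAL-sized file: appending zero-filled
 * blocks one at a time, versus a single ftruncate() to the full size.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define SEG_SIZE (16 * 1024 * 1024)     /* 16 MB, the stock WAL segment size */
#define BLK_SIZE 8192                   /* write granularity for the zero-fill loop */

/* Grow the file block by block by appending zero-filled buffers. */
static int create_blockwise(const char *path)
{
    char    block[BLK_SIZE];
    long    written;
    int     fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0600);

    if (fd < 0)
        return -1;
    memset(block, 0, sizeof(block));
    for (written = 0; written < SEG_SIZE; written += BLK_SIZE)
    {
        if (write(fd, block, BLK_SIZE) != BLK_SIZE)
        {
            close(fd);
            return -1;
        }
    }
    return close(fd);
}

/* Set the final size up front with a single ftruncate() call. */
static int create_with_ftruncate(const char *path)
{
    int     fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0600);

    if (fd < 0)
        return -1;
    if (ftruncate(fd, SEG_SIZE) != 0)
    {
        close(fd);
        return -1;
    }
    return close(fd);
}

int main(void)
{
    if (create_blockwise("segment_blockwise") != 0 ||
        create_with_ftruncate("segment_ftruncate") != 0)
    {
        perror("create");
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}

One caveat: on many filesystems a plain ftruncate() just creates a
sparse file and doesn't reserve contiguous space, so something like
posix_fallocate(), where available, might be closer to what's wanted
here.
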
>> Can anyone else confirm this? I don't know if this is a windows-only
>> issue, but I don't know of a way to check fragmentation in unix.
>
> I can confirm that it's only a Windows problem. No UNIX filesystem
> that I'm aware of suffers from fragmentation.

What do you mean by suffering? All filesystems fragment files at some
point; when and how that happens differs from filesystem to filesystem,
and some filesystems might be smarter than others about where they place
the fragments.

There's a tool for Linux in the e2fsprogs package called filefrag that
shows the fragmentation of a file, but I've never used it myself.
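
filefrag basically asks the kernel for the file's block map and reports
how many contiguous runs it comes back in. For the curious, here's a
rough sketch of the same idea using the old FIBMAP ioctl (Linux-only,
needs root, and it ignores holes); filefrag itself is of course the
convenient way to get the number:

/*
 * Rough sketch of what filefrag reports: walk a file's logical blocks
 * with the FIBMAP ioctl and count how many discontiguous runs of
 * physical blocks it occupies.  Requires root.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <unistd.h>
#include <linux/fs.h>           /* FIBMAP, FIGETBSZ */

int main(int argc, char **argv)
{
    struct stat st;
    int     fd, blocksize;
    long    nblocks, i, fragments = 0, prev_phys = -1;

    if (argc != 2)
    {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return EXIT_FAILURE;
    }
    fd = open(argv[1], O_RDONLY);
    if (fd < 0)
    {
        perror("open");
        return EXIT_FAILURE;
    }
    if (fstat(fd, &st) != 0 || ioctl(fd, FIGETBSZ, &blocksize) != 0)
    {
        perror("stat/FIGETBSZ");
        return EXIT_FAILURE;
    }

    nblocks = (st.st_size + blocksize - 1) / blocksize;
    for (i = 0; i < nblocks; i++)
    {
        int     block = i;      /* logical block in, physical block out */

        if (ioctl(fd, FIBMAP, &block) != 0)
        {
            perror("FIBMAP");
            return EXIT_FAILURE;
        }
        /* a new fragment starts whenever the physical block isn't
         * adjacent to the previous allocated one (block 0 = hole) */
        if (block != 0 && block != prev_phys + 1)
            fragments++;
        if (block != 0)
            prev_phys = block;
    }

    printf("%s: %ld fragment(s) in %ld block(s)\n", argv[1], fragments, nblocks);
    close(fd);
    return EXIT_SUCCESS;
}

Running that (or just filefrag) over the files in pg_xlog would give a
number to compare against what Windows reports.
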
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com