In response to Jim Nasby <decibel@xxxxxxxxxxx>:

> I was recently running defrag on my windows/parallels VM and noticed
> a bunch of WAL files that defrag couldn't take care of, presumably
> because the database was running. What's disturbing to me is that
> these files all had ~2000 fragments. Now, this was an EnterpriseDB
> database, which means the WAL files were 64MB instead of 16MB, but
> even having 500 fragments for a 16MB WAL file seems like it would
> definitely impact performance.

I don't know about that. I've seen marketing material claiming that
modern NTFS doesn't suffer performance problems from fragmentation.
I've never tested it myself, so my point is that you might want to run
some experiments -- you may find that it makes no difference.

If it does, you should be able to stop the database, defragment the
files, then start the database back up. Since WAL files are recycled,
they shouldn't fragment again -- unless I'm missing something. If that
works, it may indicate that (on Windows) a good installation practice
is to create all the necessary WAL files as empty files before
launching the database.

> Can anyone else confirm this? I don't know if this is a windows-only
> issue, but I don't know of a way to check fragmentation in unix.

I can confirm that it's primarily a Windows problem. No UNIX filesystem
that I'm aware of suffers from fragmentation to this degree.

--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
wmoran@xxxxxxxxxxxxxxxxxxxxxxx
Phone: 412-422-3463x4023
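P.S. For what it's worth, the preallocation idea could be sketched roughly
like this -- writing zeros into segment-sized files so the filesystem
allocates the space contiguously up front. The file names are illustrative
placeholders (real WAL segments are created by the server itself under
pg_xlog/), and 16MB is the default segment size, not EnterpriseDB's 64MB:

```shell
# Sketch only: pre-create WAL-sized files by writing zeros, so the
# filesystem allocates them contiguously before the server touches them.
# "wal_placeholder.*" names are hypothetical, not real WAL segment names.
SEGSIZE_BYTES=$((16 * 1024 * 1024))   # 16MB, the default WAL segment size

for i in 0 1 2 3; do
    # dd writes real zero blocks (unlike a sparse file), forcing allocation
    dd if=/dev/zero of="wal_placeholder.$i" \
       bs=65536 count=$((SEGSIZE_BYTES / 65536)) 2>/dev/null
done

ls -l wal_placeholder.*
```

Whether this actually prevents later fragmentation would depend on the
filesystem's allocator, so it's worth measuring before relying on it.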