When we did our initial proof of concept, we did not notice any performance problem of this magnitude. We were using OS release 2. Our QA engineers approved the performance stats of the gfs2 filesystem, but now that we are in the deployment phase they are calling it unusable. Have there been any recent software changes that could have caused degraded performance, or is there something I may have missed in configuration? Are there any tunable parameters in gfs2 that might increase our performance? Our application is very write-intensive: basically we are compiling a source tree and running a make clean between builds.

Thanks in advance,

Peter

On Wed, Jul 08, 2009 at 01:58:30PM -0700, Peter Schobel wrote:
>> I am trying to set up a four node cluster but am getting very poor
>> performance when removing large directories. A directory approximately
>> 1.6G in size takes around 5 mins to remove from the gfs2 filesystem
>> but removes in around 10 seconds from the local disk.
>>
>> I am using CentOS 5.3 with kernel 2.6.18-128.1.16.el5PAE.
>>
>> The filesystem was formatted in the following manner:
>>   mkfs.gfs2 -t wtl_build:dev_home00 -p lock_dlm -j 10 /dev/mapper/VolGroupGFS-LogVolDevHome00
>> and is being mounted with the following options: _netdev,noatime,defaults.
>
> This is something you have to live with. GFS(2) works great, but with
> large(r) directories performance is extremely bad and for many
> applications a real show-stopper.
>
> There have been many discussions on this list, with GFS parameter tuning
> suggestions that at least for me didn't result in any improvements, and with
> promises that the problems would be solved in GFS2 (I see no significant
> performance improvement between GFS and GFS2), etc.
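[Editorial sketch: the slowdown described above can be reproduced with a small script that builds a source-tree-like directory of many small files and times its removal. The path and file counts below are illustrative assumptions, not figures from the original posts; run it once against local disk and once against the gfs2 mount to compare.]

```shell
#!/bin/bash
# Hypothetical reproduction of the reported benchmark. Point the first
# argument at a directory on the filesystem under test (e.g. the gfs2
# mount); defaults to a local /tmp path for a baseline run.
DIR=${1:-/tmp/gfs2_rm_test}

# Populate 100 subdirectories with 10 small files each (1000 files total),
# roughly mimicking a build tree full of objects and sources.
mkdir -p "$DIR"
for d in $(seq 1 100); do
    mkdir -p "$DIR/sub$d"
    for f in $(seq 1 10); do
        echo data > "$DIR/sub$d/file$f"
    done
done

# Time the removal. On a clustered filesystem each unlink can require a
# cluster-wide lock round trip, which is why a large tree removes far more
# slowly than on a local filesystem.
time rm -rf "$DIR"
```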
> --
> Jos Vos <jos@xxxxxx>
> X/OS Experts in Open Systems BV | Phone: +31 20 6938364
> Amsterdam, The Netherlands      | Fax: +31 20 6948204

--
Peter Schobel

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster