Keith Freedman wrote:
> all of a sudden, I'm getting messages such as this:
>
> 2009-03-13 23:14:06 C [posix.c:709:pl_forget] posix-locks-home1:
> Pending fcntl locks found!
>
> and some processes are hanging waiting presumably for the locks?
> any way to find out what files are being locked and unlock them.
> restarting gluster doesn't seem to solve the problem.

Hi,

I'm facing the same problem with rc7: it causes the server to use 100% of the CPU, and clients are unable to access files, waiting for something... I have to remove the glusterfs stack from our production environment and go back to local hard drives... See the attached CPU graphs of both servers.
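For what it's worth, a rough way to see what is actually locked at the filesystem level is to look at the bricks directly on the servers. This is only a sketch: it shows kernel-level fcntl locks and open descriptors on the underlying export directories (paths taken from my config below), not the locks the posix-locks translator tracks internally, so it may come up empty even while GlusterFS still reports pending locks:

  # Kernel view of all fcntl/POSIX locks; the MAJOR:MINOR:INODE column identifies the file.
  cat /proc/locks

  # Resolve an inode number from /proc/locks to a path under the export directory.
  find /var/local/glusterfs/media_small -inum 123456   # replace 123456 with the inode from /proc/locks

  # Processes still holding files open under the export directory.
  lsof +D /var/local/glusterfs/media_small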
Config:

# file: /etc/glusterfs/glusterfsd.vol

#
# Volumes
#
volume media-small
  type storage/posix
  option directory /var/local/glusterfs/media_small
end-volume

volume media-medium
  type storage/posix
  option directory /var/local/glusterfs/media_medium
end-volume

# POSIX locks
volume media-small-locks
  type features/posix-locks
  option mandatory-locks on
  subvolumes media-small
  # subvolumes trash   # enable this if you need trash can support (NOTE: not present in 1.3.0-pre5+ releases)
end-volume

volume media-medium-locks
  type features/posix-locks
  option mandatory-locks on
  subvolumes media-medium
  # subvolumes trash   # enable this if you need trash can support (NOTE: not present in 1.3.0-pre5+ releases)
end-volume

#
# Performance
#
volume media-small-iot
  type performance/io-threads
  subvolumes media-small-locks
  option thread-count 4              # default value is 1
end-volume

volume media-small-ioc
  type performance/io-cache
  option cache-size 128MB            # default is 32MB
  option page-size 128KB             # default is 128KB
  subvolumes media-small-iot
end-volume

volume media-small-wb
  type performance/write-behind
  #option flush-behind on            # default is off
  subvolumes media-small-ioc
end-volume

volume media-small-ra
  type performance/read-ahead
  subvolumes media-small-wb
  option page-size 256KB             # default is 256KB
  option page-count 4                # default is 2 - cache per file = (page-count x page-size)
  option force-atime-update no       # default is 'no'
end-volume

volume media-medium-iot
  type performance/io-threads
  subvolumes media-medium-locks
  option thread-count 4              # default value is 1
end-volume

volume media-medium-ioc
  type performance/io-cache
  option cache-size 128MB            # default is 32MB
  option page-size 128KB             # default is 128KB
  subvolumes media-medium-iot
end-volume

volume media-medium-wb
  type performance/write-behind
  #option flush-behind on            # default is off
  subvolumes media-medium-ioc
end-volume

volume media-medium-ra
  type performance/read-ahead
  subvolumes media-medium-wb
  option page-size 256KB             # default is 256KB
  option page-count 4                # default is 2 - cache per file = (page-count x page-size)
  option force-atime-update no       # default is 'no'
end-volume

#
# Server
#
volume server
  type protocol/server
  option transport-type tcp/server
  option auth.addr.media-small-ra.allow 10.0.*.*
  option auth.addr.media-medium-ra.allow 10.0.*.*
  # Autoconfiguration, e.g.:
  #   glusterfs -l /tmp/glusterfs.log --server=filer-04 ./Cache
  option client-volume-filename /etc/glusterfs/glusterfs.vol
  subvolumes media-small-ra media-medium-ra    # exported volumes
end-volume


# file: /etc/glusterfs/glusterfs.vol

#
# Clients
#
volume media-small-filer-04
  type protocol/client
  option transport-type tcp/client
  option remote-host filer-04.local
  option remote-subvolume media-small-ra
end-volume

volume media-small-filer-05
  type protocol/client
  option transport-type tcp/client
  option remote-host filer-05.local
  option remote-subvolume media-small-ra
end-volume

volume media-medium-filer-04
  type protocol/client
  option transport-type tcp/client
  option remote-host filer-04.local
  option remote-subvolume media-medium-ra
end-volume

volume media-medium-filer-05
  type protocol/client
  option transport-type tcp/client
  option remote-host filer-05.local
  option remote-subvolume media-medium-ra
end-volume

#
# Main volume
#
volume afr-small
  # AFR has been renamed to "Replicate" for simplicity.
  type cluster/replicate
  # Put the server with the least free disk space first:
  # "When doing a "df -h" on a client, the AVAILABLE disk space will display
  # the maximum disk space of the first AFR sub volume defined in the spec
  # file. So if you have two servers with 50 gigs and 100 gigs of free disk
  # space, and the server with 100 gigs is listed first, then you will see
  # 100 gigs available even though one server only has 50 gigs free."
  subvolumes media-small-filer-04 media-small-filer-05
end-volume

volume afr-medium
  # AFR has been renamed to "Replicate" for simplicity.
  type cluster/replicate
  subvolumes media-medium-filer-04 media-medium-filer-05
end-volume

#
# Performance
#
volume iot-small
  type performance/io-threads
  option thread-count 8              # default is 1
  subvolumes afr-small
end-volume

volume readahead-small
  type performance/read-ahead
  subvolumes iot-small
  option page-size 1MB               # default is 256KB
  option page-count 4                # default is 2 - cache per file = (page-count x page-size)
  option force-atime-update no       # default is 'no'
end-volume

volume iocache-small
  type performance/io-cache
  option cache-size 64MB             # default is 32MB
  option page-size 256KB             # default is 128KB
  subvolumes readahead-small
end-volume

volume wb-small
  type performance/write-behind
  option window-size 1MB             # max 4MB
  option flush-behind on             # default is off
  subvolumes iocache-small
end-volume

volume iot-medium
  type performance/io-threads
  option thread-count 8              # default is 1
  subvolumes afr-medium
end-volume

volume readahead-medium
  type performance/read-ahead
  subvolumes iot-medium
  option page-size 1MB               # default is 256KB
  option page-count 4                # default is 2 - cache per file = (page-count x page-size)
  option force-atime-update no       # default is 'no'
end-volume

volume iocache-medium
  type performance/io-cache
  option cache-size 64MB             # default is 32MB
  option page-size 256KB             # default is 128KB
  subvolumes readahead-medium
end-volume

volume wb-medium
  type performance/write-behind
  option window-size 1MB             # max 4MB
  option flush-behind on             # default is off
  subvolumes iocache-medium
end-volume

Best regards,

--
Greg

-------------- next part --------------
A non-text attachment was scrubbed...
Name: gluster_filer.png
Type: image/png
Size: 10124 bytes
Desc: not available
URL: <http://zresearch.com/pipermail/gluster-users/attachments/20090409/9c036c78/attachment-0001.png>