I wouldn't try putting Cyrus's databases on Gluster storage. I doubt the Berkeley DBs will even work, and since they can't be open on more than one machine at a time, clustered storage doesn't make sense for them anyway. Try sharing only the spool and keep the databases out of the picture, roughly along the lines of the sketch below. If you actually need an IMAP cluster, use Murder.
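To illustrate the split (the mount point below is only a placeholder, not taken from your volfiles): the configuration directory, where the Cyrus/Berkeley databases live, stays on local disk, and only the mail partition points at the GlusterFS client mount, something like this in imapd.conf:

   configdirectory: /var/lib/cyrus                # local disk -- the databases never touch Gluster
   partition-default: /mnt/glusterfs/cyrus-mail   # placeholder path for the shared spool mount

That way the mailbox files are the only thing sitting on the replicated volume.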
John

--
John Madden
Sr. UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden at ivytech.edu

On Dec 26, 2009, at 21:34, "David Touzeau" <david at touzeau.eu> wrote:

> Dear list,
>
> I'm using GlusterFS 3.0 and I'm trying to create a mirror for the Cyrus IMAP
> software, which uses two main directories: /var/lib/cyrus and /var/spool/cyrus/mail.
> Currently there is no replication between the servers, and I don't understand why.
> In debug mode I receive many "locking of / on child 0 failed: Function not
> implemented" messages; perhaps this is the main problem.
>
> Here is the client log:
>
> [2009-12-27 03:25:41] D [client-protocol.c:7019:notify] brick-cyrus-0-2: got GF_EVENT_CHILD_UP
> [2009-12-27 03:25:41] N [client-protocol.c:6224:client_setvolume_cbk] brick-cyrus-0-2: Connected to 192.168.1.219:6996, attached to remote volume 'brick-cyrus-0'.
> [2009-12-27 03:25:41] N [afr.c:2625:notify] cluster-1: Subvolume 'brick-cyrus-0-2' came back up; going online.
> [2009-12-27 03:25:41] N [afr.c:2625:notify] distribute: Subvolume 'cluster-1' came back up; going online.
> [2009-12-27 03:25:41] N [client-protocol.c:6224:client_setvolume_cbk] brick-cyrus-0-2: Connected to 192.168.1.219:6996, attached to remote volume 'brick-cyrus-0'.
> [2009-12-27 03:25:41] N [afr.c:2625:notify] cluster-1: Subvolume 'brick-cyrus-0-2' came back up; going online.
> [2009-12-27 03:25:41] N [afr.c:2625:notify] distribute: Subvolume 'cluster-1' came back up; going online.
> [2009-12-27 03:25:41] D [fuse-bridge.c:3079:fuse_thread_proc] fuse: pthread_cond_timedout returned non zero value ret: 0 errno: 0
> [2009-12-27 03:25:41] N [fuse-bridge.c:2931:fuse_init] glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13 kernel 7.12
> [2009-12-27 03:25:41] D [client-protocol.c:7019:notify] brick-cyrus-0-1: got GF_EVENT_CHILD_UP
> [2009-12-27 03:25:41] D [client-protocol.c:7019:notify] brick-cyrus-0-1: got GF_EVENT_CHILD_UP
> [2009-12-27 03:25:41] D [client-protocol.c:7019:notify] brick-cyrus-0-3: got GF_EVENT_CHILD_UP
> [2009-12-27 03:25:41] D [client-protocol.c:7019:notify] brick-cyrus-0-3: got GF_EVENT_CHILD_UP
> [2009-12-27 03:25:41] N [client-protocol.c:6224:client_setvolume_cbk] brick-cyrus-0-1: Connected to 192.168.1.239:6996, attached to remote volume 'brick-cyrus-0'.
> [2009-12-27 03:25:41] N [client-protocol.c:6224:client_setvolume_cbk] brick-cyrus-0-1: Connected to 192.168.1.239:6996, attached to remote volume 'brick-cyrus-0'.
> [2009-12-27 03:25:41] N [client-protocol.c:6224:client_setvolume_cbk] brick-cyrus-0-3: Connected to 192.168.1.238:6996, attached to remote volume 'brick-cyrus-0'.
> [2009-12-27 03:25:41] N [client-protocol.c:6224:client_setvolume_cbk] brick-cyrus-0-3: Connected to 192.168.1.238:6996, attached to remote volume 'brick-cyrus-0'.
> [2009-12-27 03:25:41] D [afr-self-heal-metadata.c:733:afr_sh_metadata_lk_cbk] cluster-1: locking of / on child 1 failed: Success
> [2009-12-27 03:25:41] D [afr-self-heal-metadata.c:733:afr_sh_metadata_lk_cbk] cluster-1: locking of / on child 0 failed: Function not implemented
> [2009-12-27 03:25:41] D [afr-self-heal-metadata.c:733:afr_sh_metadata_lk_cbk] cluster-1: locking of / on child 2 failed: Success
> [2009-12-27 03:25:41] D [afr-self-heal-metadata.c:733:afr_sh_metadata_lk_cbk] cluster-1: locking of / on child 0 failed: Function not implemented
> [2009-12-27 03:30:05] D [afr-self-heal-metadata.c:733:afr_sh_metadata_lk_cbk] cluster-1: locking of / on child 0 failed: Function not implemented
> [2009-12-27 03:30:07] D [afr-self-heal-metadata.c:733:afr_sh_metadata_lk_cbk] cluster-1: locking of / on child 0 failed: Function not implemented
> [2009-12-27 03:30:08] D [afr-self-heal-metadata.c:733:afr_sh_metadata_lk_cbk] cluster-1: locking of / on child 0 failed: Function not implemented
> [2009-12-27 03:30:09] D [afr-self-heal-metadata.c:733:afr_sh_metadata_lk_cbk] cluster-1: locking of / on child 0 failed: Function not implemented
> [2009-12-27 03:30:10] D [afr-self-heal-metadata.c:733:afr_sh_metadata_lk_cbk] cluster-1: locking of / on child 0 failed: Function not implemented
> [2009-12-27 03:30:11] D [afr-self-heal-metadata.c:733:afr_sh_metadata_lk_cbk] cluster-1: locking of / on child 0 failed: Function not implemented
> [2009-12-27 03:30:12] D [afr-self-heal-metadata.c:733:afr_sh_metadata_lk_cbk] cluster-1: locking of / on child 0 failed: Function not implemented
> [2009-12-27 03:30:13] D [afr-self-heal-metadata.c:733:afr_sh_metadata_lk_cbk] cluster-1: locking of / on child 0 failed: Function not implemented
> [2009-12-27 03:30:15] D [afr-self-heal-metadata.c:733:afr_sh_metadata_lk_cbk] cluster-1: locking of / on child 0 failed: Function not implemented
> [2009-12-27 03:30:16] D [afr-self-heal-metadata.c:733:afr_sh_metadata_lk_cbk] cluster-1: locking of / on child 0 failed: Function not implemented
>
> The server configuration:
>
> #---------------------- /var/lib/cyrus ----------------------
> volume posix-1
>   type storage/posix
>   option directory /var/lib/cyrus
> end-volume
>
> volume locks-1
>   type features/locks
>   subvolumes posix-1
> end-volume
>
> volume brick-cyrus-0
>   type performance/io-threads
>   option thread-count 8
>   subvolumes locks-1
> end-volume
>
> #---------------------- /var/spool/cyrus/mail ----------------------
>
> volume posix-2
>   type storage/posix
>   option directory /var/spool/cyrus/mail
> end-volume
>
> volume locks-2
>   type features/locks
>   subvolumes posix-2
> end-volume
>
> volume brick-mail-1
>   type performance/io-threads
>   option thread-count 8
>   subvolumes locks-2
> end-volume
>
> volume server
>   type protocol/server
>   subvolumes brick-cyrus-0 brick-mail-1
>   option transport-type tcp/server          # For TCP/IP transport
>   option auth.ip.brick-cyrus-0.allow *      # access to "brick-cyrus-0" volume
>   option auth.ip.brick-mail-1.allow *       # access to "brick-mail-1" volume
> end-volume
>
> This is my client vol file, connected to the 3 servers 192.168.1.239, 192.168.1.219
> and 192.168.1.238 for the /var/lib/cyrus directory:
> ----------------------------------------------------------------------
>
> # bricks on folder /var/lib/cyrus
> volume brick-cyrus-0-1
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.1.239          # IP of storage node 192.168.1.239
>   option remote-subvolume brick-cyrus-0     # /var/lib/cyrus
> end-volume
>
> volume brick-cyrus-0-2
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.1.219          # IP of storage node 192.168.1.219
>   option remote-subvolume brick-cyrus-0     # /var/lib/cyrus
> end-volume
>
> volume brick-cyrus-0-3
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.1.238          # IP of storage node 192.168.1.238
>   option remote-subvolume brick-cyrus-0     # /var/lib/cyrus
> end-volume
>
> #-----------------------------------------------------
> # bind bricks brick-cyrus-0-1 brick-cyrus-0-2 brick-cyrus-0-3 together for folder /var/lib/cyrus
> volume cluster-1
>   type cluster/replicate
>   subvolumes brick-cyrus-0-1 brick-cyrus-0-2 brick-cyrus-0-3
>   option replicate *:2
> end-volume
>
> # Create a mirror of cluster-1
> volume distribute
>   type cluster/replicate
>   subvolumes cluster-1
> end-volume
>
> volume writebehind
>   type performance/write-behind
>   option window-size 4MB
>   subvolumes distribute
> end-volume
>
> volume readahead
>   type performance/read-ahead
>   option page-count 4
>   subvolumes writebehind
> end-volume
>
> volume iocache
>   type performance/io-cache
>   option cache-size 1GB
>   subvolumes readahead
> end-volume
>
> volume quickread
>   type performance/quick-read
>   option max-file-size 64kB
>   subvolumes iocache
> end-volume
>
> volume statprefetch
>   type performance/stat-prefetch
>   subvolumes quickread
> end-volume
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users