Just a quick update... I have been very busy with other things today and still
have not tested everything, but I figured I would share this.

(My sites have been down today because of an unexpected second power outage at
the datacenter - so my above link was unavailable for a while)

Again... Do NOT use this in a production environment...

I have just installed using the deb on 2 nodes... and then ran
'/etc/init.d/glusterd start' on both.

Working on node1:

# gluster peer probe node2

I then checked things were fine on both nodes:

# dsh -a -M 'gluster peer status' | grep Uuid
node1: Uuid: 103c42f2-6b8e-4b49-8a25-0c1e90033bcd
node2: Uuid: 7d1f3b46-e3f4-4c79-a9e3-d6465cd5e313

(If you do not use dsh, you can run 'gluster peer status' on both nodes - I did
this to ensure both nodes are using different Uuids)

Set up a replicated volume (with only one pair of bricks, 'replica 2' gives
pure mirroring; distribution would need more brick pairs):

# gluster volume create test-volume replica 2 transport tcp node1:/mnt/tmp1 node2:/mnt/tmp2

Start the volume:

# gluster volume start test-volume

This time I used native mounting:

# mount -t glusterfs node1:/test-volume /mnt/data/mirror/

(after the short and usual delay)

Then I copied a small file (4KB) into the mirror directory:

# cp /some/path/file.txt /mnt/data/mirror/

I checked that the backend storage on both nodes had the file:

node1 # ls -al /mnt/tmp1
-rwxr--r-- 1 root root 3145 2010-11-24 07:50 file.txt

node2 # ls -al /mnt/tmp2
-rwxr--r-- 1 root root 3145 2010-11-24 07:50 file.txt

The frontend on node1 was also correct:

node1 # ls -al /mnt/data/mirror/
-rwxr--r-- 1 root root 3145 2010-11-24 07:50 file.txt

Superb and as expected... (glusterfs 3.1.0 had very high CPU usage in my
previous playtime - but this was quick and smooth)

OK, now I got curious... so on node2:

# mount -t glusterfs node2:/test-volume /mnt/data/mirror/

Copy a bigger file (20MB) into the mirror while on node2:

# cp /path/to/20MBfile.txt /mnt/data/mirror/

Again, quick and smooth... that's not right, is it?

node1 # ls -al /mnt/data/mirror/
-rw------- 1 root root 20205782 2010-11-24 07:57 20MBfile.txt

node2 # ls -al /mnt/data/mirror/
-rw------- 1 root root 20205782 2010-11-24 07:57 20MBfile.txt

This is in no way a proper stability test, and nobody should take it to mean
this is safe for production use. I next want to run multiple rsyncs on both
nodes (a rough sketch below) and give the whole lot a bit of a sweat. But
compared to the glusterfs 3.1.0 I toyed with, this did not show the overhead I
experienced before - and having only tested 2 files, I cannot yet tell whether
there are short reads or inconsistent data (the checksum comparison sketched
below should show that up). I will do a little more testing over the next few
days.
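For the rsync stress test, something along these lines is what I have in mind
- a minimal sketch, to be run on both nodes at the same time. The source path
(/usr/share/doc is just a convenient pile of small files) and the stress-*
directory names are made up for the test:

for i in 1 2 3 4; do
    # four concurrent writers into the same mounted volume
    rsync -a /usr/share/doc/ /mnt/data/mirror/stress-$(hostname)-$i/ &
done
wait

Running that on node1 and node2 simultaneously should give eight writers
hammering the mirror with lots of small files at once.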
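And to check for silently inconsistent data afterwards, comparing checksums
across the backends should do it - a quick sketch, assuming the same dsh setup
and the brick paths from this test (the sums reported by both nodes should
match):

# dsh -a -M 'md5sum /mnt/tmp*/20MBfile.txt'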
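Also, regarding the NFS freeze from my earlier mail quoted below: when I next
try NFS mounting I will test the nolock option - a sketch only, assuming the
built-in Gluster NFS server (which as far as I know wants NFS version 3 over
TCP; these are standard nfs mount options, not something I have verified
against this particular build):

# mount -t nfs -o vers=3,mountproto=tcp,nolock node1:/test-volume /mnt/data/mirror/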
Martin

On 23 November 2010 16:28, Deadpan110 <deadpan110 at gmail.com> wrote:
> This is totally unsupported by the glusterfs devs, so please read the
> following with care!
>
> 1. I have never built a deb before and this is my 1st ever attempt
> 2. The package may be missing a dependency or two
> 3. dpkg-depcheck told me that libasound was needed (I am unsure why)
> 4. I have not yet fully tested (read below)
> 5. I built this package to make glusterfs easier to install and remove
> on my playground of virtual nodes.
>
> WARNING and DISCLAIMER:
> Do NOT use this in a production environment - even more so as it has
> been packaged from a QA release.
> I will NOT take any responsibility for data loss, system crashes,
> earthquakes and acts of God - you download and use at your own risk.
> The official gluster devs will not help in any way, shape or form with
> any i386 installation of glusterfs at this current time of writing -
> so do not even ask!
>
> NOTES:
> 1. Built on Ubuntu Lucid 10.04 i686 2.6.32-21-generic-pae #32-Ubuntu
> 2. I have only tested on 1 node, and used NFS mounting at this point in
> time; gluster froze at about 7% done when I tried to copy a file
> (74MB) into the NFS mount (I am unsure if the NFS nolock option could
> have helped this).
> 3. I have not yet tried native fuse mounting.
>
> Again - use at your own risk!
>
> http://indbl.gs/glusterfs311qa91i386deb
> MD5: fcb8d7130390a51c334ade0aecdfffb6
>
> Martin
>