Hi Avati,

Thanks for the very prompt response; I've not tried on an alternative/clean system, as I've only got sane access to the cluster I'm trying it on...

To uninstall, I simply used "make uninstall" in the 1.2.3 build dir, but looking in /usr/lib/glusterfs I still had:

    1.3.0-pre3
    scheduler
    transport
    xlator

scheduler, transport & xlator are all empty dirs, or contain empty dirs. I've removed them from each machine for tidiness. I also had old libs loitering in /usr/local/lib/glusterfs - removed these... got suspicious and trawled my entire HDs for gluster, and steam-cleaned anything left behind after "make uninstall".

gluster--mainline--2.4 was then checked out to a new work area, and "make install" run on each machine (gluster src mounted over NFS).

Spec file - do you mean the .vol? I've attached them both (client & server); a rough sketch of their shape is in the P.S. below.

Machines are:

    n1, n3, n5:     glusterfs daemon, local directory hosted
    n7:             local dir and external USB disc hosted (alternative vol file)
    n2, n4, n6, n8: no drives hosted; these will mirror n1, n3, n5, n7 using the
                    replication setup once the basics are stable.

Then n1-8 are all clients, along with "head1". Everything's running RHEL (uh) 4; example uname from n1:

    Linux n1 2.6.9-42.ELsmp #1 SMP Wed Jul 12 23:32:02 EDT 2006 x86_64 x86_64 x86_64 GNU/Linux

n1-8 are all dual-CPU, dual-core => 4 Opteron cores each. head1 is a Sun x4600, so 4 CPUs, all dual-core (=> 8 cores).

I compiled glusterfs *without* "--disable-fuse-client" or "--disable-server", as 95% of the machines will be both servers and clients, so a single compilation is installed (via the NFS share) on all machines.

Servers are launched with:

    LD_LIBRARY_PATH=/usr/local/lib:/usr/lib glusterfsd -f /etc/glusterfs/glusterfs-server.vol

(the LD_LIBRARY_PATH is required to pick up fuse etc.) The USB-hosting server (n7) is launched with:

    LD_LIBRARY_PATH=/usr/local/lib:/usr/lib glusterfsd -f /etc/glusterfs/glusterfs-server-usb.vol

Clients are launched with:

    modprobe fuse
    LD_LIBRARY_PATH=/usr/local/lib:/usr/lib glusterfs -f /etc/glusterfs/glusterfs-client.vol /media/glusterfs

I think that's all the spec... :o)

Now, with this install... glusterfs has packed up on me! For instance, I mount glusterfs on /media/glusterfs:

    ls -l /media/glusterfs
    total 24
    drwxr-xr-x 3 root root 4096 May 30 14:57 .
    drwxr-xr-x 6 root root 4096 Jun 25 14:26 ..
    drwxr-xr-x 4 root root 4096 Jun 11 13:46 home

Looks OK, but:

    ls -l /media/glusterfs/home
    total 0

Ah. If I "cd /media/glusterfs/home", I can then cd into user dirs (e.g. "guest"), and they work fine. "ls" doesn't work, though... I've disabled write-behind, read-ahead and stat-prefetch to see if anything was interfering - no change.

The current state is as per the attached vol files (stat-prefetch disabled); I've also attached a log from the client run in debug mode...

Help! :o)

Ian
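P.S. For anyone reading this in the list archive without the attachments: the two .vol files follow the usual 1.3-style layout, roughly like the sketch below. The volume names, export path and host here are placeholders for illustration, not my actual config - the real files are in the attachments.

    # server side (shape of glusterfs-server.vol): export one local dir over TCP
    volume brick
      type storage/posix
      option directory /data/export      # placeholder path
    end-volume

    volume server
      type protocol/server
      option transport-type tcp/server
      option auth.ip.brick.allow *       # wide open while testing
      subvolumes brick
    end-volume

    # client side (shape of glusterfs-client.vol): one protocol/client per
    # server, with the performance translators (write-behind, read-ahead,
    # stat-prefetch mentioned above) stacked on top in the real file
    volume remote1
      type protocol/client
      option transport-type tcp/client
      option remote-host n1              # placeholder host
      option remote-subvolume brick
    end-volume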
________________________________
From: anand.avati@xxxxxxxxx [mailto:anand.avati@xxxxxxxxx] On Behalf Of Anand Avati
Sent: 25 June 2007 13:01
To: Ian Grimstead
Cc: gluster-devel@xxxxxxxxxx
Subject: Re: GlusterFS core dump (v1.2.3)

Can you please attach the spec file? Probably some stale binaries have been left over from the previous build... What were the steps you took to uninstall glusterfs? Does it work on a different system which had no traces of 1.2.3?

thanks,
avati

2007/6/25, Ian Grimstead <I.J.Grimstead@xxxxxxxxxxxxxxxx>:

I think something's up - I have just uninstalled 1.2.3 on each node, then obtained & built glusterfs--mainline--2.4; it looks like you need to run autogen.sh to produce the "configure" script? Did that, installed... and now I'm not getting any directories under my glusterfs mount, apart from a single (and empty) "/home".

So I have /media/glusterfs as the glusterfs mount point.

ls -al /media/glusterfs gives:

    total 24
    drwxr-xr-x 3 root root 4096 May 30 14:57 .
    drwxr-xr-x 3 root root 4096 May 11 17:32 ..
    drwxr-xr-x 4 root root 4096 Jun 11 13:46 home

But ls -al /media/glusterfs/home gives:

    total 0

Rather worrying. However, if I "cd" to a known directory, it works, even though that dir also lists as empty. If I "cat" a known file, it's there! Weird.

The log from the client just shows "stat-prefetch: flush on: /" 3 times after the initial handshake success of the servers... I've turned off prefetch caching on the client - no difference. :o(

Any idea what could have caused this?

Ian

--
Anand V. Avati