Re: Problem with new glusterfs installation...

Jonathan,
 It looks like the glusterfs client has exited or segfaulted. Is it
possible for you to get a backtrace from the core? (If it is not
generating a core, run 'ulimit -c unlimited' and then start glusterfs
with -N (non-daemon mode) and redo the steps that trigger the error.)
 That apart, please try the 1.3-pre4 release and see if you still get
the error; 1.2.3 is quite old and a lot has changed since then.
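[Editor's sketch of the core-collection steps described above; the paths
are the ones used later in this thread, and the debugging commands are
shown commented out since they only make sense on the affected machine.]

```shell
# 1. Allow core dumps of unlimited size in the shell that will start the client.
ulimit -c unlimited

# 2. Start the client in the foreground (-N, non-daemon mode) so a crash
#    drops a core file in the working directory:
# /usr/sbin/glusterfs -N -f /etc/glusterfs/glusterfs-client.vol \
#     --log-file=/var/log/glusterfs/glusterfs.log /mnt/test

# 3. Reproduce the failing file operation, then pull a backtrace from the core:
# gdb /usr/sbin/glusterfs core
# (gdb) bt
```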

thanks,
avati

2007/5/29, Jonathan Newman <jbnewm@xxxxxxxxx>:
Hey guys, I am relatively new to glusterfs and am having a bit of difficulty
getting a clustered fs up and running using it. Here are the details:
GlusterFS package: 1.2.3

3 servers total, 2 running glusterfsd and 1 as client to mount clustered fs.
The glusterfs-server.vol files on the two servers are identical and contain:
### File: /etc/glusterfs-server.vol - GlusterFS Server Volume Specification

### Export volume "brick" with the contents of "/data" directory.
volume brick
  type storage/posix                   # POSIX FS translator
  option directory /data               # Export this directory
end-volume

### Add network serving capability to above brick.
volume server
  type protocol/server
  option transport-type tcp/server     # For TCP/IP transport
  option client-volume-filename /etc/glusterfs/glusterfs-client.vol
  subvolumes brick
  option auth.ip.brick.allow 10.* # Allow access to "brick" volume
end-volume

The client file contains this:
### File: /etc/glusterfs/glusterfs-client.vol - GlusterFS Client Volume Specification

### Add client feature and attach to remote subvolume of server1
volume client1
  type protocol/client
  option transport-type tcp/client     # for TCP/IP transport
  option remote-host 10.20.70.1        # IP address of the remote brick
  option remote-subvolume brick         # name of the remote volume
end-volume

### Add client feature and attach to remote subvolume of server2
volume client2
  type protocol/client
  option transport-type tcp/client     # for TCP/IP transport
  option remote-host 10.20.70.2        # IP address of the remote brick
  option remote-subvolume brick         # name of the remote volume
end-volume

### Add unify feature to cluster "server1" and "server2". Associate an
### appropriate scheduler that matches your I/O demand.
volume brick
  type cluster/unify
  subvolumes client1 client2
  ### ** Round Robin (RR) Scheduler **
  option scheduler rr
  option rr.limits.min-free-disk 4GB          # Units in KB, MB and GB are allowed
  option rr.refresh-interval 10               # Check server brick after 10s
end-volume

Server daemons on both servers are started using:
/usr/sbin/glusterfsd --log-file=/var/log/glusterfs/glusterfs.log

And then I mount the file system on the client using this command:
/usr/sbin/glusterfs -f /etc/glusterfs/glusterfs-client.vol --log-file=/var/log/glusterfs/glusterfs.log /mnt/test

All appears well, and running mount on the client produces (among other items):
glusterfs:17983 on /mnt/test type fuse (rw,allow_other,default_permissions)
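[Editor's note: the mount table entry alone doesn't prove the client is
healthy; a FUSE mount stays listed even after the userspace daemon dies.
A quick probe, using a hypothetical helper name, might look like this —
stat() goes through the FUSE daemon, so it fails with ENOTCONN once the
daemon has exited:]

```shell
# Probe a mount point: any syscall on a dead FUSE mount fails, so a
# successful stat means the userspace daemon is still serving requests.
probe_mount() {
    if stat "$1" >/dev/null 2>&1; then
        echo "responding"
    else
        echo "not connected"
    fi
}

probe_mount /mnt/test
```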

However the logs on the servers show (both show same output in logs):
Tue May 29 11:56:29 2007 [DEBUG] tcp/server: Registering socket (4) for new transport object of 10.20.30.1
Tue May 29 11:56:29 2007 [DEBUG] server-protocol: mop_setvolume: received port = 1020
Tue May 29 11:56:29 2007 [DEBUG] server-protocol: mop_setvolume: IP addr = 10.*, received ip addr = 10.20.30.1
Tue May 29 11:56:29 2007 [DEBUG] server-protocol: mop_setvolume: accepted client from 10.20.30.1
Tue May 29 11:56:29 2007 [DEBUG] libglusterfs: full_rw: 0 bytes r/w instead of 113
Tue May 29 11:56:29 2007 [DEBUG] libglusterfs: full_rw:  Ñ÷·Ág, error string 'File exists'
Tue May 29 11:56:29 2007 [DEBUG] libglusterfs/protocol: gf_block_unserialize_transport: full_read of header failed
Tue May 29 11:56:29 2007 [DEBUG] protocol/server: cleaned up xl_private of 0x8050178
Tue May 29 11:56:29 2007 [DEBUG] tcp/server: destroying transport object for 10.20.30.1:1020 (fd=4)

And running any sort of file operation from within /mnt/test yields:
~ # cd /mnt/test; ls
ls: .: Transport endpoint is not connected
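[Editor's note: "Transport endpoint is not connected" (ENOTCONN) on a
FUSE mount means the kernel still has the mount registered but the
userspace filesystem process has died — consistent with the segfault
suspected above. A sketch of the usual triage on the client machine:]

```shell
# Is the glusterfs client process still alive?
pgrep glusterfs || echo "glusterfs client has exited"

# Clear the stale mount before restarting the client; either command may
# apply depending on the system, hence the fallback chain.
fusermount -u /mnt/test 2>/dev/null || umount /mnt/test 2>/dev/null || true
```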

10.20.30.1 is the client and 10.20.70.[1,2] are the servers.

Anyone have any pointers that may lead me in the correct direction?

Thanks.

-Jon
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel



--
Anand V. Avati



