* Tao Lin <linbaiye at gmail.com> [2012 08 27, 17:54]:
> There are issues with gluster on ext4, you have to use other file
> systems (eg. xfs, ext3) instead of ext4.

If you are referring to
http://joejulian.name/blog/glusterfs-bit-by-ext4-structure-change/
I don't think I was experiencing that problem, since I shouldn't have
an affected kernel version.

Update: now I am able to issue the command, and indeed noticed that one
of the nodes was offline. In the brick log I found:

----- cut here -----
patchset: git://git.gluster.com/glusterfs.git
signal received: 7
time of crash: 2012-08-24 23:22:30
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.3.0
/lib/libc.so.6(+0x33af0)[0x7f024da35af0]
/usr/lib/libglusterfs.so.0(__dentry_grep+0x8e)[0x7f024e7879de]
/usr/lib/libglusterfs.so.0(inode_grep+0x66)[0x7f024e787c56]
/usr/lib/glusterfs/3.3.0/xlator/protocol/server.so(resolve_entry_simple+0x91)[0x7f02491eb641]
/usr/lib/glusterfs/3.3.0/xlator/protocol/server.so(server_resolve_entry+0x24)[0x7f02491ebd14]
/usr/lib/glusterfs/3.3.0/xlator/protocol/server.so(server_resolve+0x98)[0x7f02491ebb88]
/usr/lib/glusterfs/3.3.0/xlator/protocol/server.so(server_resolve_all+0x9e)[0x7f02491ebcbe]
/usr/lib/glusterfs/3.3.0/xlator/protocol/server.so(resolve_and_resume+0x14)[0x7f02491ebd84]
/usr/lib/glusterfs/3.3.0/xlator/protocol/server.so(server_lookup+0x18f)[0x7f024920525f]
/usr/lib/libgfrpc.so.0(rpcsvc_handle_rpc_call+0x293)[0x7f024e550ce3]
/usr/lib/libgfrpc.so.0(rpcsvc_notify+0x93)[0x7f024e550e53]
/usr/lib/libgfrpc.so.0(rpc_transport_notify+0x28)[0x7f024e5518b8]
/usr/lib/glusterfs/3.3.0/rpc-transport/socket.so(socket_event_poll_in+0x34)[0x7f024acd0734]
/usr/lib/glusterfs/3.3.0/rpc-transport/socket.so(socket_event_handler+0xc7)[0x7f024acd0817]
/usr/lib/libglusterfs.so.0(+0x3e394)[0x7f024e79b394]
/usr/sbin/glusterfsd(main+0x58a)[0x407aaa]
/lib/libc.so.6(__libc_start_main+0xfd)[0x7f024da20c4d]
/usr/sbin/glusterfsd[0x404a59]
----- cut here -----

Regards
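P.S. One detail worth decoding from the log above: "signal received: 7"
on Linux/x86_64 is SIGBUS (a bus error, e.g. a bad access to a mapped
region), not the more common SIGSEGV (11). You can confirm the mapping
with the shell's signal table:

```shell
# On Linux, kill -l <num> prints the signal name without the SIG prefix.
# Signal 7 is BUS (bus error), which is what the brick log reports;
# signal 11 would be the more familiar SEGV.
kill -l 7    # prints: BUS
kill -l 11   # prints: SEGV
```

That distinction may help when searching the bug tracker, since SIGBUS
crashes are often filed separately from segfaults.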