Hello,
I have a very simple setup.
Server:
http://theendofthetunnel.de/server.txt
Client:
http://theendofthetunnel.de/client.txt
GlusterFS current TLA
Linux 2.6.23-r6
Patched FUSE module from the Gluster site
rsync 3.0.0_pre10 (tried various versions)
Only two boxes, one server, one client, GBit connection. Everything here
is about the backup volume. Both server and client mount it via fstab,
though the server doesn't actually do anything with the mount.
For three days I've been trying to rsync a bunch of smaller files (1.2TB
total) from a remote location to the GlusterFS mount on the client,
without success. rsync fails on random files on each run (never on the
same files):
http://theendofthetunnel.de/rsync.txt
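For context, the invocation is essentially of this shape (host and paths
are placeholders here; -A/-X are what drive the ACL/xattr calls that fail):

  rsync -aHAX --numeric-ids remote.example.com:/data/ /mnt/backup/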
To me it looks like rsync writes a file, then tries to set
permissions / owner / xattrs / ACLs, and the file it just wrote is
gone. And indeed, the file it complains about is neither in the namespace
nor on any brick.
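For what it's worth, that race could probably be checked outside rsync
with a trivial loop like this (the path is just an example):

  # create files on the GlusterFS mount and immediately chmod them;
  # any "lost" output means the file vanished between the two calls
  i=0
  while [ $i -lt 10000 ]; do
    f=/mnt/backup/racetest.$i
    echo test > "$f" && chmod 600 "$f" || echo "lost: $f"
    i=$((i+1))
  done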
I tried the following:
- Stripped ACLs and extended attributes
  At least the "No such file or directory" messages related to those
  are gone
- Disabled write-behind / read-ahead (see the volfile snippet below)
  Doesn't make any difference
- Upgraded from TLA 628 to current
  Far fewer files seem to fail!
- Tried the same on completely different hardware
  Same issue
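The write-behind / read-ahead stanzas I removed from the client volfile
looked roughly like this (option values here are illustrative; the real
config is in client.txt above):

  volume readahead
    type performance/read-ahead
    option page-size 128KB      # illustrative value
    subvolumes client
  end-volume

  volume writebehind
    type performance/write-behind
    option aggregate-size 1MB   # illustrative value
    subvolumes readahead
  end-volume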
The only other thing I noticed is these messages in the server log:
2008-02-20 02:18:18 E [unify.c:112:unify_buf_cbk] unify: samsung750 returned 2
2008-02-20 02:22:43 E [unify.c:112:unify_buf_cbk] unify: backup-ns returned 2
2008-02-20 02:22:43 E [unify.c:112:unify_buf_cbk] unify: samsung750 returned 2
2008-02-20 02:27:55 E [unify.c:112:unify_buf_cbk] unify: backup-ns returned 2
2008-02-20 02:27:55 E [unify.c:112:unify_buf_cbk] unify: wdext500 returned 2
2008-02-20 02:35:36 E [unify.c:112:unify_buf_cbk] unify: backup-ns returned 2
2008-02-20 02:35:36 E [unify.c:112:unify_buf_cbk] unify: wdext500 returned 2
(many more of those)
I'm not sure whether these are related to the rsync errors at all; the 2
is presumably errno 2 (ENOENT), which at least fits, but their
timestamps don't really match the rsync messages.
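If it helps, I can run the next attempt under strace to capture the
exact call that gets the ENOENT, along these lines:

  strace -f -tt -e trace=file -o /tmp/rsync.strace \
      rsync -aHAX remote.example.com:/data/ /mnt/backup/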
If anyone could lend me a hand debugging this, I'd really appreciate it.
--
Best regards,
Hannes Dorbath