Can you mail us the logs?
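A client log captured at a higher log level (e.g. mounting with -L DEBUG, if your build accepts that option) would be most useful. Loading a debug/trace translator at the top of the client spec can also record every call that reaches the mount point; this is only a sketch, assuming the debug/trace translator is present at your patch level, with io-cache taken from your client spec below:

volume trace
  type debug/trace
  subvolumes io-cache
end-volume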
--
gowda
On Mon, Dec 8, 2008 at 3:07 PM, nicolas prochazka <prochazka.nicolas@xxxxxxxxx> wrote:
Hi,
It seems that glusterfs--mainline--3.0--patch-717 has a new problem which does not appear with glusterfs--mainline--3.0--patch-710.
Now I get:
ls: cannot open directory /mnt/vdisk/: Software caused connection abort
Regards,
Nicolas Prochazka.
My client spec file:
volume brick1
  type protocol/client
  option transport-type tcp/client   # for TCP/IP transport
  option remote-host 10.98.98.1      # IP address of server1
  option remote-subvolume brick      # name of the remote volume on server1
end-volume

volume brick2
  type protocol/client
  option transport-type tcp/client   # for TCP/IP transport
  option remote-host 10.98.98.2      # IP address of server2
  option remote-subvolume brick      # name of the remote volume on server2
end-volume

volume afr
  type cluster/afr
  subvolumes brick1 brick2
end-volume

volume iothreads
  type performance/io-threads
  option thread-count 4
  option cache-size 32MB
  subvolumes afr
end-volume

volume io-cache
  type performance/io-cache
  option cache-size 256MB             # default is 32MB
  option page-size 1MB                # default is 128KB
  option force-revalidate-timeout 2   # default is 1
  subvolumes iothreads
end-volume
My server spec file:
volume brickless
  type storage/posix
  option directory /mnt/disks/export
end-volume

volume brick
  type features/posix-locks
  option mandatory on   # enables mandatory locking on all files
  subvolumes brickless
end-volume

volume server
  type protocol/server
  subvolumes brick
  option transport-type tcp
  option auth.addr.brick.allow 10.98.98.*
end-volume
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel
--
hard work often pays off after time, but laziness always pays off now