I'm using a distribute configuration; here are the volume files.
<client configuration>
volume client01
  type protocol/client
  option transport-type tcp
  option remote-host 10.30.3.15
  option remote-port 6996
  option username hwuser
  option password otepass
  option remote-subvolume brick01
end-volume

volume client02
  type protocol/client
  option transport-type tcp
  option remote-host 10.30.3.15
  option remote-port 6996
  option username hwuser
  option password otepass
  option remote-subvolume brick02
end-volume

volume client03
  type protocol/client
  option transport-type tcp
  option remote-host 10.30.3.22
  option remote-port 6996
  option username hwuser
  option password otepass
  option remote-subvolume brick01
end-volume

volume client04
  type protocol/client
  option transport-type tcp
  option remote-host 10.30.3.22
  option remote-port 6996
  option username hwuser
  option password otepass
  option remote-subvolume brick02
end-volume

volume client05
  type protocol/client
  option transport-type tcp
  option remote-host 10.30.3.21
  option remote-port 6996
  option username hwuser
  option password otepass
  option remote-subvolume brick01
end-volume

volume client06
  type protocol/client
  option transport-type tcp
  option remote-host 10.30.3.21
  option remote-port 6996
  option username hwuser
  option password otepass
  option remote-subvolume brick02
end-volume

volume distribute
  type cluster/distribute
  subvolumes client01 client02 client03 client04 client05 client06
end-volume
<one of the server volume files; the others are the same>
volume posix01
  type storage/posix
  option directory /home/export
end-volume

volume posix02
  type storage/posix
  option directory /home2/export
end-volume

volume locks01
  type features/locks
  subvolumes posix01
end-volume

volume locks02
  type features/locks
  subvolumes posix02
end-volume

volume brick01
  type performance/io-threads
  option thread-count 8
  subvolumes locks01
end-volume

volume brick02
  type performance/io-threads
  option thread-count 8
  subvolumes locks02
end-volume

volume server
  type protocol/server
  subvolumes brick01 brick02
  option transport-type tcp
  option auth.login.brick01.allow hwuser
  option auth.login.brick02.allow hwuser
  option auth.login.hwuser.password otepass
  #option auth.addr.brick01.allow *
  #option auth.addr.brick02.allow *
end-volume
I've attached my test program.
Thanks
DongMin Yu
HOSTWAY IDC Corp. / R&D Principal Researcher
TEL. +822 2105 6037
FAX. +822 2105 6019
CELL. +8216 2086 1357
EMAIL: min.yu@xxxxxxxxxxxxxxx
Website: http://www.hostway.com
NOTICE: This email and any file transmitted are confidential and/or
legally privileged and intended only for the person(s) directly
addressed. If you are not the intended recipient, any use, copying,
transmission, distribution, or other forms of dissemination is strictly
prohibited. If you have received this email in error, please notify the
sender immediately and permanently delete the email and files, if any.
-----Original Message-----
From: Shehjar Tikoo [mailto:shehjart@xxxxxxxxxxx]
Sent: Friday, July 31, 2009 9:34 PM
To: Dongmin Yu
Cc: gluster-devel@xxxxxxxxxx
Subject: Re: weird response of gluster_readdir
Dongmin Yu wrote:
Hello,
I'm using glusterfs-2.0.4 and building a C program with
libglusterfsclient.
I created a directory, 'test', and wrote a file, 'hello.txt', to that
directory on a GlusterFS-mounted volume.
Then I wanted to list all the files and sub-directories in that directory.
My code was as follows:
======
struct dirent *dirp = NULL;
glusterfs_dir_t dirfd = NULL;
char *path = "/gfs_mount/test/";
int i = 0;

dirfd = glusterfs_opendir(path);
Before you can use the libglusterfsclient API, you need to set up
a few things using the glusterfs_mount call. If that wasn't done,
glusterfs_opendir should have returned NULL; that's the first
bug here, I think. Can you confirm whether you called glusterfs_mount
in the full program?
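For reference, here is roughly what that setup might look like. This is
only a minimal sketch: the glusterfs_init_params_t field names used below
(specfile, logfile, loglevel) are from memory of the 2.0.x
libglusterfsclient.h and should be checked against your copy of the
header, and the volfile/log paths are just placeholders.
======
/* Minimal sketch, not a verbatim example from the sources. */
#include <libglusterfsclient.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    glusterfs_init_params_t ipars;
    glusterfs_dir_t dirfd = NULL;

    memset(&ipars, 0, sizeof(ipars));
    ipars.specfile = "/etc/glusterfs/client.vol";   /* placeholder: your client volfile */
    ipars.logfile  = "/tmp/libglusterfsclient.log"; /* placeholder log path */
    ipars.loglevel = "error";

    /* Register the virtual mount point that later paths resolve against
     * (assuming a return value of 0 on success). */
    if (glusterfs_mount("/gfs_mount", &ipars) != 0) {
        fprintf(stderr, "glusterfs_mount failed\n");
        return 1;
    }

    /* Only after a successful mount should glusterfs_opendir succeed;
     * always check for NULL here. */
    dirfd = glusterfs_opendir("/gfs_mount/test/");
    if (dirfd == NULL) {
        fprintf(stderr, "glusterfs_opendir returned NULL\n");
        return 1;
    }

    /* ... glusterfs_readdir loop goes here ... */

    glusterfs_closedir(dirfd);
    return 0;
}
======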
Have you looked at booster? It is a library that you LD_PRELOAD
under your regular applications so that file system access goes
through libglusterfsclient, without having to adapt the
application to the library.
For more info, see
http://www.gluster.org/docs/index.php/BoosterConfiguration
-Shehjar
while ((dirp = glusterfs_readdir(dirfd)) != NULL) {
    printf("## %s %d %d\n", dirp->d_name, dirp->d_type, dirp->d_reclen);
    for (i = 0; i < 256; i++) {
        printf("%d ", dirp->d_name[i]);
    }
    printf("\n");
}
glusterfs_closedir(dirfd);
======
The result I expected was:
## Hello.txt 8 24
## . 4 16
## .. 4 16
But what I actually got was:
## 0 74
0 0 0 0 0 0 0 0 46 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0
## 0 24578
0 0 0 0 0 0 0 0 46 46 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0
## | 135 48751
124 0 0 0 0 0 0 0 104 101 108 108 111 46 116 120 116 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0
As you can see, the first 8 bytes contain garbage and the d_type value
is not correct.
Is this a bug in readdir, or am I misusing the library?
Thanks
------------------------------------------------------------------------
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel