BTW: This is the output of volume info and status.
u1@u1-virtual-machine:~$ sudo gluster volume info
Volume Name: mysqldata
Type: Replicate
Volume ID: 27e6161b-d2d0-4369-8ef0-acf18532af73
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.53.218:/data/gv0/brick1/mysqldata
Brick2: 192.168.53.221:/data/gv0/brick1/mysqldata
u1@u1-virtual-machine:~$ sudo gluster volume status
Status of volume: mysqldata
Gluster process                                    Port    Online  Pid
------------------------------------------------------------------------------
Brick 192.168.53.218:/data/gv0/brick1/mysqldata    49154   Y       2071
Brick 192.168.53.221:/data/gv0/brick1/mysqldata    49153   Y       2170
NFS Server on localhost                            2049    Y       2066
Self-heal Daemon on localhost                      N/A     Y       2076
NFS Server on 192.168.53.221                       2049    Y       2175
Self-heal Daemon on 192.168.53.221                 N/A     Y       2180
There are no active volume tasks
2014/1/18 Yandong Yao <yydzero@xxxxxxxxx>
Hi Guys,

I am testing glusterfs and have configured a replicated volume (replica=2 on two virtual machines). After playing with the volume for a while, inconsistent data is reported by 'heal volname info':

u1@u1-virtual-machine:~$ sudo gluster volume heal mysqldata info
Gathering Heal info on volume mysqldata has been successful

Brick 192.168.53.218:/data/gv0/brick1/mysqldata
Number of entries: 1
<gfid:0ff1a4e1-b14c-41d6-826b-e749a4e6ec7f>

Brick 192.168.53.221:/data/gv0/brick1/mysqldata
Number of entries: 1
/ibdata1
1) What does this mean? Why is the entry a gfid on one host, while on the other host it is the file path itself?
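(An aside on the gfid notation, for anyone hitting the same thing: on each brick, GlusterFS keeps a hard link for every file under .glusterfs/<aa>/<bb>/<full-gfid>, where aa and bb are the first two byte pairs of the gfid. So a <gfid:...> entry can usually be mapped back to a path by searching the brick for other links to the same inode. A rough sketch only, assuming GNU find and the brick path from the volume info above; run it on the 192.168.53.218 node:

# The gfid 0ff1a4e1-b14c-41d6-826b-e749a4e6ec7f is hard-linked at
# <brick>/.glusterfs/0f/f1/<gfid> on the brick that reported it.
BRICK=/data/gv0/brick1/mysqldata
GFID=0ff1a4e1-b14c-41d6-826b-e749a4e6ec7f
# List every other hard link to that inode, i.e. the file's real path(s):
sudo find "$BRICK" -samefile "$BRICK/.glusterfs/0f/f1/$GFID" \
     -not -path "*/.glusterfs/*"

On this setup that should print /data/gv0/brick1/mysqldata/ibdata1, i.e. the same file the other brick reports by path.)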
2) After a while (maybe 2 minutes), re-running heal info gives the following output. What happened behind the scenes? Why did the entry change from a gfid to a file path?
u1@u1-virtual-machine:~$ sudo gluster volume heal mysqldata info
Gathering Heal info on volume mysqldata has been successful

Brick 192.168.53.218:/data/gv0/brick1/mysqldata
Number of entries: 1
/ibdata1

Brick 192.168.53.221:/data/gv0/brick1/mysqldata
Number of entries: 1
/ibdata1

u1@u1-virtual-machine:~$ sudo gluster volume heal mysqldata info split-brain
Gathering Heal info on volume mysqldata has been successful

Brick 192.168.53.218:/data/gv0/brick1/mysqldata
Number of entries: 0

Brick 192.168.53.221:/data/gv0/brick1/mysqldata
Number of entries: 0

3) I tried both 'heal' and 'heal full', but the heal does not seem to work; I still get the output above. How can I heal this case manually? The getfattr output follows.
u1@u1-virtual-machine:~$ sudo getfattr -e hex -m . -d /data/gv0/brick1/mysqldata/ibdata1
getfattr: Removing leading '/' from absolute path names
# file: data/gv0/brick1/mysqldata/ibdata1
trusted.afr.mysqldata-client-0=0x000000010000000000000000
trusted.afr.mysqldata-client-1=0x000000010000000000000000
trusted.gfid=0x0ff1a4e1b14c41d6826be749a4e6ec7f
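(For readers decoding those values: each trusted.afr.<volume>-client-<N> xattr is a changelog of three 32-bit big-endian counters, pending data, metadata, and entry operations against brick N, in that order. So 0x00000001 0000000000000000 means one pending data operation. When each brick carries a non-zero data counter for the other, neither copy can be picked as the heal source automatically. If you are confident which copy is good, the usual manual fix is to clear the accusing changelog on the brick you do not trust and then re-trigger the heal. This is only a sketch, assuming the copy on 192.168.53.218 (client-0) is the good one; double-check which copy you trust before touching xattrs:

# Run on the brick you do NOT trust (here: 192.168.53.221).
F=/data/gv0/brick1/mysqldata/ibdata1
# Stop this brick from accusing brick 0, so the heal flows 0 -> 1:
sudo setfattr -n trusted.afr.mysqldata-client-0 \
     -v 0x000000000000000000000000 "$F"
# Re-trigger the heal, either via the CLI ...
sudo gluster volume heal mysqldata
# ... or by looking the file up through a client mount
# (/mnt/mysqldata is a hypothetical mount point; substitute your own):
sudo mount -t glusterfs 192.168.53.218:/mysqldata /mnt/mysqldata
sudo stat /mnt/mysqldata/ibdata1

Afterwards, 'gluster volume heal mysqldata info' should eventually show zero entries on both bricks.)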
Any comments are welcome, and thanks very much in advance!

Regards,
Yandong