are they no longer syncing?

I have a two-node Gluster cluster set up with iSCSI, using image files stored on the Gluster volume as LUNs. The nodes do appear to be syncing, but I have a few questions, and I appreciate any help you can give me. Thanks for your time!
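For context, this is roughly how the volume was created and mounted. I'm reconstructing the commands from memory, so the exact options may differ from what I originally ran, and I'm assuming a two-brick replica 2 layout:

# run from gluster1, after glusterfs-server was installed on both nodes
gluster peer probe gluster2
gluster volume create volume1 replica 2 \
    gluster1:/var/gluster-storage gluster2:/var/gluster-storage
gluster volume start volume1

# the volume is FUSE-mounted on each node, and the iSCSI LUN backing
# files (the image files) live under that mount point
mount -t glusterfs gluster1:/volume1 /mnt/glusterfs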

1) Why does the second brick show as N for online?
2) Why is the self-heal daemon shown as N/A? How can I correct that, if it needs to be corrected? (My planned checks for 1 and 2 are just below.)
3) Should I really be mounting the Gluster volume on each Gluster node for iSCSI access, or should I be accessing /var/gluster-storage directly?
4) If I only have about 72GB of files stored in Gluster, why does each Gluster host use about 155GB? Are there duplicates stored somewhere, and why? (My measurements are at the end of this mail.)
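
For questions 1 and 2, these are the checks I'm planning to run next; the commands are my best guess at the right ones, so please correct me if there is a better way to bring the downed brick back:

# check whether any files are pending heal between the two bricks
gluster volume heal volume1 info

# more detail on each brick, including the one showing N for online
gluster volume status volume1 detail

# if the brick process on gluster2 simply isn't running, my
# understanding is that "start ... force" respawns it without
# touching the brick that is already up
gluster volume start volume1 force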

root@gluster1:~# gluster volume status volume1
Status of volume: volume1
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster1:/var/gluster-storage          49152     0          Y       3043
Brick gluster2:/var/gluster-storage          N/A       N/A        N       N/A
NFS Server on localhost                      2049      0          Y       3026
Self-heal Daemon on localhost                N/A       N/A        Y       3034
NFS Server on gluster2                       2049      0          Y       2738
Self-heal Daemon on gluster2                 N/A       N/A        Y       2743

Task Status of Volume volume1
------------------------------------------------------------------------------
There are no active volume tasks

root@gluster1:~# gluster peer status
Number of Peers: 1

Hostname: gluster2
Uuid: abe7ee21-bea9-424f-ac5c-694bdd989d6b
State: Peer in Cluster (Connected)
root@gluster1:~#
root@gluster1:~# mount | grep gluster
gluster1:/volume1 on /mnt/glusterfs type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)


root@gluster2:~# gluster volume status volume1
Status of volume: volume1
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster1:/var/gluster-storage          49152     0          Y       3043
Brick gluster2:/var/gluster-storage          N/A       N/A        N       N/A
NFS Server on localhost                      2049      0          Y       2738
Self-heal Daemon on localhost                N/A       N/A        Y       2743
NFS Server on gluster1.mgr.example.com       2049      0          Y       3026
Self-heal Daemon on gluster1.mgr.example.com N/A       N/A        Y       3034

Task Status of Volume volume1
------------------------------------------------------------------------------
There are no active volume tasks

root@gluster2:~# gluster peer status
Number of Peers: 1

Hostname: gluster1.mgr.example.com
Uuid: dff9118b-a2bd-4cd8-b562-0dfdbd2ea8a3
State: Peer in Cluster (Connected)
root@gluster2:~#
root@gluster2:~# mount | grep gluster
gluster1:/volume1 on /mnt/glusterfs type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
root@gluster2:~#
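
For question 4, below is roughly how I have been comparing disk usage on the bricks with what the volume holds; this is my best guess at comparable measurements, so I'm happy to re-run whatever is more useful:

# usage of the brick directory on each node (this includes the
# .glusterfs metadata tree that GlusterFS keeps inside the brick)
du -sh /var/gluster-storage

# usage of the same data as seen through the FUSE mount of the volume
du -sh /mnt/glusterfs

# overall filesystem usage on each node
df -h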