Hi,
Could you upload the quotad.log file from any one of the nodes in the cluster? The file is located under /var/run/glusterfs/.
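For example (a minimal illustration; the host name is just a placeholder for one of your nodes):

  # copy the quota daemon log off one node for inspection
  scp root@ib-storage1:/var/run/glusterfs/quotad.log ./quotad-ib-storage1.log
  # or just view the most recent entries in place
  ssh root@ib-storage1 'tail -n 200 /var/run/glusterfs/quotad.log'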
-Krutika
From: "Geoffrey Letessier" <geoffrey.letessier@xxxxxxx>
To: gluster-users@xxxxxxxxxxx
Sent: Tuesday, September 16, 2014 3:24:48 PM
Subject: Quota issue on GlusterFS 3.5.2-1

Dear All,

We have been hitting an issue with our storage infrastructure since I enabled the quota service on our main storage volume. With quota active, any attempt to write a new file under a path that has a quota defined fails with "Transport endpoint is not connected"; the error disappears if we disable quota on the volume (or just on the targeted subdirectory). I also note that no quota daemon seems to be running on any of the storage nodes.

Here is some information about my storage volume (note the Quota Daemon lines, which are what surprise me):

[root@hades ~]# gluster volume status vol_home
Status of volume: vol_home
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick ib-storage1:/export/brick_home/brick1     49164   Y       7373
Brick ib-storage2:/export/brick_home/brick1     49160   Y       6809
Brick ib-storage3:/export/brick_home/brick1     49152   Y       3436
Brick ib-storage4:/export/brick_home/brick1     49152   Y       3315
Brick ib-storage1:/export/brick_home/brick2     49166   Y       7380
Brick ib-storage2:/export/brick_home/brick2     49162   Y       6815
Brick ib-storage3:/export/brick_home/brick2     49154   Y       3440
Brick ib-storage4:/export/brick_home/brick2     49154   Y       3319
Self-heal Daemon on localhost                   N/A     Y       22095
Quota Daemon on localhost                       N/A     N       N/A
Self-heal Daemon on ib-storage3                 N/A     Y       16370
Quota Daemon on ib-storage3                     N/A     N       N/A
Self-heal Daemon on 10.0.4.1                    N/A     Y       14686
Quota Daemon on 10.0.4.1                        N/A     N       N/A
Self-heal Daemon on ib-storage4                 N/A     Y       16172
Quota Daemon on ib-storage4                     N/A     N       N/A

Task Status of Volume vol_home
------------------------------------------------------------------------------
There are no active volume tasks
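To double-check the "Quota Daemon ... N" lines above, I also looked for a quotad process directly on the nodes; every node comes back empty. For instance (host name as an example):

  # look for a running quota daemon on a storage node
  ssh root@ib-storage1 'ps aux | grep "[q]uotad"'
  # the bracketed pattern keeps grep from matching itself; no output means no quotad

The per-brick detail, for completeness: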
[root@hades ~]# gluster volume status vol_home detail
Status of volume: vol_home
------------------------------------------------------------------------------
Brick                : Brick ib-storage1:/export/brick_home/brick1
Port                 : 49164
Online               : Y
Pid                  : 7373
File System          : xfs
Device               : /dev/mapper/storage1--block1-st1--blk1--home
Mount Options        : rw,noatime,nodiratime,attr2,quota
Inode Size           : 256
Disk Space Free      : 6.9TB
Total Disk Space     : 17.9TB
Inode Count          : 3853515968
Free Inodes          : 3845133649
------------------------------------------------------------------------------
Brick                : Brick ib-storage2:/export/brick_home/brick1
Port                 : 49160
Online               : Y
Pid                  : 6809
File System          : xfs
Device               : /dev/mapper/storage2--block1-st2--blk1--home
Mount Options        : rw,noatime,nodiratime,attr2,quota
Inode Size           : 256
Disk Space Free      : 6.9TB
Total Disk Space     : 17.9TB
Inode Count          : 3853515968
Free Inodes          : 3845133649
------------------------------------------------------------------------------
Brick                : Brick ib-storage3:/export/brick_home/brick1
Port                 : 49152
Online               : Y
Pid                  : 3436
File System          : xfs
Device               : /dev/mapper/storage3--block1-st3--blk1--home
Mount Options        : rw,noatime,nodiratime,attr2,quota
Inode Size           : 256
Disk Space Free      : 7.4TB
Total Disk Space     : 17.9TB
Inode Count          : 3853515968
Free Inodes          : 3845131362
------------------------------------------------------------------------------
Brick                : Brick ib-storage4:/export/brick_home/brick1
Port                 : 49152
Online               : Y
Pid                  : 3315
File System          : xfs
Device               : /dev/mapper/storage4--block1-st4--blk1--home
Mount Options        : rw,noatime,nodiratime,attr2,quota
Inode Size           : 256
Disk Space Free      : 7.4TB
Total Disk Space     : 17.9TB
Inode Count          : 3853515968
Free Inodes          : 3845131363
------------------------------------------------------------------------------
Brick                : Brick ib-storage1:/export/brick_home/brick2
Port                 : 49166
Online               : Y
Pid                  : 7380
File System          : xfs
Device               : /dev/mapper/storage1--block2-st1--blk2--home
Mount Options        : rw,noatime,nodiratime,attr2,quota
Inode Size           : 256
Disk Space Free      : 6.8TB
Total Disk Space     : 17.9TB
Inode Count          : 3853515968
Free Inodes          : 3845128559
------------------------------------------------------------------------------
Brick                : Brick ib-storage2:/export/brick_home/brick2
Port                 : 49162
Online               : Y
Pid                  : 6815
File System          : xfs
Device               : /dev/mapper/storage2--block2-st2--blk2--home
Mount Options        : rw,noatime,nodiratime,attr2,quota
Inode Size           : 256
Disk Space Free      : 6.8TB
Total Disk Space     : 17.9TB
Inode Count          : 3853515968
Free Inodes          : 3845128559
------------------------------------------------------------------------------
Brick                : Brick ib-storage3:/export/brick_home/brick2
Port                 : 49154
Online               : Y
Pid                  : 3440
File System          : xfs
Device               : /dev/mapper/storage3--block2-st3--blk2--home
Mount Options        : rw,noatime,nodiratime,attr2,quota
Inode Size           : 256
Disk Space Free      : 7.0TB
Total Disk Space     : 17.9TB
Inode Count          : 3853515968
Free Inodes          : 3845124761
------------------------------------------------------------------------------
Brick                : Brick ib-storage4:/export/brick_home/brick2
Port                 : 49154
Online               : Y
Pid                  : 3319
File System          : xfs
Device               : /dev/mapper/storage4--block2-st4--blk2--home
Mount Options        : rw,noatime,nodiratime,attr2,quota
Inode Size           : 256
Disk Space Free      : 7.0TB
Total Disk Space     : 17.9TB
Inode Count          : 3853515968
Free Inodes          : 3845124761
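For completeness, the quota configuration behind the "features.quota: on" option visible in the volume info below was applied roughly as follows (reconstructed from memory; the /admin_team limit matches the quota list output further down):

  # enable quota on the volume, then set a 1 TB hard limit on a directory
  gluster volume quota vol_home enable
  gluster volume quota vol_home limit-usage /admin_team 1TB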
[root@hades ~]# gluster volume info vol_home

Volume Name: vol_home
Type: Distributed-Replicate
Volume ID: f6ebcfc1-b735-4a0e-b1d7-47ed2d2e7af6
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp,rdma
Bricks:
Brick1: ib-storage1:/export/brick_home/brick1
Brick2: ib-storage2:/export/brick_home/brick1
Brick3: ib-storage3:/export/brick_home/brick1
Brick4: ib-storage4:/export/brick_home/brick1
Brick5: ib-storage1:/export/brick_home/brick2
Brick6: ib-storage2:/export/brick_home/brick2
Brick7: ib-storage3:/export/brick_home/brick2
Brick8: ib-storage4:/export/brick_home/brick2
Options Reconfigured:
diagnostics.brick-log-level: CRITICAL
auth.allow: localhost,127.0.0.1,10.*
nfs.disable: on
performance.cache-size: 64MB
performance.write-behind-window-size: 1MB
performance.quick-read: on
performance.io-cache: on
performance.io-thread-count: 64
features.quota: on

As you can see below, the CLI never prints the full quota list (even after waiting a couple of hours), but I can get quota information when I specify the quota path explicitly:

[root@hades ~]# gluster volume quota vol_home list
                  Path                   Hard-limit  Soft-limit     Used   Available
--------------------------------------------------------------------------------
^C
[root@hades ~]# gluster volume quota vol_home list /admin_team
                  Path                   Hard-limit  Soft-limit     Used   Available
--------------------------------------------------------------------------------
/admin_team                                   1.0TB         80%    3.6GB    1020.4GB

Additionally, I note that the quota-crawl.log file keeps growing.

For information:
- all storage nodes are running CentOS 6.5;
- these servers previously ran GlusterFS 3.3 but, after removing the old GlusterFS packages and physically rebuilding all the bricks (RAID60 -> RAID6, doubling my storage node count, and re-importing all my data into the new volume), I installed GlusterFS 3.5.2. Of course, all storage nodes have been restarted several times since the upgrade.

However, I note a troubling thing in the Gluster log file:

[root@hades ~]# gluster --version
glusterfs 3.5.2 built on Jul 31 2014 18:47:54
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@hades ~]# cat /var/log/glusterfs/home.log | grep "version numbers are not same" | tail -1
[2014-09-15 10:35:35.516925] I [client-handshake.c:1474:client_setvolume_cbk] 3-vol_home-client-7: Server and Client lk-version numbers are not same, reopening the fds
[root@hades ~]# cat /var/log/glusterfs/home.log | grep "GlusterFS 3.3" | tail -1
[2014-09-15 10:35:35.516082] I [client-handshake.c:1677:select_server_supported_programs] 3-vol_home-client-7: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[root@hades ~]# rpm -qa gluster*
glusterfs-fuse-3.5.2-1.el6.x86_64
glusterfs-rdma-3.5.2-1.el6.x86_64
glusterfs-3.5.2-1.el6.x86_64
glusterfs-server-3.5.2-1.el6.x86_64
glusterfs-libs-3.5.2-1.el6.x86_64
glusterfs-cli-3.5.2-1.el6.x86_64
glusterfs-api-3.5.2-1.el6.x86_64

Can someone help me fix this problem?

Thanks in advance and have a nice day,
Geoffrey

PS: Don't hesitate to tell me if you see anything wrong (or anything that could be done better) in my volume settings.
------------------------------------------------------
Geoffrey Letessier
Responsable informatique
UPR 9080 - CNRS - Laboratoire de Biochimie Théorique
Institut de Biologie Physico-Chimique
13, rue Pierre et Marie Curie - 75005 Paris
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users