For what it's worth, after I added a hot tier to the pool, the volume now reports the correct combined size of all bricks instead of the size of just one brick.
Not sure if that gives you any clues for this... maybe adding another brick to the pool would have a similar effect?
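For reference, a rough sketch of the two workarounds mentioned above, assuming a replica-2 layout like gv0 and placeholder brick paths (exact syntax may differ between Gluster versions, so double-check against the docs for yours):

    # attach a hot tier (tier brick paths here are placeholders)
    gluster volume tier gv0 attach replica 2 \
        pod-sjc1-gluster1:/data/hot/gv0 pod-sjc1-gluster2:/data/hot/gv0

    # or add another replicated brick pair and rebalance (placeholder paths)
    gluster volume add-brick gv0 replica 2 \
        pod-sjc1-gluster1:/data/brick4/gv0 pod-sjc1-gluster2:/data/brick4/gv0
    gluster volume rebalance gv0 start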
On Thu, Dec 21, 2017 at 11:44 AM, Tom Fite <tomfite@xxxxxxxxx> wrote:
Sure!

> 1 - output of gluster volume heal <volname> info

Brick pod-sjc1-gluster1:/data/brick1/gv0
Status: Connected
Number of entries: 0

Brick pod-sjc1-gluster2:/data/brick1/gv0
Status: Connected
Number of entries: 0

Brick pod-sjc1-gluster1:/data/brick2/gv0
Status: Connected
Number of entries: 0

Brick pod-sjc1-gluster2:/data/brick2/gv0
Status: Connected
Number of entries: 0

Brick pod-sjc1-gluster1:/data/brick3/gv0
Status: Connected
Number of entries: 0

Brick pod-sjc1-gluster2:/data/brick3/gv0
Status: Connected
Number of entries: 0

> 2 - /var/log/glusterfs - provide log file with mountpoint-volumename.log

Attached.
> 3 - output of gluster volume <volname> info

[root@pod-sjc1-gluster2 ~]# gluster volume info

Volume Name: gv0
Type: Distributed-Replicate
Volume ID: d490a9ec-f9c8-4f10-a7f3-e1b6d3ced196
Status: Started
Snapshot Count: 13
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: pod-sjc1-gluster1:/data/brick1/gv0
Brick2: pod-sjc1-gluster2:/data/brick1/gv0
Brick3: pod-sjc1-gluster1:/data/brick2/gv0
Brick4: pod-sjc1-gluster2:/data/brick2/gv0
Brick5: pod-sjc1-gluster1:/data/brick3/gv0
Brick6: pod-sjc1-gluster2:/data/brick3/gv0
Options Reconfigured:
performance.cache-refresh-timeout: 60
performance.stat-prefetch: on
server.allow-insecure: on
performance.flush-behind: on
performance.rda-cache-limit: 32MB
network.tcp-window-size: 1048576
performance.nfs.io-threads: on
performance.write-behind-window-size: 4MB
performance.nfs.write-behind-window-size: 512MB
performance.io-cache: on
performance.quick-read: on
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
performance.cache-invalidation: on
performance.md-cache-timeout: 600
network.inode-lru-limit: 90000
performance.cache-size: 4GB
server.event-threads: 16
client.event-threads: 16
features.barrier: disable
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
cluster.lookup-optimize: on
server.outstanding-rpc-limit: 1024
auto-delete: enable

> 4 - output of gluster volume <volname> status

[root@pod-sjc1-gluster2 ~]# gluster volume status gv0
Status of volume: gv0
Gluster process                                    TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------------
Brick pod-sjc1-gluster1:/data/brick1/gv0           49152     0          Y       3198
Brick pod-sjc1-gluster2:/data/brick1/gv0           49152     0          Y       4018
Brick pod-sjc1-gluster1:/data/brick2/gv0           49153     0          Y       3205
Brick pod-sjc1-gluster2:/data/brick2/gv0           49153     0          Y       4029
Brick pod-sjc1-gluster1:/data/brick3/gv0           49154     0          Y       3213
Brick pod-sjc1-gluster2:/data/brick3/gv0           49154     0          Y       4036
Self-heal Daemon on localhost                      N/A       N/A        Y       17869
Self-heal Daemon on pod-sjc1-gluster1.exavault.com N/A       N/A        Y       3183

Task Status of Volume gv0
------------------------------------------------------------------------------------
There are no active volume tasks

> 5 - Also, could you try unmounting the volume and mounting it again and check the size?

I have done this a few times but it doesn't seem to help.

On Thu, Dec 21, 2017 at 11:18 AM, Ashish Pandey <aspandey@xxxxxxxxxx> wrote:
Could you please provide the following -
1 - output of gluster volume heal <volname> info
2 - /var/log/glusterfs - provide log file with mountpoint-volumename.log
3 - output of gluster volume <volname> info
4 - output of gluster volume <volname> status
5 - Also, could you try unmounting the volume and mounting it again and check the size?
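For item 5, a minimal example on the client side (the mount point /mnt/gv0 and the server name are placeholders, adjust to your setup):

    umount /mnt/gv0
    mount -t glusterfs pod-sjc1-gluster1:/gv0 /mnt/gv0
    df -h /mnt/gv0

Comparing that with df -h of the brick filesystems on the servers can also help show whether the mismatch is on the client side or on the bricks.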
From: "Teknologeek Teknologeek" <teknologeek06@xxxxxxxxx>
To: gluster-users@xxxxxxxxxxx
Sent: Wednesday, December 20, 2017 2:54:40 AM
Subject: Wrong volume size with df

I have a glusterfs setup with distributed disperse volumes 5 * (4 + 2).

After a server crash, "gluster peer status" reports all peers as connected.

"gluster volume status detail" shows that all bricks are up and running with the right size, but when I use df from a client mount point, the size displayed is about 1/6 of the total size.

When browsing the data, they seem to be ok though.

I need some help to understand what's going on, as I can't delete the volume and recreate it from scratch.
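As a rough sanity check on the expected figure: in a disperse subvolume of 4 + 2, only the 4 data bricks count as usable space, so with equal-sized bricks the volume should report approximately

    expected size = 5 subvolumes x 4 data bricks x brick size

For example, with hypothetical 10 TB bricks that would be 5 x 4 x 10 TB = 200 TB usable out of 300 TB raw.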
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users