Sure
Start of upgrade: 15:36
Start of issue: 21:51
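A note for anyone correlating these times with the attached logs: they are presumably server-local (the message does not say). GNU date can restate such a timestamp in UTC, for example:

# date -u                        # current time in UTC
# date -u -d '2016-08-22 15:36'  # reinterpret a local timestamp in UTC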
On Mon, Aug 22, 2016 at 9:15 PM, Atin Mukherjee <amukherj@xxxxxxxxxx> wrote:

On Tue, Aug 23, 2016 at 4:17 AM, Steve Dainard <sdainard@xxxxxxxx> wrote:

About 5 hours after upgrading gluster 3.7.6 -> 3.7.13 on CentOS 7, one of my gluster servers disconnected its volume. The other two volumes this host serves were not affected.

# gluster volume status storage
Status of volume: storage
Gluster process                               TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.0.231.50:/mnt/raid6-storage/storage  49159     0          Y       30743
Brick 10.0.231.51:/mnt/raid6-storage/storage  49159     0          Y       676
Brick 10.0.231.52:/mnt/raid6-storage/storage  N/A       N/A        N       N/A
Brick 10.0.231.53:/mnt/raid6-storage/storage  49154     0          Y       10253
Brick 10.0.231.54:/mnt/raid6-storage/storage  49153     0          Y       2792
Brick 10.0.231.55:/mnt/raid6-storage/storage  49153     0          Y       13590
Brick 10.0.231.56:/mnt/raid6-storage/storage  49152     0          Y       9281
NFS Server on localhost                       2049      0          Y       30775
Quota Daemon on localhost                     N/A       N/A        Y       30781
NFS Server on 10.0.231.54                     2049      0          Y       2817
Quota Daemon on 10.0.231.54                   N/A       N/A        Y       2824
NFS Server on 10.0.231.51                     2049      0          Y       710
Quota Daemon on 10.0.231.51                   N/A       N/A        Y       719
NFS Server on 10.0.231.52                     2049      0          Y       9090
Quota Daemon on 10.0.231.52                   N/A       N/A        Y       9098
NFS Server on 10.0.231.55                     2049      0          Y       13611
Quota Daemon on 10.0.231.55                   N/A       N/A        Y       13619
NFS Server on 10.0.231.56                     2049      0          Y       9303
Quota Daemon on 10.0.231.56                   N/A       N/A        Y       9310
NFS Server on 10.0.231.53                     2049      0          Y       26304
Quota Daemon on 10.0.231.53                   N/A       N/A        Y       26320

Task Status of Volume storage
------------------------------------------------------------------------------
There are no active volume tasks

I see lots of entries in the brick logs related to the trashcan (failed [file exists]), setting xattrs (failed [no such file or directory]), and quota (invalid arguments). The trash feature is one I enabled after the upgrade this morning.

Could you let us know the time (in UTC) around which this issue was seen, so that we can look at the logs around that time and see if something went wrong.

After restarting glusterd on that host, the volume came back online (a sketch of the commands follows below). I've attached logs from that host if someone can take a look.

# gluster volume info storage
Volume Name: storage
Type: Distribute
Volume ID: 6f95525a-94d7-4174-bac4-e1a18fe010a2
Status: Started
Number of Bricks: 7
Transport-type: tcp
Bricks:
Brick1: 10.0.231.50:/mnt/raid6-storage/storage
Brick2: 10.0.231.51:/mnt/raid6-storage/storage
Brick3: 10.0.231.52:/mnt/raid6-storage/storage
Brick4: 10.0.231.53:/mnt/raid6-storage/storage
Brick5: 10.0.231.54:/mnt/raid6-storage/storage
Brick6: 10.0.231.55:/mnt/raid6-storage/storage
Brick7: 10.0.231.56:/mnt/raid6-storage/storage
Options Reconfigured:
nfs.disable: no
features.trash-max-filesize: 1GB
features.trash: on
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on

# rpm -qa | grep glusterfs
glusterfs-fuse-3.7.13-1.el7.x86_64
glusterfs-cli-3.7.13-1.el7.x86_64
glusterfs-3.7.13-1.el7.x86_64
glusterfs-server-3.7.13-1.el7.x86_64
glusterfs-api-3.7.13-1.el7.x86_64
glusterfs-libs-3.7.13-1.el7.x86_64
glusterfs-client-xlators-3.7.13-1.el7.x86_64

Thanks,
Steve
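For reference, the restart and the brick-log check described above would look roughly like this on CentOS 7. This is a minimal sketch, assuming the stock glusterd systemd unit from glusterfs-server and gluster's usual brick-log naming (the brick path with slashes replaced by dashes); the exact commands are not in the original report.

# systemctl restart glusterd      # glusterd unit ships with glusterfs-server on EL7
# gluster volume status storage   # the 10.0.231.52 brick should report Online "Y" again
# grep -iE 'trash|quota|xattr' /var/log/glusterfs/bricks/mnt-raid6-storage-storage.log | tail -20   # assumed log name, derived from the brick path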
--
Atin
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users