Re: Unable to stop volume because geo-replication

Hi Ping,

That's good to hear. Let us know if you face any further issues.
We are happy to help you.

Thanks and Regards,
Kotresh H R

----- Original Message -----
> From: "Chao-Ping Chien" <cchien@xxxxxxxxx>
> To: "Kotresh Hiremath Ravishankar" <khiremat@xxxxxxxxxx>
> Cc: gluster-users@xxxxxxxxxxx
> Sent: Wednesday, November 16, 2016 7:32:11 PM
> Subject: RE:  Unable to stop volume because geo-replication
> 
> Hi Kotresh,
> 
> Thank you very much for taking time to help.
> 
> I followed your instructions and restarted glusterd with the log level at
> DEBUG. I think the restart somehow fixed the state: geo-replication status
> now correctly reports all the volumes (unlike before, when I reported the
> problem and it only showed part of the configuration).
> 
> I was able to stop the geo-replication session, delete it, and eventually
> delete the volume.
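> For the archives, the sequence described above would look roughly like the
> following (the volume and slave names here are illustrative, not the actual
> ones from this setup):

```shell
# Stop and then delete the stale geo-replication session first
# (master volume "mule1" and slave "eqappsrvd02::mule1_slave" are examples)
gluster volume geo-replication mule1 eqappsrvd02::mule1_slave stop
gluster volume geo-replication mule1 eqappsrvd02::mule1_slave delete

# With no active sessions remaining, the volume can be stopped and deleted
gluster volume stop mule1
gluster volume delete mule1
```

> Note these commands require a running gluster cluster; they are shown here
> only to document the order of operations that resolved the problem.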
> 
> I can send you the log if you wish. I am not attaching it this time because
> the problem seems to have been caused by the environment being in an
> abnormal state, and the restart fixed it.
> 
> Thanks.
> 
> Ping.
> 
> -----Original Message-----
> From: Kotresh Hiremath Ravishankar [mailto:khiremat@xxxxxxxxxx]
> Sent: Tuesday, November 15, 2016 12:51 AM
> To: Chao-Ping Chien <cchien@xxxxxxxxx>
> Cc: gluster-users@xxxxxxxxxxx
> Subject: Re:  Unable to stop volume because geo-replication
> 
> Hi,
> 
> Could you please restart glusterd in DEBUG mode and share the glusterd logs?
> 
> * Start glusterd in DEBUG mode:
> 
>     # glusterd -LDEBUG
> 
> * Stop the volume:
> 
>     # gluster vol stop <volume-name>
> 
> Then share the glusterd logs.
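> The two steps above can be sketched as one shell session. This assumes a
> default RHEL 7 install where glusterd is managed by systemd and logs to the
> standard GlusterFS log directory; adjust paths for your environment:

```shell
# Stop the systemd-managed daemon, then restart it with debug logging
systemctl stop glusterd
glusterd -LDEBUG

# Reproduce the failure so it is captured in the debug log
gluster volume stop <volume-name>

# Collect the relevant portion of the glusterd log (default location)
tail -n 200 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
```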
> 
> Thanks and Regards,
> Kotresh H R
> 
> ----- Original Message -----
> > From: "Chao-Ping Chien" <cchien@xxxxxxxxx>
> > To: gluster-users@xxxxxxxxxxx
> > Sent: Monday, November 14, 2016 10:18:16 PM
> > Subject:  Unable to stop volume because geo-replication
> > 
> > 
> > 
> > Hi,
> > 
> > 
> > 
> > I hope someone can point me in the right direction.
> > 
> > 
> > 
> > I want to delete a volume, but I am not able to because glusterfs keeps
> > reporting that a geo-replication setup exists, which does not seem to be
> > the case when I issue the stop command.
> > 
> > 
> > 
> > On Red Hat 7.2, kernel 3.10.0-327.36.3.el7.x86_64:
> > 
> > [root@eqappsrvp01 mule1]# rpm -qa |grep gluster
> > 
> > glusterfs-3.7.14-1.el7.x86_64
> > 
> > glusterfs-fuse-3.7.14-1.el7.x86_64
> > 
> > glusterfs-server-3.7.14-1.el7.x86_64
> > 
> > glusterfs-libs-3.7.14-1.el7.x86_64
> > 
> > glusterfs-api-3.7.14-1.el7.x86_64
> > 
> > glusterfs-geo-replication-3.7.14-1.el7.x86_64
> > 
> > glusterfs-cli-3.7.14-1.el7.x86_64
> > 
> > glusterfs-client-xlators-3.7.14-1.el7.x86_64
> > 
> > 
> > 
> > ============================================================
> > 
> > [root@eqappsrvp01 mule1]# gluster volume stop mule1
> > Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
> > volume stop: mule1: failed: geo-replication sessions are active for the volume mule1.
> > Stop geo-replication sessions involved in this volume. Use 'volume
> > geo-replication status' command for more info.
> > 
> > [root@eqappsrvp01 mule1]# gluster volume geo-replication status
> > 
> > 
> > 
> > MASTER NODE    MASTER VOL     MASTER BRICK         SLAVE USER    SLAVE                             SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED
> > -----------------------------------------------------------------------------------------------------------------------------------------------------
> > eqappsrvp01    gitlab_data    /data/gitlab_data    root          ssh://eqappsrvd02::gitlab_data    N/A           Stopped    N/A             N/A
> > eqappsrvp02    gitlab_data    /data/gitlab_data    root          ssh://eqappsrvd02::gitlab_data    N/A           Stopped    N/A             N/A
> > 
> > [root@eqappsrvp01 mule1]# uname -a
> > Linux eqappsrvp01 3.10.0-327.36.3.el7.x86_64 #1 SMP Thu Oct 20 04:56:07 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
> > [root@eqappsrvp01 mule1]# cat /etc/redhat-release
> > Red Hat Enterprise Linux Server release 7.2 (Maipo)
> > =============================================================
> > 
> > 
> > 
> > Searching the internet, I found Red Hat Bugzilla bug 1342431, which seems
> > to address this problem. According to its status it should be fixed in
> > 3.7.12, but it still exists in my version, 3.7.14.
> > 
> > 
> > 
> > Thanks
> > 
> > 
> > 
> > Ping.
> > 
> > 
> > 
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users@xxxxxxxxxxx
> > http://www.gluster.org/mailman/listinfo/gluster-users
> 


