glusterfs: after stopping glusterfs we can't start it

  Vale,

Were you running commands from the CLI on multiple machines 
simultaneously?
Could you attach the glusterd logs from all the machines in the 
cluster?
Depending on your mode of installation, the log will be either
  /usr/local/var/log/glusterfs/usr-local-etc-glusterfs-glusterd.vol.log
or
  /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
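Since the log path differs between source and package installs, a small helper can save some guessing. This is just a sketch; the `find_glusterd_log` function name and its optional prefix argument (handy for pointing it at a mounted root or a test directory) are my own invention, not part of any Gluster tooling:

```shell
#!/bin/sh
# find_glusterd_log [PREFIX]: print the first glusterd log that exists.
# Source installs typically log under /usr/local/var/log/glusterfs/,
# package installs under /var/log/glusterfs/. PREFIX (hypothetical
# convenience parameter) is prepended to both candidate paths.
find_glusterd_log() {
    prefix="${1:-}"
    for log in \
        "$prefix/usr/local/var/log/glusterfs/usr-local-etc-glusterfs-glusterd.vol.log" \
        "$prefix/var/log/glusterfs/etc-glusterfs-glusterd.vol.log"; do
        if [ -f "$log" ]; then
            echo "$log"
            return 0
        fi
    done
    return 1
}

# Example: show the end of whichever log exists on this host.
log=$(find_glusterd_log) && tail -n 100 "$log"
```

Running this on each peer (e.g. via ssh in a loop over the hostnames) would gather the logs kp is asking for.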

thanks,
kp

On 11/03/2011 05:58 PM, M. Vale wrote:
> Hi, we are using Gluster in distributed-replicated mode; we have the following conf:
>
> Volume Name: volume01
> Type: Distributed-Replicate
> Status: Started
> Number of Bricks: 4 x 2 = 8
> Transport-type: tcp
> Bricks:
> Brick1: gluster01:/mnt
> Brick2: gluster02:/mnt
> Brick3: gluster03:/mnt
> Brick4: gluster04:/mnt
> Brick5: gluster05:/mnt
> Brick6: gluster06:/mnt
> Brick7: gluster51:/mnt
> Brick8: gluster52:/mnt
> Options Reconfigured:
> cluster.data-self-heal-algorithm: full
> performance.io-thread-count: 64
> diagnostics.brick-log-level: INFO
>
>
> Then we did:
>
> gluster volume stop volume01
>
> It took several minutes. After that, running gluster volume info gives:
>
>
> Volume Name: volume01
> Type: Distributed-Replicate
> Status: Stopped
> Number of Bricks: 4 x 2 = 8
> Transport-type: tcp
> Bricks:
> Brick1: gluster01:/mnt
> Brick2: gluster02:/mnt
> Brick3: gluster03:/mnt
> Brick4: gluster04:/mnt
> Brick5: gluster05:/mnt
> Brick6: gluster06:/mnt
> Brick7: gluster51:/mnt
> Brick8: gluster52:/mnt
> Options Reconfigured:
> cluster.data-self-heal-algorithm: full
> performance.io-thread-count: 64
> diagnostics.brick-log-level: INFO
>
>
> But now if I run gluster volume start volume01, it gives the following error:
>
> operation failed
>
> If I run gluster volume reset, the same thing happens:
>
> gluster volume reset volume01
> operation failed
>
> And if I try to stop again:
>
> gluster volume stop volume01
> Stopping volume will make its data inaccessible. Do you want to 
> continue? (y/n) y
> operation failed
>
>
> This occurs using Gluster 3.2 on CentOS 6.0.
>
>
> Where do I start looking so I can start the volume again?
>
> Thanks
> MV
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


