Re: Slow healing times on large cinder and nova volumes

Could you attach the log files, please?
You said the bricks were replaced. In the brick-replacement case, index-based self-heal doesn't work, so a full self-heal needs to be triggered with "gluster volume heal <volname> full". Could you confirm whether that command was issued?
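
For reference, the sequence on one of the servers would look something like this ("cinder" is just a placeholder for your volume name):

    # Trigger a full crawl of the bricks. After a brick replacement the
    # heal index doesn't cover files that existed before the failure, so
    # index-based (incremental) heal alone can't repair them.
    gluster volume heal cinder full

    # Then check which entries are still pending heal:
    gluster volume heal cinder info

The self-heal daemon logs to /var/log/glusterfs/glustershd.log on each server; those would be the most useful logs to attach.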

Pranith
----- Original Message -----
> From: "Larry Schmid" <lschmid@xxxxxx>
> To: gluster-users@xxxxxxxxxxx
> Sent: Tuesday, April 22, 2014 4:07:39 AM
> Subject:  Slow healing times on large cinder and nova volumes
> 
> Hi guys,
> 
> x-posted from irc.
> 
> We're having an issue in our prod OpenStack environment, which is backed by
> gluster using two replicas (I know. I wasn't given a choice.)
> 
> We lost storage on one of the replica servers and so had to replace the
> failed bricks. The heal operation on the Cinder and Nova volumes is coming
> up on the two-week mark, and it seems as if it will never catch up and
> finish.
> 
> Nova heal info shows a constantly fluctuating list with multiple heals on
> many of the files, as if it's trying to keep up with deltas. It's at 860GB
> of 1.1TB.
> 
> Cinder doesn't really seem to progress. It's at about 1.9T out of 6T
> utilized, though the sparse files total about 30T. It has also done
> multiple heals on some of the files.
> 
> I seem to be down to just watching it spin. Any help or tips?
> 
> Thanks,
> 
> Larry Schmid | Principal Cloud Engineer
> IO
> M +1.602.316.8639 | O +1.602.273.5431
> E lschmid@xxxxxx | io.com
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@xxxxxxxxxxx
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users