[Gluster-devel] Gluster 3.3 / Stripe+Replicate / Healing+Locking on VMs

Sending to gluster-users 

-JM 

----- Original Message -----

> Hi,

> I was under the impression that in Gluster 3.3 the self-heal process
> would only lock the parts of a file it is actively healing, rather
> than the whole file?

> I have a 3.3 setup running here and earlier rebooted one of the
> storage nodes. Thanks to replication, the volume holding some 20 VMs
> (about 400 GB) kept running quite happily. However, when Gluster
> restarted and kicked off its self-heal, it queued and LOCKED all 20
> VM images, unlocking each image only as its heal finished, over a
> period of several hours (!)
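
For reference, heal progress on a 3.3 volume can be watched from any server node with the heal CLI (assuming the volume is named enc, matching the volfile below):

    # files still pending self-heal
    gluster volume heal enc info
    # entries the self-heal daemon failed to heal
    gluster volume heal enc info heal-failed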

> I'm using the 3.3.0 release from the semiosis PPA on Ubuntu 12.04.

> Is there a trick to making this work properly, or is there a fix due
> out that will correct this behaviour?

> tia
> Gareth.
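
One option sometimes tuned for VM-image workloads on 3.3 is the data self-heal algorithm: switching from a full-file copy to a rolling-checksum diff reduces how much of each image a heal has to rewrite. Whether it also shortens the locking described above is not confirmed here, so treat this as a sketch (volume name enc assumed):

    gluster volume set enc cluster.data-self-heal-algorithm diff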

> volume enc-client-0
> type protocol/client
> option remote-host 10.1.0.1
> option remote-subvolume /srv/enc
> option transport-type tcp
> option username ***
> option password ***
> end-volume

> volume enc-client-1
> type protocol/client
> option remote-host 10.2.0.4
> option remote-subvolume /srv/enc
> option transport-type tcp
> option username ***
> option password ***
> end-volume

> volume enc-client-2
> type protocol/client
> option remote-host 10.2.0.3
> option remote-subvolume /srv/enc
> option transport-type tcp
> option username ***
> option password ***
> end-volume

> volume enc-client-3
> type protocol/client
> option remote-host 10.1.0.2
> option remote-subvolume /srv/enc
> option transport-type tcp
> option username ***
> option password ***
> end-volume

> volume enc-replicate-0
> type cluster/replicate
> option background-self-heal-count 0
> option metadata-self-heal on
> option data-self-heal on
> option entry-self-heal on
> option self-heal-daemon on
> option iam-self-heal-daemon yes
> subvolumes enc-client-0 enc-client-1
> end-volume

> volume enc-replicate-1
> type cluster/replicate
> option background-self-heal-count 0
> option metadata-self-heal on
> option data-self-heal on
> option entry-self-heal on
> option self-heal-daemon on
> option iam-self-heal-daemon yes
> subvolumes enc-client-2 enc-client-3
> end-volume
> #

> volume glustershd
> type debug/io-stats
> subvolumes enc-replicate-0 enc-replicate-1
> end-volume
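
For context, the volfile above is the self-heal daemon's graph (protocol/client bricks wrapped in cluster/replicate, topped by io-stats), so the stripe layer from the subject line does not appear in it. In the regular client graph the two replica pairs would normally be aggregated by a stripe (or distribute) translator, roughly like this (a sketch, not taken from the actual setup):

    volume enc-stripe-0
        type cluster/stripe
        subvolumes enc-replicate-0 enc-replicate-1
    end-volume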

> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> https://lists.nongnu.org/mailman/listinfo/gluster-devel