On Thursday 03 May 2012 11:52:13 you wrote:
> 2012/5/2 Arnold Krille <arnold at arnoldarts.de>
> > As far as I understand it (and for the current stable 3.2), there are
> > two ways, depending on what you want:
> > When you want a third copy of the data, it seems you can't simply
> > increase the replication level and add a brick. Instead you have to
> > stop usage, delete the volume (without deleting the underlying bricks,
> > of course) and then rebuild the volume with the new number of replicas
> > and bricks. Then self-heal should do the trick and copy the data onto
> > the third machine.
> I can't stop a production volume used by many customers.

Yes you can. There are scheduled maintenance windows. Unless your SLA
promises 100% availability (in which case you are f***ed, as every outage
will make you miss your SLA). And the outage I outlined should be
something like 30 s when done by a (tested) script, 1-2 min when done by
hand. It might even be possible to keep the volume mounted (as glusterfs)
on the clients throughout, though I haven't tested this.

> I think that the best way should start with the correct number of
> replication nodes even if one of these nodes is not present.

Good luck with that. While you can do that with software RAID and with
DRBD, you can't do it with gluster. When you create a volume with a
replica count, it only accepts a number of bricks that is a multiple of
that replica count.

I am told that 3.3 will be able to change the replication level while the
volume is up. And I think I was even promised dynamic replication of the
kind "here are three bricks, give me two replicas of everything".

> In this way, the volume is created properly, and when needed I have to
> just add the new machine and trigger the self-healing.

Either wait for 3.3 or use the distributed-replicated volume approach I
outlined earlier.

Have fun,

Arnold
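
PS: In case it helps, the recreate step I mean is roughly this. Treat it
as a sketch, untested as written here; "myvol" and the server:/export/...
brick paths are placeholders for your own names:

  # stop the volume and delete its definition; the data on the bricks stays
  gluster volume stop myvol
  gluster volume delete myvol

  # recreate it with replica 3: the two old bricks plus the new one
  gluster volume create myvol replica 3 \
      server1:/export/brick1 server2:/export/brick1 server3:/export/brick1
  gluster volume start myvol

  # on 3.2, trigger self-heal by stat'ing everything from a client mount
  find /mnt/myvol -noleaf -print0 | xargs --null stat > /dev/null

For 3.3 the online way is supposed to be something like
"gluster volume add-brick myvol replica 3 server3:/export/brick1", but I
haven't tried that yet.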