Thanks, Ian. Fred - the method Ian describes, using self-heal, is a good way of doing it while maintaining gluster semantics and not playing around directly with the backend. In a future release, with dynamic volume management, one would be able to do such things with simple commands while having the data online all the while.

Regards,
Tejas.

----- Original Message -----
From: "Ian Rogers" <ian.rogers at contactclean.com>
To: gluster-users at gluster.org
Sent: Wednesday, April 14, 2010 8:43:19 PM
Subject: Re: Maintenance mode for bricks

On 14/04/2010 13:20, Fred Stober wrote:
> On Wednesday 14 April 2010, Tejas N. Bhise wrote:
>> Fred,
>>
>> Would you like to tell us more about the use case? Like why would you
>> want to do this? If we take a brick out, it would not be possible to
>> get it back in (with the existing data).
>
> OK, here is our use case:
> We have a small test system running on 3 file servers. cluster/distribute
> is used to give a flat view of the file servers. Now we have the problem
> that one file server is going to be replaced with a larger one. Therefore
> we want to put the old file server into read-only mode to rsync the files
> to the new server. Unfortunately this will take ~2 days. During this time
> it would be nice to keep the glusterfs in read/write mode.
>
> If I understand it correctly, I should be able to use "lookup-unhashed"
> to reintegrate the new file server into the existing file system when we
> switch off the old server.
>
> Cheers,
> Fred

Could you use gluster to put the new server and the old one into a
cluster/replicate pair so it looks just like one server to the
cluster/distribute above it? Then do rsync, or let gluster copy everything
across with a "self heal". When the new one is up to date, just disable
the old one and remove the cluster/replicate.

--
www.ContactClean.com
Making changing email address as easy as clicking a mouse.
Helping you keep in touch.
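[For reference, Ian's suggestion could be sketched in a client volfile along these lines. This is only an illustration: the hostnames (oldserver, newserver), brick names, and the other distribute subvolumes (server2, server3) are placeholders, not taken from the thread.]

```
# Old and new file servers, wrapped in a replicate pair so they
# appear as a single subvolume to the distribute layer above.
volume old-server
  type protocol/client
  option transport-type tcp
  option remote-host oldserver        # placeholder hostname
  option remote-subvolume brick1
end-volume

volume new-server
  type protocol/client
  option transport-type tcp
  option remote-host newserver        # placeholder hostname
  option remote-subvolume brick1
end-volume

volume migrate-pair
  type cluster/replicate
  subvolumes old-server new-server
end-volume

volume dist
  type cluster/distribute
  option lookup-unhashed yes          # look up files not at their hashed location
  subvolumes server2 server3 migrate-pair
end-volume
```

[Self-heal would then populate new-server as files are accessed; once it is in sync, the replicate wrapper can be removed and new-server used directly.]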
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users