Yes, I stopped the glusterfs service on the damaged system, but zfs still won't let me unmount the filesystem. Maybe I should try shutting down the entire system.

On Wed, Jan 9, 2013 at 10:28 AM, Daniel Taylor <dtaylor at vocalabs.com> wrote:
>
> On 01/09/2013 08:31 AM, Liang Ma wrote:
>>
>> Hi Daniel,
>>
>> Ok, if gluster can't self-heal from this situation, I hope at least I can
>> manually restore the volume using the good brick that is still available.
>> So would you please tell me how I can "simply rebuild the filesystem and
>> let gluster attempt to restore it from a *clean* filesystem"?
>>
> Trimmed for space.
>
> You could do as Tom Pfaff suggests, but given the odds of data corruption
> carrying forward I'd do the following:
> Shut down gluster on the damaged system.
> Unmount the damaged filesystem.
> Reformat the damaged filesystem as new (throwing away any potential
> corruption that might not get caught on rebuild).
> Mount the new filesystem at the original mount point.
> Restart gluster.
>
> In the event of corruption due to hardware failure you'd be doing this on
> replacement hardware.
> The key is that you have to have a functional filesystem for gluster to
> work with.
>
> --
> Daniel Taylor  VP Operations  Vocal Laboratories, Inc
> dtaylor at vocalabs.com  612-235-5711
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
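
For what it's worth, the rebuild steps Daniel lists might look roughly like this on the damaged node. This is only a hedged sketch: the dataset name "tank/brick1", the brick path /export/brick1, and the volume name "myvol" are all hypothetical stand-ins for your actual layout, and the exact service/heal commands vary by distribution and gluster version. It also shows one way to chase down whatever is keeping the zfs mount busy.

```shell
# Hypothetical names: dataset tank/brick1, brick path /export/brick1,
# volume myvol. Adjust to your environment; run as root.

# 1. Stop gluster on the damaged system so nothing holds the brick open.
service glusterd stop
pkill glusterfsd          # brick daemons can outlive the service stop

# 2. See what still has the mount busy, then unmount it.
fuser -vm /export/brick1  # or: lsof /export/brick1
zfs umount tank/brick1

# 3. "Reformat as new": destroy and recreate the dataset, discarding
#    any latent corruption that a rebuild might not catch.
zfs destroy tank/brick1
zfs create -o mountpoint=/export/brick1 tank/brick1

# 4./5. The new dataset mounts at the original mount point; restart
#       gluster and let it restore from the clean filesystem.
service glusterd start
gluster volume heal myvol full
```

If the unmount still fails after stopping gluster, shutting the node down (as you suggest) and doing the destroy/create from a fresh boot is a reasonable fallback, since nothing will have the mount busy at that point.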