Re: poor OS reinstalls with 3.7

On Feb 12, 2016, at 8:38 AM, Kaushal M <kshlmster@xxxxxxxxx> wrote:
> 
> Gluster should actually do that, provided the peer is in the 'Peer in
> cluster' state.

Nope, it does not.  I did it twice, and it never repopulated.  One of the two times I did get it to repopulate all the peers, however.  The trick there was an obscure Google hit that said you have to restart glusterd over and over and over again to get it to make progress.  That’s stupid.  It should make _all_ the progress it will ever make with one restart.  The first time, I didn’t restart it over and over again, and it never recovered.

Hint: make peer probe repopulate all the data if it’s missing.  It doesn’t do that now.
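
For anyone else hitting this, here is roughly the dance that eventually worked for me.  The hostname is made up, and the service name may differ on your distro (glusterfs-server on Ubuntu, for instance):

    # On a healthy cluster member, re-probe the reinstalled node
    # ("node2" is a placeholder):
    gluster peer probe node2

    # On the reinstalled node, restart glusterd and check again.
    # Repeat until all peers show up as 'Peer in Cluster' -- one
    # restart was not enough for me:
    systemctl restart glusterd
    gluster peer status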

>> Why can’t the document say, just edit the state variable and just copy vols to get it going again?
> 
> Which document did you refer to? I'm not aware of a document that
> describes how to recover a peer after the loss of /var/lib/glusterd.

Google, the sum total knowledge of man, allowed me to recover.  When software is reliable, all the weird Google recovery instructions become mere things of the past.  Also, if the procedure is documented in the official documentation or the wiki, that information will be easier to find; when the software is less buggy, it can be removed.  I tried to find the link with those instructions for you, but alas, I suspect it would take a full day to re-find it.  :-(
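
For the archives, here is the gist of that procedure as best I can reconstruct it.  The paths are the standard glusterd layout; the UUID and hostnames are placeholders, so treat this as a sketch rather than gospel:

    # 1. On a healthy peer, note the UUID the lost node used to have:
    gluster peer status

    # 2. On the reinstalled node, stop glusterd and restore that UUID
    #    (this is the "state variable" I meant):
    systemctl stop glusterd
    #    ...then edit /var/lib/glusterd/glusterd.info so the UUID=
    #    line reads:  UUID=<old-uuid-from-step-1>

    # 3. Copy the volume definitions over from the healthy peer, then
    #    start glusterd again:
    rsync -a healthy-node:/var/lib/glusterd/vols/ /var/lib/glusterd/vols/
    systemctl start glusterd

    # 4. If peers are still missing, probe from the healthy side and
    #    restart once more, as above.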

The problem is that I can make the bricks very reliable.  I have a raidz3 disk set for a brick; I can lose 3 drives and never miss any data.  Since I can do OS reinstalls easily and quickly, I sometimes even change OSes (RHEL <-> Ubuntu <-> CentOS) while leaving the data alone.  The OS disk is more volatile and more unreliable (I can only lose 1 disk without losing data) than a brick.

So all state about a brick should be stored in the brick itself.  I can enumerate the bricks by hand for recovery if needed; they all follow a stylized name.  Indeed, I’d prefer to set that name into the cluster data and have gluster self-enumerate all the bricks with a filesystem scan when it detects that the data in /var/lib/glusterd is new.  It can then probe the cluster itself upon start, see whether those machines and the cluster are still alive and well, and if so, recreate any data it needs in the OS area.
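
Some of that state is already on the brick today: gluster stamps the brick root with a trusted.glusterfs.volume-id xattr.  So a self-enumeration pass could be as simple as scanning the stylized brick paths and reading that xattr back; /bricks/*/brick below is just my naming convention, not anything gluster mandates:

    # As root, read the volume ID gluster stored on each brick root.
    # Any brick that answers belongs to a known volume and could be
    # re-registered without consulting /var/lib/glusterd at all:
    for b in /bricks/*/brick; do
        getfattr -e hex -n trusted.glusterfs.volume-id "$b"
    done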

The point of HA is to _not_ have a single point of failure.  /var/lib/glusterd is exactly such a single point of failure in the current code.

> (We should probably put this down somewhere).

Yes, please do.

I looked around for a wiki and didn’t see anything.  If there were one, I would have put all the interesting and useful bits into it.

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users



