Re: 3.7.13 two node ssd solid rock


 



Ahh, I'm giving up on running Gluster in production... Good thing I have replication and can do hybrid networking with my DR site, but there's an app that is network-sensitive, so I have to bump up my bandwidth; hopefully I'm not going to pay extra for it. By the way, my two-node HP SAN also died during the blackout but never had corruption.


On Wednesday, August 3, 2016 6:18 PM, Leno Vo <lenovolastname@xxxxxxxxx> wrote:


I had to reboot each node, with an interval of 5-8 minutes between them, to get it working. After that it became stable, but a lot of shards still didn't heal, though there was no split-brain. Some VMs lost their .vmx files, so I created new VMs and pointed them at the storage to get them working again, wewwwww!!!

Sharding is still faulty; I wouldn't recommend it yet. Going back to running without it.


On Wednesday, August 3, 2016 4:34 PM, Leno Vo <lenovolastname@xxxxxxxxx> wrote:


My mistake, the corruption happened after 6 hours. Some VMs had shards that wouldn't heal, but there was no split-brain...


On Wednesday, August 3, 2016 11:13 AM, Leno Vo <lenovolastname@xxxxxxxxx> wrote:


One of my Gluster 3.7.13 setups runs on only two nodes, each with three 1TB Samsung SSD Pros in RAID 5. It has already crashed twice because of brownouts and blackouts, and it has production VMs on it, about 1.3TB.

It never got split-brain, and it healed quickly. Can we say 3.7.13 on two nodes with SSDs is rock solid, or was I just lucky?

My other Gluster 3.7.13 setup is on 3 nodes, but one node never came up (an old HP ProLiant server that wants to retire). It uses RAID 5 with a mix of SSDs and SSHDs (lol, laptop Seagate drives). About 586 entries never healed, but there's no split-brain there either, and the VMs are intact, working fine and fast.
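For anyone following along, the pending-heal counts and split-brain status mentioned above can be checked with the standard Gluster heal commands. This is just a sketch; the volume name `gv0` is a placeholder, and these need to run on a node of a live cluster:

```shell
#!/bin/sh
# List entries still pending heal on each brick of volume gv0
gluster volume heal gv0 info

# List only files that are actually in split-brain (should be empty here)
gluster volume heal gv0 info split-brain

# Trigger a full self-heal crawl if entries stay stuck
gluster volume heal gv0 full
```

Counting the lines of `heal info` output over time is a quick way to see whether the pending-entry count (like the 586 above) is actually shrinking.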

Ahh, and never turn on caching on the array, or ESX might not come up right away. You need to go into setup first to get it working, restart, then go into the array setup (HP array, F8) and turn off caching. Then ESX finally boots up.






_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
