How do we resolve the "cannot self-heal" problem?

Hello from the Philippines!

Apparently we're the first company here to use Gluster and Enomaly, and
we've hit the same well-documented race-condition lock-up. I read that
Gluster 3.2.2 does not suffer from this problem, so we upgraded.

Catch: We lost 1 of our 4 storage nodes a week before the upgrade.
Issue: It seems that bringing that node back online is interfering with
self-heal on a number of files (VM images, really).

How do we resolve the "cannot self-heal" problem?
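For reference, a sketch of the access-based heal trigger commonly used on 3.2.x (the volume name, mount path, and log path below are placeholders, not from our setup):

```shell
# Gluster 3.2.x has no self-heal daemon (that arrives in 3.3), so heal is
# normally triggered by accessing files through a client mount.
# "myvol" and /mnt/myvol are placeholder names.

# Confirm the recovered peer and all bricks are back:
gluster peer status
gluster volume info myvol

# Walk the client mount; stat()ing each file makes the replicate
# translator compare the copies and heal stale ones:
find /mnt/myvol -noleaf -print0 | xargs --null stat >/dev/null

# Files that still cannot self-heal are reported in the client log:
grep -i "self-heal" /var/log/glusterfs/mnt-myvol.log | tail
```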


Regards,
Andro Mauricio


