On 11/13/2013 06:32 PM, Jeff Darcy wrote:
> Saw this today:
> http://www.enterprisetech.com/2013/11/08/cluster-sizes-reveal-hadoop-maturity-curve/
> Sure, it's about Hadoop clusters, but that is one of our use cases. It
> looks like we might have to target 10K nodes instead of 1K for any
> changes to (or replacements of) glusterd's membership/heartbeat code.
Agreed. We need to aim for unlimited scale before reality catches up with us :).
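
To put the membership/heartbeat concern in numbers, here is a quick back-of-the-envelope sketch (Python, purely illustrative). It compares a full-mesh heartbeat, which is roughly how I understand glusterd behaves today, against a SWIM-style gossip scheme; the fanout value and the protocol choice are assumptions on my part, not a proposal:

# Heartbeat message load per round: full mesh vs. gossip.
# Protocol details are assumed here for illustration only.

def full_mesh_msgs_per_round(n):
    # Every node heartbeats every other node: O(n^2) total.
    return n * (n - 1)

def gossip_msgs_per_round(n, fanout=3):
    # Each node probes a fixed number of random peers: O(n).
    return n * fanout

for n in (1000, 10000):
    print("%6d nodes: full mesh = %12d msgs/round, gossip = %6d msgs/round"
          % (n, full_mesh_msgs_per_round(n), gossip_msgs_per_round(n)))

At 1K nodes a full mesh is about a million messages per round; at 10K it is about a hundred million, which is why this looks like a rethink rather than a tuning exercise.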
It makes me wonder what a typical deployment scenario would look like: would we have a single volume that spans around 10K nodes? If so, what scalability problems do we foresee? DHT's directory spread is at the top of my mind. Would the directory spread count option be good enough to address this?
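
To make the directory spread question concrete, here is a toy model (again Python, and again an assumption on my part rather than the actual DHT layout algorithm). The idea behind the spread count option, as I read it, is that a directory's layout is hashed onto a fixed-size subset of subvolumes instead of all of them, which caps the fan-out of per-directory operations on a 10K-brick volume:

import hashlib

def subvols_for_dir(path, subvols, spread_count):
    # Deterministically pick `spread_count` contiguous subvolumes
    # for a directory by hashing its path. Toy model only.
    h = int(hashlib.md5(path.encode()).hexdigest(), 16)
    start = h % len(subvols)
    return [subvols[(start + i) % len(subvols)]
            for i in range(spread_count)]

subvols = ["brick-%d" % i for i in range(10000)]
layout = subvols_for_dir("/data/logs/2013-11", subvols, spread_count=8)
print(layout)  # a readdir on this directory touches 8 bricks, not 10000

If the spread count keeps layouts this narrow, then directory creation, readdir, and layout self-heal would all scale with the spread count rather than with the volume size, which is why I suspect the option may be good enough.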
-Vijay