Hi Andreas,
Before rebooting, I tried some performance tuning in order to prevent timeout errors. As we have sufficient RAM and CPU power, I increased transport.listen-backlog on the Gluster side, and the SYN backlog and maximum connection limits in the kernel. So, I expected that it wouldn't cause a problem. Also, the NFS clients have the volume mounted but are not actively using it, and all the nodes are on the same network.
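For context, the kernel knobs usually meant by "SYN backlog" and "max connections" here are tcp_max_syn_backlog and somaxconn. A minimal sketch of what such a tuning fragment could look like (the file name and values below are illustrative assumptions, not the settings actually applied on these nodes):

```
# /etc/sysctl.d/90-gluster.conf  -- hypothetical file name and values
# Larger queues help when many peers and clients reconnect at once.
net.ipv4.tcp_max_syn_backlog = 4096   # half-open (SYN) connection backlog
net.core.somaxconn = 4096             # accept-queue cap for listening sockets
```

A fragment like this would be applied with `sysctl --system`; the Gluster-side transport.listen-backlog option is set separately, per volume, via `gluster volume set`.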
My assumption was that there might be some slowness at the beginning, which would resolve itself automatically.
Is it still a bad idea to have 72 nodes and start them all at once?
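One way to avoid starting all 72 at once is a staggered restart, which limits how many peers re-handshake simultaneously. A minimal sketch under stated assumptions: the hostnames and batch size are placeholders, and the echo line stands in for the real ssh/systemctl restart (with Ansible itself, limiting forks with `-f` or using `serial` in a playbook achieves the same batching):

```shell
#!/bin/sh
# Sketch: restart glusterd in small batches rather than on all nodes at once.
# HOSTS and BATCH are placeholders; in practice the host list would come
# from the Ansible inventory.
HOSTS="node01 node02 node03 node04 node05 node06"
BATCH=2
i=0
for h in $HOSTS; do
  echo "restarting glusterd on $h"   # stand-in for: ssh "$h" systemctl restart glusterd
  i=$((i + 1))
  if [ $((i % BATCH)) -eq 0 ]; then
    echo "batch complete; letting peers settle"
    sleep 1                          # in practice, poll 'gluster peer status' here
  fi
done
```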
Regards,
Jeevan.
On Sun, Nov 25, 2018, 6:00 PM Andreas Davour <ante@xxxxxxxxxxxx> wrote:
72 nodes!!???
Can the common wisdom come to the rescue here? Does this even work? Won't
the translator overhead make so many nodes scale terribly?
Are people building clusters that big, and getting any performance at
all?
/andreas
On Sun, 25 Nov 2018, Jeevan Patnaik wrote:
> Hi,
>
> I have different issues:
>
> I restarted the glusterd service on my 72 nodes almost in parallel with
> Ansible, while the Gluster NFS clients were still in a mounted state.
>
> After that, many of the Gluster peers went to the Rejected state. In the
> logs, I see message ID 106010 stating that the checksum doesn't match.
>
> I'm confused about which checksum that is, and how it changed after the restart.
>
> I restarted because the gluster volume status command gives a timeout. I
> have tiering enabled on the volume and was trying to detach the tier, but
> that never completed either: the status shows only "in progress", even
> though the tiered volume contains just a few hundred 8 MB files I created
> for testing.
>
> My overall experience with Gluster tiering has been really bad :(
>
> Besides, what's the best way to restore the old state if something goes
> wrong? Until now, I have not used a volfile at all; I only use the gluster
> volume commands to configure my cluster. Do I need to use a volfile in
> order to restore something?
>
> The Gluster version is 3.12.15.
> I have checked the op-version on all nodes, and they are all the same.
>
>
> Regards
> Jeevan.
>
--
"economics is a pseudoscience; the astrology of our time"
Kim Stanley Robinson
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users