Hi All,

Thanks for the responses. I am mainly curious about the performance impact on read/write workloads associated with metadata updates as the number of nodes increases. Any commentary on the performance impact of the various read/write and random/sequential IO scenarios as the scale increases? We are not particularly worried about the restart/reboot condition, as that is an edge case for us.

Thanks,
Mayur

From: Atin Mukherjee [mailto:amukherj@xxxxxxxxxx]
On Tue, 31 Oct 2017 at 03:32, Mayur Dewaikar <mdewaikar@xxxxxxxxxxxxx> wrote:
The current design of GlusterD is not capable of handling too many nodes in the cluster, especially on node restart/reboot. We have heard of deployments with ~100-150 nodes where things are stable, but in the node reboot scenario some special tweaking of parameters like network.listen-backlog is required to ensure the TCP connection backlog doesn't overflow, which would cause the connections between the bricks and glusterd to fail. The GlusterD2 project will definitely address this aspect of the problem. Also, since the directory layout is replicated on all the bricks of a volume, mkdir, unlink, and other directory operations are costly, and with a larger number of bricks this increases latency. We're also working on a project called RIO to address this issue.
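For reference, a minimal sketch of that tweak, assuming network.listen-backlog is settable with "gluster volume set" on your version (the backlog value 1024 and the volume name "myvol" are illustrative placeholders, not recommendations):

    # Raise the kernel accept-queue cap first; a listen backlog larger
    # than net.core.somaxconn is silently truncated by the kernel.
    sysctl -w net.core.somaxconn=1024

    # Then raise the GlusterFS listen backlog for the volume
    # ("myvol" is a placeholder; pick a value suited to your node count).
    gluster volume set myvol network.listen-backlog 1024

Raising somaxconn first matters because listen(2) backlogs are capped by it, so the volume option alone may have no effect.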
--
- Atin (atinm)