Re: advice needed on configuring large gluster cluster

> Thanks Serkan, I appreciate the feedback.  Do you know if there are any
> associated performance issues with odd values of parity disks? e.g any
> reason to not go with 16+3?, also are there any things to watch for in terms
> of the number of distributed disperse sets?
AFAIK there is no performance penalty tied to the value of n in an m+n
configuration; n only determines how many bricks/nodes can be lost before
the volume becomes unavailable. The number of distributed disperse sets
depends on your workload and capacity needs. EC is not recommended for
small files and random I/O, so as I said before, test with your production
workload.
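Just to put numbers on the parity choice (plain arithmetic, assuming identical
8TB data bricks): a 16+3 set exposes 16/19 ~= 84% of its raw capacity as usable
space and survives any 3 brick failures within the set, while 16+4 exposes
16/20 = 80% and survives 4 failures, so the parity count is a straight
capacity/durability trade-off.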


>
> -Alastair
>
> On 15 March 2017 at 13:32, Serkan Çoban <cobanserkan@xxxxxxxxx> wrote:
>>
>> Please find my comments inline.
>>
>> > Hi
>> >
>> > we have a new gluster cluster we are planning on deploying.  We will
>> > have 24
>> > nodes, each with a JBOD of 39 x 8TB drives plus 6 x 900GB SSDs, and FDR IB
>> >
>> > We will not be using all of this as one volume , but I thought initially
>> > of
>> > using a distributed disperse volume.
>> >
>> > Never having attempted anything on this scale I have a couple of
>> > questions
>> > regarding EC and distributed disperse volumes.
>> >
>> > Does a distributed dispersed volume have to start life as distributed
>> > dispersed, or can I  take a disperse volume and make it distributed by
>> > adding bricks?
>> Yes, you can start with a single disperse subvolume and add more subvolumes
>> later. But plan carefully: if you start with an m+n EC configuration, every
>> expansion has to be another complete m+n subvolume, i.e. bricks are added in
>> multiples of m+n.
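A minimal sketch of what that looks like on the CLI (the volume name, node
names and brick paths below are made up; one brick per node, 8+2):

    # create a single 8+2 disperse subvolume across 10 nodes
    gluster volume create bigvol disperse-data 8 redundancy 2 \
        node{01..10}:/bricks/b1/brick

    # expand later by adding another complete 8+2 set (10 more bricks)
    gluster volume add-brick bigvol node{01..10}:/bricks/b2/brick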
>> >
>> > Does an EC scheme of 24+4 seem reasonable?  One requirement we will have
>> > is
>> > the need to tolerate two nodes down at once, as the nodes share a
>> > chassis.
>> > I assume that  distributed disperse volumes can be expanded in a similar
>> > fashion to distributed replicate volumes by adding additional disperse
>> > brick
>> > sets?
>> In an m+n configuration it is recommended that m be a power of two, so
>> 16+4 or 8+2 rather than 24+4. A higher m makes healing slower, but the
>> parallel self-heal for EC volumes in 3.9+ will help. An 8+2 configuration
>> with one brick from every node will tolerate the loss of two nodes.
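The parallel heal in 3.9+ is controlled by the EC self-heal-daemon thread
count; a sketch (the value is only an example, tune it against your workload):

    gluster volume set bigvol disperse.shd-max-threads 4
    gluster volume heal bigvol info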
>>
>> >
>> > I would also like to consider adding a hot-tier using the SSDs,  I
>> > confess I
>> > have not done much reading on tiering, but am hoping I can use a
>> > different
>> > volume type for the hot tier.  Can I create a disperse, or a
>> > distributed
>> > replicated?   If I am smoking rainbows then I can consider setting up a
>> > SSD
>> > only distributed disperse volume.
>> EC performance is quite good for our workload; I did not try any tier in
>> front of it. Test your workload without a tier first, and if that works,
>> KISS.
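For reference only (I have not run this myself), attaching a replicated hot
tier of SSD bricks with the 3.7+ tiering feature looks roughly like the
following; the replica count and SSD brick paths are placeholders:

    gluster volume tier bigvol attach replica 2 \
        node{01..10}:/bricks/ssd1/brick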
>> >
>> > I'd also appreciate any feedback on likely performance issues and tuning
>> > tips?
>> You can find kernel performance tuning tips here:
>>
>> https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Linux%20Kernel%20Tuning/
>> You may also tune client.event-threads, server.event-threads and the
>> heal-related parameters, but do not forget to benchmark your workload both
>> before and after changing those values.
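The event-thread options are plain volume settings; a sketch (the values here
are only starting points, not recommendations):

    gluster volume set bigvol client.event-threads 4
    gluster volume set bigvol server.event-threads 4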
>> >
>> > Many Thanks
>> >
>> > -Alastair
>> >
>> >
>> >
>
>
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users



