Re: disperse volume brick count limits in RHES

So the bottleneck is that computations with a 16x20 matrix require ~4 times the cycles? It seems, then, that there is ample room for improvement, as there are many linear algebra packages out there that scale better than O(n×m). Is the healing time dominated by the EC compute time? If Serkan saw a hard 2x scaling, then that seems likely.
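
For reference, a rough sketch of where that ~4x could come from, assuming
encode/decode cost scales with the number of coefficients in the k x (k+m)
matrix (my own back-of-envelope model, not a claim about Gluster's actual
EC implementation):

    # Hypothetical cost model: work ~ number of matrix coefficients,
    # i.e. k * (k + m) for a k+m disperse configuration.
    def matrix_cost(k, m):
        return k * (k + m)

    print(matrix_cost(16, 4) / matrix_cost(8, 2))  # 320 / 80 = 4.0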

-Alastair




On 8 May 2017 at 03:02, Xavier Hernandez <xhernandez@xxxxxxxxxx> wrote:
On 05/05/17 13:49, Pranith Kumar Karampuri wrote:


On Fri, May 5, 2017 at 2:38 PM, Serkan Çoban <cobanserkan@xxxxxxxxx> wrote:

    It is the overall time; an 8TB data disk healed 2x faster in the 8+2
    configuration.


Wow, that is counterintuitive to me. I will need to explore this to find out
why that could be. Thanks a lot for this feedback!

Matrix multiplication for encoding/decoding in an 8+2 is 4 times faster than in a 16+4 (one 16x16 matrix is composed of 4 submatrices of 8x8); however, each matrix operation on a 16+4 configuration processes twice the amount of data of an 8+2, so the net effect is that 8+2 is twice as fast as 16+4.
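
A back-of-envelope sketch of that argument in Python (my own illustration,
assuming decoding applies a k x k matrix column by column, i.e. roughly
k * k Galois-field multiply-accumulates per byte column; not Gluster's
actual code):

    # Per-byte compute cost of reconstructing a stripe with a k x k
    # decode matrix.
    def ops_per_user_byte(k):
        ops_per_column = k * k       # k x k matrix times a k-byte column
        user_bytes_per_column = k    # each column carries k data bytes
        return ops_per_column / user_bytes_per_column  # simplifies to k

    print(ops_per_user_byte(8))   # 8.0  -> 8+2
    print(ops_per_user_byte(16))  # 16.0 -> 16+4 does 2x the work per byte

The 4x bigger matrix covers only 2x the data, so the per-byte cost doubles.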

An 8+2 also uses bigger blocks on each brick, processing the same amount of data in fewer I/O operations and with bigger network packets.
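
To illustrate the I/O effect (hypothetical numbers; the real block size
depends on the heal window Gluster uses):

    # For a fixed amount of file data per heal operation, each data brick
    # gets a fragment of window / k bytes, so fewer data bricks means
    # bigger per-brick blocks and fewer I/O operations overall.
    window = 1024 * 1024  # assume a 1 MiB heal window
    for k in (8, 16):
        print(k, window // k, "bytes per brick fragment")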

These are probably the reasons why 16+4 is slower than 8+2.

See my other email for a more detailed description.

Xavi




    On Fri, May 5, 2017 at 10:00 AM, Pranith Kumar Karampuri
    <pkarampu@xxxxxxxxxx> wrote:
    >
    >
    > On Fri, May 5, 2017 at 11:42 AM, Serkan Çoban
    > <cobanserkan@xxxxxxxxx> wrote:
    >>
    >> Healing gets slower as you increase m in an m+n configuration.
    >> We are using a 16+4 configuration without any problems other than
    >> heal speed.
    >> I tested heal speed with 8+2 and 16+4 on 3.9.0 and saw that heals
    >> on 8+2 are 2x faster.
    >
    >
    > As you increase the number of nodes participating in an EC set, the
    > number of parallel heals increases. Is the heal speed you saw improved
    > per file, or the overall time it took to heal the data?
    >
    >>
    >>
    >>
    >> On Fri, May 5, 2017 at 9:04 AM, Ashish Pandey
    >> <aspandey@xxxxxxxxxx> wrote:
    >> >
    >> > 8+2 and 8+3 configurations are not hard limits, just suggestions.
    >> > You can create a 16+3 volume without any issue.
    >> >
    >> > Ashish
    >> >
    >> > ________________________________
    >> > From: "Alastair Neil" <ajneil.tech@xxxxxxxxx
    <mailto:ajneil.tech@xxxxxxxxx>>
    >> > To: "gluster-users" <gluster-users@xxxxxxxxxxx
    <mailto:gluster-users@gluster.org>>
    >> > Sent: Friday, May 5, 2017 2:23:32 AM
    >> > Subject: disperse volume brick count limits in RHES
    >> >
    >> >
    >> > Hi
    >> >
    >> > We are deploying a large (24-node/45-brick) cluster and noted
    >> > that the RHES guidelines limit the number of data bricks in a
    >> > disperse set to 8. Is there any reason for this? I am aware that
    >> > you want this to be a power of 2, but as we have a large number
    >> > of nodes we were planning on going with 16+3. Dropping to 8+2 or
    >> > 8+3 would be a real waste for us.
    >> >
    >> > Thanks,
    >> >
    >> >
    >> > Alastair
    >> >
    >> >
    >
    >
    >
    >
    > --
    > Pranith




--
Pranith



_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users
