Re: Multi petabyte gluster

Did you test healing by increasing disperse.shd-max-threads?
What are your heal times per brick now?
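For reference, a minimal example of raising it on a volume (the volume
name and the value 8 are just placeholders; if I remember right the
default is 1, and the option controls how many parallel heals the
self-heal daemon runs per local brick, so how far you can push it
depends on CPU and disk headroom during heals):

    # check the current setting
    gluster volume get <volname> disperse.shd-max-threads
    # raise the number of parallel heals
    gluster volume set <volname> disperse.shd-max-threads 8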

On Fri, Jun 30, 2017 at 8:01 PM, Alastair Neil <ajneil.tech@xxxxxxxxx> wrote:
> We are using 3.10 and have a 7 PB cluster.  We decided against 16+3 because
> rebuild times are bottlenecked by matrix operations, which scale as the
> square of the number of data stripes.  There are some savings from the
> larger data chunks, but we ended up using 8+3, and heal times are about
> half of what we saw with 16+3.
>
> -Alastair
>
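A rough back-of-the-envelope check of that, assuming decode cost per
stripe grows as the square of the data-stripe count k while the data
carried per stripe grows linearly with k:

    16+3: cost ~ 16^2 = 256 per stripe, over 16 data chunks -> 256/16 = 16 per unit of data
     8+3: cost ~  8^2 =  64 per stripe, over  8 data chunks ->  64/8  =  8 per unit of data

So 8+3 should heal roughly twice as fast as 16+3 whenever the matrix
math is the bottleneck, which lines up with the "about half" heal times
reported above.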
> On 30 June 2017 at 02:22, Serkan Çoban <cobanserkan@xxxxxxxxx> wrote:
>>
>> >Thanks for the reply. We will mainly use this for archival - near-cold
>> > storage.
>> Archival usage is good for EC
>>
>> >Anything, from your experience, to keep in mind while planning large
>> > installations?
>> I am using 3.7.11 and the only problem is slow rebuild time when a disk
>> fails: it takes 8 days to heal an 8TB disk. (This might be related to
>> my EC configuration, 16+4.)
>> 3.9+ versions have some improvements in this area, but I cannot test
>> them yet...
>>
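To put that in perspective (treating 8TB as decimal terabytes):

    8 TB / 8 days = 8e12 bytes / 691,200 s ~ 11.6 MB/s of reconstruction throughput

That is far below what a single HDD can write sequentially, which
suggests the heal path (EC decode plus network), not the replacement
disk, is the limiting factor there.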
>> On Thu, Jun 29, 2017 at 2:49 PM, jkiebzak <jkiebzak@xxxxxxxxx> wrote:
>> > Thanks for the reply. We will mainly use this for archival - near-cold
>> > storage.
>> >
>> >
>> > Anything, from your experience, to keep in mind while planning large
>> > installations?
>> >
>> >
>> > Sent from my Verizon, Samsung Galaxy smartphone
>> >
>> > -------- Original message --------
>> > From: Serkan Çoban <cobanserkan@xxxxxxxxx>
>> > Date: 6/29/17 4:39 AM (GMT-05:00)
>> > To: Jason Kiebzak <jkiebzak@xxxxxxxxx>
>> > Cc: Gluster Users <gluster-users@xxxxxxxxxxx>
>> > Subject: Re:  Multi petabyte gluster
>> >
>> > I am currently using a 10PB single volume without problems, and 40PB is
>> > on the way. EC is working fine.
>> > You need to plan ahead with large installations like this. Run complete
>> > workload tests and make sure your use case is suitable for EC.
>> >
>> >
>> > On Wed, Jun 28, 2017 at 11:18 PM, Jason Kiebzak <jkiebzak@xxxxxxxxx>
>> > wrote:
>> >> Has anyone scaled to a multi-petabyte gluster setup? How well does
>> >> erasure coding do with such a large setup?
>> >>
>> >> Thanks
>> >>
>
>
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users



