Re: ec heal questions

Does increasing any of the values below help speed up EC heal?

performance.io-thread-count 16
performance.high-prio-threads 16
performance.normal-prio-threads 16
performance.low-prio-threads 16
performance.least-prio-threads 1
client.event-threads 8
server.event-threads 8
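
In case it helps, a minimal sketch of how such options are applied (the
volume name "v0" below is only a placeholder for the real volume name):

    gluster volume set v0 performance.io-thread-count 16
    gluster volume set v0 client.event-threads 8
    gluster volume set v0 server.event-threads 8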


On Mon, Aug 8, 2016 at 2:48 PM, Ashish Pandey <aspandey@xxxxxxxxxx> wrote:
> Serkan,
>
> Heal for two different files can run in parallel, but not for different
> chunks of a single file.
> I think you are referring to your previous mail in which you had to remove
> one complete disk.
>
> In this case heal starts automatically, but it scans through each and every
> file/dir to decide whether it needs heal or not. This is undoubtedly a more
> time-consuming process than an index heal.
> If the data is 900GB then it might take a lot of time.
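>
> For reference, a rough sketch of the relevant commands (the volume name
> dist-disp-vol below is only a placeholder):
>
>     # index heal: only entries recorded in the heal indices are examined
>     gluster volume heal dist-disp-vol
>
>     # full heal: walks every file/dir in the volume, hence slower
>     gluster volume heal dist-disp-vol full
>
>     # check what still needs healing
>     gluster volume heal dist-disp-vol info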
>
> Which configuration to choose depends a lot on your storage requirements,
> hardware capability, and the probability of disk and network failures.
>
> For example: a smaller configuration like 4+2 could help you in this
> scenario. You can have a distributed dispersed volume with a 4+2 config.
> In this case each subvolume holds comparatively less data. If a brick fails
> in that subvolume, only that much data has to be healed, and it is
> reconstructed by reading from 4 bricks only.
>
> dist-disp-vol
>
> subvol-1    subvol-2    subvol-3
>   4+2         4+2         4+2
>   4GB         4GB         4GB
>   ^^^
> If a brick in subvol-1 fails, the heal stays local to that subvolume: only
> 4GB of data has to be healed, and it is reconstructed by reading from 4
> disks only.
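>
> As a sketch, such a distributed dispersed volume could be created like this
> (hostnames and brick paths are placeholders only; 6 servers, 3 bricks each):
>
>     gluster volume create dist-disp-vol disperse-data 4 redundancy 2 \
>         server{1..6}:/bricks/set1 \
>         server{1..6}:/bricks/set2 \
>         server{1..6}:/bricks/set3
>     gluster volume start dist-disp-vol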
>
> I am keeping Pranith in CC to take his input too.
>
> Ashish
>
>
> ________________________________
> From: "Serkan Çoban" <cobanserkan@xxxxxxxxx>
> To: "Ashish Pandey" <aspandey@xxxxxxxxxx>
> Cc: "Gluster Users" <gluster-users@xxxxxxxxxxx>
> Sent: Monday, August 8, 2016 4:47:02 PM
> Subject: Re:  ec heal questions
>
>
> Is reading the good copies to reconstruct the bad chunk a parallel or
> sequential operation?
> Should I revert my 16+4 EC cluster to 8+2, given that it takes nearly 7
> days to heal just one broken 8TB disk which holds only 800GB of data?
>
> On Mon, Aug 8, 2016 at 1:56 PM, Ashish Pandey <aspandey@xxxxxxxxxx> wrote:
>>
>> Hi,
>>
>> Considering all other factors the same for both configurations, yes, the
>> smaller configuration would take less time, since it has fewer good copies
>> to read.
>>
>> I think multi-threaded SHD is the only enhancement coming in the near
>> future.
>>
>> Ashish
>>
>> ________________________________
>> From: "Serkan Çoban" <cobanserkan@xxxxxxxxx>
>> To: "Gluster Users" <gluster-users@xxxxxxxxxxx>
>> Sent: Monday, August 8, 2016 4:02:22 PM
>> Subject:  ec heal questions
>>
>>
>> Hi,
>>
>> Assume we have 8+2 and 16+4 EC configurations and we just replaced a
>> broken disk in each configuration, each holding 100GB of data. In which
>> case does heal complete faster? Is heal speed related to the EC
>> configuration at all?
>>
>> Assume we are in a 16+4 EC configuration. When heal starts, it reads 16
>> chunks from the other bricks, recomputes the missing chunk, and writes it
>> to the just-replaced disk. Am I correct?
>>
>> If the above assumption is true, then smaller EC configurations heal
>> faster, right?
>>
>> Are there any improvements in 3.7.14+ that make EC heal faster? (Other
>> than multi-threaded SHD for EC.)
>>
>> Thanks,
>> Serkan
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users



