Re: moving new hardware into the cluster

Hi Martin,

Thanks for your reply!

Yes, I am using "osd recovery op priority", "osd max backfills", "osd recovery max active" and "osd client op priority" to try to minimize the impact of the cluster expansion.
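For reference, this is roughly how I apply them at runtime (the values are just examples of what I am experimenting with, not a recommendation):

    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
    ceph tell osd.* injectargs '--osd-recovery-op-priority 1 --osd-client-op-priority 63'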

My Ceph version is 10.2.7 (Jewel), and I am moving one OSD at a time, waiting for recovery to finish before going on to the next one, because in the past moving an entire storage node at once caused slow requests and blocked PGs.
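Roughly, the one-OSD-at-a-time procedure I follow looks like this (osd.42 and the weight steps are just placeholders to illustrate the idea):

    # add the new OSD with zero CRUSH weight so nothing moves right away
    ceph osd crush reweight osd.42 0.0
    # then raise the weight in small steps, waiting for recovery in between
    ceph osd crush reweight osd.42 0.2
    ceph -s    # wait for backfill/recovery to finish before the next step
    ceph osd crush reweight osd.42 0.5
    # repeat until the OSD reaches its target weight (e.g. the drive size in TiB)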

I am asking this question early to try to understand the experience of other Ceph admins with this scenario.

Thanks and best regards,
Fabio Abreu
 

On Wed, Jan 30, 2019 at 5:37 PM Martin Verges <martin.verges@xxxxxxxx> wrote:
Hello Fabio,

you can use the "osd recovery sleep" option to prevent trouble while recovery/rebalancing happens. Other than that, options like "osd recovery op priority", "osd max backfills", "osd recovery max active", "osd client op priority" and others might help you, depending on your cluster version, configuration and hardware.
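For example, something along these lines can be injected at runtime while the new hardware is being integrated (the sleep value is only an illustration; tune it for your cluster):

    # pause briefly between recovery ops to leave room for client I/O
    ceph tell osd.* injectargs '--osd-recovery-sleep 0.1'
    # optionally hold back rebalancing entirely while the OSDs are being added
    ceph osd set norebalance
    # and re-enable it once you are ready to let data move
    ceph osd unset norebalance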
As we believe a good Ceph solution should make your life easier, we have built an option slider into the maintenance view of our software. You can see it in the attached screenshot. Maybe you want to give it a try!

Please feel free to contact us if you need assistance with your cluster.

--
Martin Verges
Managing director

Mobile: +49 174 9335695
E-Mail: martin.verges@xxxxxxxx
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263

Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx


On Wed, 30 Jan 2019 at 12:38, Fabio Abreu <fabioabreureis@xxxxxxxxx> wrote:
Hi everybody,

I have a question about moving a new SATA storage node (new hardware as well) into a production rack with a huge amount of data.

I think this move creates new PGs and can reduce my performance if I do it wrong, and we don't have a lot of experience with moving new hardware into the cluster.

Can someone recommend what I should review before the new hardware move?

If I move OSDs into the cluster, are there additional precautions I should take in this scenario?

Regards,

Fabio Abreu Reis
http://fajlinux.com.br
Tel : +55 21 98244-0161
Skype : fabioabreureis


--
Best regards,
Fabio Abreu Reis
http://fajlinux.com.br
Tel : +55 21 98244-0161
Skype : fabioabreureis
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
