Re: Switch to replica 3

Hello,

On Mon, 20 Nov 2017 11:56:31 +0100 Matteo Dacrema wrote:

> Hi,
> 
> I need to switch a cluster of over 200 OSDs from replica 2 to replica 3
I presume this means the existing cluster and not adding 100 OSDs...
 
> There are two different crush maps for HDDs and SSDs, also mapped to two different pools.
>
> Is there a best practice to use? Can this provoke troubles?
> 
Are your SSDs a cache-tier or are they a fully separate pool?

As for troubles, how busy is your cluster during the recovery of failed
OSDs or deep scrubs?

There are 2 things to consider here:

1. The re-balancing and additional replication of all the data, which you
can control/ease with the various knobs present (see the example commands
below). Which knobs are relevant/useful depends on your Ceph version.
It shouldn't impact things too much, unless your cluster was at the very
edge of its capacity anyway.

2. The little detail that after 1) is done, your cluster will be
noticeably slower than before, especially in the latency department.
In short, you don't just need the disk space to go 3x, but also
enough IOPS/bandwidth reserves (rough numbers below).
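
For 1), a rough sketch of the kind of commands involved, purely as an
example (the pool names below are placeholders, and the exact knob names
and sensible values depend on your Ceph release and hardware):

  # Throttle backfill/recovery first so the data movement stays gentle
  ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1'

  # Then bump the replica count per pool; min_size 2 is the usual
  # companion to size 3
  ceph osd pool set hdd-pool size 3
  ceph osd pool set hdd-pool min_size 2
  ceph osd pool set ssd-pool size 3
  ceph osd pool set ssd-pool min_size 2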
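
To put rough numbers on 2): with replica 2 every client write becomes 2
backend writes, with replica 3 it becomes 3. If, purely as an assumed
example, your HDD OSDs together sustain ~20,000 backend write IOPS, your
client write capacity drops from roughly 10,000 to roughly 6,600 IOPS,
about a third less, and every write now waits for one more replica ack,
which shows up directly as added latency.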

Christian

> Thank you
> Matteo


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Rakuten Communications
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


