Re: Adding multiple OSDs to existing cluster

Hello,

On Wed, 17 Feb 2016 11:18:40 +0000 Ed Rowley wrote:

> Hi,
> 
> We have been running Ceph in production for a few months and are
> looking at our first big expansion. We are going to be adding 8 new
> OSDs across 3 hosts to our current cluster of 13 OSDs across 5 hosts.
> We obviously want to minimize the amount of disruption this is going
> to cause, but we are unsure about the impact on the CRUSH map as we
> add each OSD.
> 
So you are adding new hosts as well?

> From the docs I can see that an OSD is added as 'in' and 'down' and
> won't get objects until the OSD service has started. But what happens
> to the crushmap while the OSD is 'down'? Is it recalculated? Are
> objects misplaced and moved on the existing cluster?
> 
Yes, the CRUSH map changes as soon as each OSD is added, and even more
so when adding hosts (well, when the first OSD on a new host appears).

Find my "Storage node refurbishing, a "freeze" OSD feature would be
nice" thread in the ML archives.
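A common way to limit the resulting data movement (a sketch only; test
on your cluster first, and note that exact flag behaviour varies by
Ceph release) is to hold off rebalancing until all new OSDs are in
place, so the cluster recovers in one pass instead of once per OSD:

```shell
# Tell the cluster not to start moving data yet.
ceph osd set norebalance
ceph osd set nobackfill

# ... add all new OSDs and hosts here, e.g. via ceph-deploy or
# your usual provisioning method ...

# Once everything is up, let recovery run in a single pass.
ceph osd unset nobackfill
ceph osd unset norebalance
```

Alternatively, setting "osd crush initial weight = 0" in ceph.conf
makes new OSDs join with no data, after which you can ramp them up
gradually with "ceph osd crush reweight osd.N <weight>".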

Christian

> We think we would like to limit the rebuilding of the CRUSH map. Is
> this possible or beneficial?
> 
> Thanks,
> 
> Ed Rowley


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com