Re: Adding multiple OSDs to existing cluster

On 17 February 2016 at 14:59, Christian Balzer <chibi@xxxxxxx> wrote:
>
> Hello,
>
> On Wed, 17 Feb 2016 13:44:17 +0000 Ed Rowley wrote:
>
>> On 17 February 2016 at 12:04, Christian Balzer <chibi@xxxxxxx> wrote:
>> >
>> > Hello,
>> >
>> > On Wed, 17 Feb 2016 11:18:40 +0000 Ed Rowley wrote:
>> >
>> >> Hi,
>> >>
>> >> We have been running Ceph in production for a few months and are
>> >> looking at our first big expansion. We are going to be adding 8 new
>> >> OSDs across 3 hosts to our current cluster of 13 OSDs across 5 hosts.
>> >> We obviously want to minimize the disruption this is going to cause,
>> >> but we are unsure about the impact on the CRUSH map as we add each
>> >> OSD.
>> >>
>> >>
>> > So you are adding new hosts as well?
>> >
>>
>> Yes, we are adding 2 new hosts with 3 OSDs each, and adding two
>> drives/OSDs to an existing host.
>>
> Nods.
>
>> >> From the docs I can see that an OSD is added as 'in' and 'down' and
>> >> won't get objects until the OSD service has started. But what happens
>> >> to the CRUSH map while the OSD is 'down'? Is it recalculated? Are
>> >> objects misplaced and moved on the existing cluster?
>> >>
>> >>
>> > Yes, even more so when adding hosts (well, the first OSD on a new
>> > host).
>> >
>> > Find my "Storage node refurbishing, a "freeze" OSD feature would be
>> > nice" thread in the ML archives.
>> >
>> > Christian
>> >
>>
>> Thanks for the reference; the thread is useful.
>>
>> Am I right in assuming that adding an OSD with:
>>
>> [osd]
>> osd_crush_initial_weight = 0
>>
> Or by adding it with a weight of zero like this:
>
> ceph osd crush add <osdnumber> 0 host=<osdhost>
>

Thanks, we will give it a try.
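
For the archives, this is roughly the sequence we are planning to try,
based on the two options above. The OSD id (osd.13) and host name
(newhost01) below are placeholders for our own new disks, not anything
from Christian's mail:

    # In ceph.conf on the new hosts, before the OSDs are created, so
    # they join CRUSH with zero weight and attract no data:
    [osd]
    osd_crush_initial_weight = 0

    # Or, for an OSD created without that option, add it to CRUSH
    # explicitly with weight 0:
    ceph osd crush add osd.13 0 host=newhost01

    # Check that it shows up with weight 0 and that no backfill starts:
    ceph osd tree
    ceph -s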

>> will not change the existing CRUSH map?
>>
> Well, it will change it (ceph osd tree will show it), but no data
> movement will result from it, yes.
>
> Christian
>>
>> >> We think we would like to limit the rebuild of the CRUSH map. Is this
>> >> possible or beneficial?
>> >>
>> >> Thanks,
>> >>
>> >> Ed Rowley
>> >> _______________________________________________
>> >> ceph-users mailing list
>> >> ceph-users@xxxxxxxxxxxxxx
>> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >>
>> >
>> >
>> > --
>> > Christian Balzer        Network/Systems Engineer
>> > chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
>> > http://www.gol.com/
>>
>> Regards,
>>
>> Ed Rowley
>>
>
>
> --
> Christian Balzer        Network/Systems Engineer
> chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
> http://www.gol.com/
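
P.S. For anyone else reading this later: once the new OSDs are up, we plan
to raise their CRUSH weights in small steps rather than jumping straight
to the full value, waiting for recovery to settle in between. Again,
osd.13 and the weights below are placeholders; the final weight is, by
convention, the drive size in TiB:

    # Ramp the CRUSH weight up gradually, watching recovery between steps:
    ceph osd crush reweight osd.13 0.5
    ceph -s                               # wait for recovery to finish
    ceph osd crush reweight osd.13 1.0
    ceph -s
    ceph osd crush reweight osd.13 1.81   # e.g. final weight for a 2 TB drive
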
Regards,

Ed Rowley
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


