Re: Adding new disk/OSD to ceph cluster

Some additions:
#2: Ceph supports heterogeneous nodes.
#3: If you add an OSD by hand, I think you should set its `osd crush
reweight` to 0 first and then increase it step by step to match the disk
size. Also lower the priority and thread count of recovery and backfill,
like this:

osd_max_backfills = 1
osd_recovery_max_active = 1
osd_backfill_scan_min = 4
osd_backfill_scan_max = 32
osd_recovery_threads = 1
osd_recovery_op_priority = 1

This way the cluster still maintains good client performance while the
data recovery and backfill triggered by the new OSDs are running.
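
A rough sketch of that workflow (osd.12 and the weights below are
placeholders; use your own OSD id and a weight that matches your disk
size):

# Or set the options above in ceph.conf; they can also be injected at runtime:
ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
ceph osd crush reweight osd.12 0.0     # new OSD starts with no data mapped to it
ceph osd crush reweight osd.12 1.0     # raise the weight in steps...
ceph osd crush reweight osd.12 3.64    # ...until it matches the disk size (~3.64 for a 4 TB disk)
ceph osd tree                          # verify weights and placement
ceph -s                                # wait for HEALTH_OK between steps

Raising the CRUSH weight in a few steps limits how much data moves at
once; check `ceph -s` and `ceph osd tree` between steps and wait for the
cluster to settle before the next increase.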



2016-04-09 20:53 GMT+08:00  <ceph@xxxxxxxxxxxxxx>:
> Without knowing proxmox specific stuff ..
>
> #1: just create an OSD the regular way
> #2: it is safe; however, you may either create a SPOF
> (osd_crush_chooseleaf_type = 0) or underuse your cluster
> (osd_crush_chooseleaf_type = 1)
>
> On 09/04/2016 14:39, Mad Th wrote:
>> We have a 3-node Proxmox/Ceph cluster ... each node with 4 x 4 TB disks
>>
>> 1) If we want to add more disks, what do we need to be careful
>> about?
>>
>>
>> Will the following steps automatically add it to ceph.conf?
>> ceph-disk zap /dev/sd[X]
>> pveceph createosd /dev/sd[X] -journal_dev /dev/sd[Y]
>>
>> where X is new disk and Y is the journal disk.
>>
>> 2) Is it safe to run a different number of OSDs per server in the cluster,
>> say one server with 5 OSDs and the other two with 4 OSDs? We do plan to
>> add one OSD to each server, though.
>>
>>
>> 3) How do we safely add the new OSD to an existing storage pool?
>>
>>
>>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


