Re: Adding new disk/OSD to ceph cluster

Hi Mad,

On 09/04/16 at 14:39, Mad Th wrote:
We have a 3-node Proxmox/Ceph cluster ... each with 4 x 4 TB disks 

Are you using 3-way replication? I guess you are. :)
1) If we want to add more disks , what are the things that we need to be careful about? 


Will the following steps automatically add it to ceph.conf?
ceph-disk zap /dev/sd[X]
pveceph createosd /dev/sd[X] -journal_dev /dev/sd[Y]
where X is the new disk and Y is the journal device.
Yes, this is the same as adding it from the web GUI.
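As a concrete illustration, the whole procedure on one node might look like the sketch below. The device names /dev/sdX and /dev/sdY are placeholders from the question, not real paths; verify them with lsblk first, because ceph-disk zap is destructive.

```shell
# Sanity-check cluster health before making changes
ceph -s
ceph osd tree

# Identify the new disk; confirm the device name against your own layout
lsblk

# Wipe the new disk's partition table -- DESTRUCTIVE, double-check the device
ceph-disk zap /dev/sdX

# Create the OSD via the Proxmox wrapper, journal on a separate device
pveceph createosd /dev/sdX -journal_dev /dev/sdY

# Verify the new OSD came up, then watch the cluster rebalance
ceph osd tree
ceph -w
```

These commands need a running Proxmox/Ceph node; run them one at a time rather than as a script, checking the output of each step.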

2) Is it safe to run different numbers of OSDs in the cluster, say one server with 5 OSDs and the other two servers with 4 OSDs? Though we do plan to add one OSD to each server.

It is safe as long as none of your nodes' OSDs are near-full. If you're asking because you're adding a new OSD to each node one at a time: yes, that is safe.
Be prepared for data moving around when you add new disks (performance will suffer unless you have tuned some recovery parameters in ceph.conf).
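The tuning hinted at here usually means throttling backfill and recovery so client I/O keeps priority. A hedged example of [osd] settings for ceph.conf (option names are from the Hammer/Jewel era this thread dates from; the values are illustrative starting points, not recommendations):

```
[osd]
; limit concurrent backfills per OSD so rebalancing trickles
osd max backfills = 1
; limit simultaneous recovery operations per OSD
osd recovery max active = 1
; deprioritize recovery ops relative to client ops
osd recovery op priority = 1
osd client op priority = 63
```

Lower backfill/recovery limits make the rebalance take longer but keep the cluster responsive while it runs.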

3) How do we safely add the new OSD to an existing storage pool?
New OSDs will be used automatically by existing Ceph pools unless you have changed the CRUSH map.
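To confirm the new OSDs are actually taking data, a few read-only commands are enough (a sketch; ceph osd df is available from Hammer onward, adjust for your version):

```shell
# Show the CRUSH hierarchy, including the newly added OSDs under their hosts
ceph osd tree

# Per-OSD utilization: the new OSDs should gradually fill as PGs backfill
ceph osd df

# Overall cluster state; expect HEALTH_OK once rebalancing completes
ceph -s
```

None of these commands change cluster state, so they are safe to run at any time during the rebalance.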

Cheers
Eneko

-- 
Technical Director
Binovo IT Human Project, S.L.
Telf. 943493611
      943324914
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
www.binovo.es
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
