Re: Expanding ceph cluster by adding more OSDs

There used to be; I can't find it right now.  Something like 'ceph osd pool set <pool> pg_num <num>' and then 'ceph osd pool set <pool> pgp_num <num>' to actually move your data into the new PGs.  I successfully did it several months ago, when Bobtail was current.
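
A rough sketch of what that looks like, assuming a pool named "data" and a target of 1024 PGs (both values are made up for this example; size the target from the (100 * OSDs) / Replicas rule of thumb, commonly rounded up to a power of two):

  # e.g. 30 OSDs * 100 / 3 replicas = 1000, rounded up to 1024
  ceph osd pool set data pg_num 1024
  # then raise pgp_num to the same value so the data actually rebalances into the new PGs
  ceph osd pool set data pgp_num 1024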

Sent from my iPad

> On Oct 9, 2013, at 10:30 PM, Guang <yguang11@xxxxxxxxx> wrote:
> 
> Thanks Mike.
> 
> Is there any documentation for that?
> 
> Thanks,
> Guang
> 
>> On Oct 9, 2013, at 9:58 PM, Mike Lowe wrote:
>> 
>> You can add PGs; the process is called splitting.  I don't think PG merging (reducing the number of PGs) is ready yet.
>> 
>>> On Oct 8, 2013, at 11:58 PM, Guang <yguang11@xxxxxxxxx> wrote:
>>> 
>>> Hi ceph-users,
>>> Ceph recommends that the number of PGs in a pool be (100 * OSDs) / Replicas. Per my understanding, the number of PGs for a pool stays fixed even as we scale the cluster out / in by adding / removing OSDs. Does that mean that if we double the number of OSDs, the PG count for a pool is no longer optimal and there is no way to correct it?
>>> 
>>> 
>>> Thanks,
>>> Guang
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



