Re: Decrease the pgs number in cluster

The main problem is not pg_num but some other issue with your network or your Ceph services, AFAIK.
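Stale PGs usually mean the monitors have stopped hearing from the OSDs that serve those PGs, so a first step is to find out which OSDs are involved. A minimal sketch (the OSD id 3 is only a placeholder, and the restart command assumes systemd):

    # List the stuck PGs and the overall health problems
    ceph health detail
    ceph pg dump_stuck stale

    # Check that every OSD is up and in
    ceph osd stat

    # Restart a down/unresponsive OSD on its host (id 3 is a placeholder)
    systemctl restart ceph-osd@3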

Can you paste the output of ceph -s and ceph osd tree, and your ceph.conf?
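For reference, those can be collected like this (the config path assumes a default installation):

    ceph -s                   # overall cluster status and PG state summary
    ceph osd tree             # OSD/host layout with up/down and in/out state
    cat /etc/ceph/ceph.conf   # cluster configuration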

2016-05-23 11:52 GMT+08:00 Albert Archer <albertarcher94@xxxxxxxxx>:
So, there is no solution at all?

On Sun, May 22, 2016 at 7:01 PM, Albert Archer <albertarcher94@xxxxxxxxx> wrote:
Hello All.

Determining the number of PGs and PGPs is a very hard job (at least for newbies like me).
The problem is that once pg_num and pgp_num are set when creating a pool, there seems to be no way to decrease them for that pool.
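That observation matches how Ceph worked at the time: before the Nautilus release introduced PG merging, pg_num could only ever be increased. The classic workaround was to recreate the pool with fewer PGs and copy the data across, roughly like this (the pool name rbd_new and the target of 512 PGs are only examples, and rados cppool does not preserve snapshots, so test on scratch data first):

    # On pre-Nautilus releases, decreasing pg_num is rejected with an
    # error like "specified pg_num <= current":
    #   ceph osd pool set rbd pg_num 512

    # Workaround: new pool with the desired PG count, then copy and swap
    ceph osd pool create rbd_new 512 512
    rados cppool rbd rbd_new
    ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
    ceph osd pool rename rbd_new rbd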

I configured 9 OSD hosts (virtual machines on VMware ESXi), all of them up and in, with approximately 1700 PGs across the rbd pool and another pool.

But the cluster now reports:

~1536 stale+active+clean
~250  active+clean

So, how can I remove some PGs, or get back to the ~1700 active+clean state?

What is the problem?
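To pin down what the problem actually is, it can help to query one of the stale PGs directly and see which OSDs it maps to and which ones stopped reporting (the PG id 0.1f below is only an example):

    # Map an example PG to its acting OSDs
    ceph pg map 0.1f

    # Detailed state of that PG; this will hang or error out
    # if the OSDs that serve it are really down
    ceph pg 0.1f query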

Regards
Albert




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
