Updating the pg and pgp values


 



While checking the health of the cluster, I ran into the following warning:

warning: health HEALTH_WARN too few pgs per osd (1 < min 20)

When I checked the pg and pgp numbers, I saw they were still at the default value of 64:

ceph osd pool get data pg_num
pg_num: 64
ceph osd pool get data pgp_num
pgp_num: 64
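
For reference, here is a rough sketch of how a target can be worked out from the usual rule of thumb (about 100 PGs per OSD, divided by the replica count, then rounded up to a power of two). The OSD count and replica size below are assumptions; substitute your cluster's own numbers:

```shell
osds=60        # assumed number of OSDs in the cluster
replicas=3     # assumed pool replica size (ceph osd pool get data size)

# rule of thumb: ~100 PGs per OSD, divided across the replicas
target=$(( osds * 100 / replicas ))

# round up to the next power of two, as the docs suggest
pg=1
while [ "$pg" -lt "$target" ]; do
    pg=$(( pg * 2 ))
done
echo "$pg"
```

With these assumed values the target comes out near the 2000 I ended up using.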

After checking the Ceph documentation, I updated the numbers to 2000 using the following commands:

ceph osd pool set data pg_num 2000
ceph osd pool set data pgp_num 2000

The cluster started resizing the data, and I saw health warnings again:

health HEALTH_WARN 1 requests are blocked > 32 sec; pool data pg_num 2000 > pgp_num 64

and then:

ceph health detail
HEALTH_WARN 6 requests are blocked > 32 sec; 3 osds have slow requests
5 ops are blocked > 65.536 sec
1 ops are blocked > 32.768 sec
1 ops are blocked > 32.768 sec on osd.16
1 ops are blocked > 65.536 sec on osd.77
4 ops are blocked > 65.536 sec on osd.98
3 osds have slow requests
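
In hindsight, raising the numbers in smaller steps and letting the cluster settle between each one would probably have softened those blocked requests. A sketch of what I would try next time (the step sizes are my own guess, and this only prints the commands rather than running them against the cluster):

```shell
# Hypothetical gradual increase; remove the echo to run for real.
steps="128 256 512 1024 2000"
for n in $steps; do
    echo "ceph osd pool set data pg_num $n"
    echo "ceph osd pool set data pgp_num $n"
    last=$n
done
```

Between steps, I would watch `ceph -s` until the status returns to HEALTH_OK before continuing.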

These warnings also went away after a day:

ceph health detail
HEALTH_OK


Now, the question I have is: will this pg number remain effective on the cluster even if we restart the MONs or the OSDs on the individual disks? I haven't changed the values in /etc/ceph/ceph.conf. Do I need to make a change to ceph.conf and push that change to all the MONs, MDSs, and OSDs?


Thanks.

- Jiten



