Updating the pg and pgp values

Thanks. How do I query the OSDMap on monitors? 

Using "ceph osd pool get data pg? ? or is there a way to get the full list of settings?

-jiten


On Sep 8, 2014, at 10:52 AM, Gregory Farnum <greg at inktank.com> wrote:

> It's stored in the OSDMap on the monitors.
> Software Engineer #42 @ http://inktank.com | http://ceph.com
> 
> 
> On Mon, Sep 8, 2014 at 10:50 AM, JIten Shah <jshah2005 at me.com> wrote:
>> So, if it doesn't refer to the entry in ceph.conf, where does it actually store the new value?
>> 
>> -Jiten
>> 
>> On Sep 8, 2014, at 10:31 AM, Gregory Farnum <greg at inktank.com> wrote:
>> 
>>> On Mon, Sep 8, 2014 at 10:08 AM, JIten Shah <jshah2005 at me.com> wrote:
>>>> While checking the health of the cluster, I ran into the following error:
>>>> 
>>>> warning: health HEALTH_WARN too few pgs per osd (1 < min 20)
>>>> 
>>>> When I checked the pg_num and pgp_num values, I saw they were still at the
>>>> default of 64:
>>>> 
>>>> ceph osd pool get data pg_num
>>>> pg_num: 64
>>>> ceph osd pool get data pgp_num
>>>> pgp_num: 64
>>>> 
>>>> Checking the Ceph documentation, I updated the numbers to 2000 using the
>>>> following commands:
>>>> 
>>>> ceph osd pool set data pg_num 2000
>>>> ceph osd pool set data pgp_num 2000
>>>> 
>>>> It started resizing the data, and I saw health warnings again:
>>>> 
>>>> health HEALTH_WARN 1 requests are blocked > 32 sec; pool data pg_num 2000 >
>>>> pgp_num 64
>>>> 
>>>> and then:
>>>> 
>>>> ceph health detail
>>>> HEALTH_WARN 6 requests are blocked > 32 sec; 3 osds have slow requests
>>>> 5 ops are blocked > 65.536 sec
>>>> 1 ops are blocked > 32.768 sec
>>>> 1 ops are blocked > 32.768 sec on osd.16
>>>> 1 ops are blocked > 65.536 sec on osd.77
>>>> 4 ops are blocked > 65.536 sec on osd.98
>>>> 3 osds have slow requests
>>>> 
>>>> This error also went away after a day.
>>>> 
>>>> ceph health detail
>>>> HEALTH_OK
>>>> 
>>>> 
>>>> Now, the question I have is: will this pg_num value remain in effect on the
>>>> cluster even if we restart the MONs or the OSDs on the individual disks? I
>>>> haven't changed the values in /etc/ceph/ceph.conf. Do I need to make that
>>>> change in ceph.conf and push it to all the MONs, MDSs and OSDs?
>>> 
>>> It's durable once the commands are successful on the monitors. You're all done.
>>> -Greg
>>> Software Engineer #42 @ http://inktank.com | http://ceph.com
