Re: Full OSD questions

On Sun, Sep 22, 2013 at 5:25 AM, Gaylord Holder <gholder@xxxxxxxxxxxxx> wrote:
>
>
> On 09/22/2013 02:12 AM, yy-nm wrote:
>>
>> On 2013/9/10 6:38, Gaylord Holder wrote:
>>>
>>> Indeed, that pool was created with the default 8 pg_nums.
>>>
>>> 8 pg_num * 2 TB/OSD / 2 replicas ~ 8 TB, which is about how far I got.
>>>
>>> I bumped up the pg_num to 600 for that pool and nothing happened.
>>> I bumped up the pgp_num to 600 for that pool and ceph started shifting
>>> things around.
>>>
>>> Can you explain the difference between pg_num and pgp_num to me?
>>> I can't understand the distinction.
>>>
>>> Thank you for your help!
>>>
>>> -Gaylord
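
For reference, a bump like the one described above is normally done with
"ceph osd pool set"; a minimal sketch, with the pool name left as a
placeholder:

    # Raise the PG count for the pool (the PGs are split in place).
    ceph osd pool set <pool-name> pg_num 600

    # Raise the placement count so the new PGs actually get rebalanced
    # across the OSDs.
    ceph osd pool set <pool-name> pgp_num 600
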
>>>
>>> On 09/09/2013 04:58 PM, Samuel Just wrote:
>>>>
>>>> This is usually caused by having too few PGs.  Each pool with a
>>>> significant amount of data needs at least around 100 PGs/OSD.
>>>> -Sam
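
One common way to read that guideline (the numbers below are
illustrative, not taken from this cluster): with 10 active OSDs, 2x
replication, and a target of roughly 100 PGs per OSD,

    # pg_num ~= (100 PGs/OSD * 10 OSDs) / 2 replicas = 500,
    # usually rounded up to the next power of two, i.e. 512.
    # Check what each pool is currently using:
    ceph osd dump | grep 'pool'
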
>>>>
>>>> On Mon, Sep 9, 2013 at 10:32 AM, Gaylord Holder
>>>> <gholder@xxxxxxxxxxxxx> wrote:
>>>>>
>>>>> I'm starting to load up my ceph cluster.
>>>>>
>>>>> I currently have 12 2TB drives (10 up and in, 2 defined but down and
>>>>> out).
>>>>>
>>>>> rados df
>>>>>
>>>>> says I have 8TB free, but I have 2 nearly full OSDs.
>>>>>
>>>>> I don't understand how/why these two disks are filled while the
>>>>> others are
>>>>> relatively empty.
>>>>>
>>>>> How do I tell ceph to spread the data around more, and why isn't it
>>>>> already
>>>>> doing it?
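
A few commands that may help show where the data is actually landing
(output details vary a bit between releases):

    # Overall and per-pool usage.
    rados df

    # Per-OSD statistics, including space used and available on each OSD.
    ceph pg dump osds

    # OSD tree with weights, to spot OSDs weighted differently.
    ceph osd tree
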
>>>>>
>>>>> Thank you for helping me understand this system better.
>>>>>
>>>>> Cheers,
>>>>> -Gaylord
>>>
>>>
>> Well, pg_num is the total number of PGs, and pgp_num is the number of
>> PGs that are actually used for data placement right now.
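
(Both values can also be given when a pool is created, per the page
linked below; the pool name and counts here are just examples:)

    # ceph osd pool create <pool-name> <pg_num> [<pgp_num>]
    ceph osd pool create mypool 512 512
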
>
>
> The reference
>
>
>> you can refer to the description of pgp_num at
>> http://ceph.com/docs/master/rados/operations/pools/#create-a-pool
>
>
> simply says pgp_num is:
>
>> The total number of placement groups for placement purposes.
>
> Why is the number of placement groups different from the number of placement
> groups for placement purposes?
>
> When would you want them to be different?
>
> Thank you for helping me understand this.

This is for supporting the PG split/merge functionality (only split is
implemented right now). Raising pg_num splits your PGs in one stage
(but keeps the new PGs located with their parents, to reduce the number
of map overrides required), and raising pgp_num then lets them
rebalance across the cluster separately.
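
A minimal way to sanity-check the two values on a pool and watch the
resulting data movement (pool name is a placeholder):

    # Confirm what the pool is set to.
    ceph osd pool get <pool-name> pg_num
    ceph osd pool get <pool-name> pgp_num

    # Watch the cluster shuffle data once pgp_num has been raised.
    ceph -w
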
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



