Re: why does setting pg_num not update pgp_num

On 10/19/18 7:51 AM, xiang.dai@xxxxxxxxxxx wrote:
> Hi!
> 
> I use ceph 13.2.1 (5533ecdc0fda920179d7ad84e0aa65a127b20d77) mimic
> (stable), and find that:
> 
> When I expanded the whole cluster, I updated pg_num and everything
> succeeded, but the status was as below:
>   cluster:
>     id:     41ef913c-2351-4794-b9ac-dd340e3fbc75
>     health: HEALTH_WARN
>             3 pools have pg_num > pgp_num
> 
> Then I updated pgp_num too, and the warning went away.
> 
> What confuses me is that when I created the whole cluster for the first
> time, I used "ceph osd pool create pool_name pg_num", and pgp_num was
> automatically set equal to pg_num.
> 
> But "ceph osd pool set pool_name pg_num" does not do this.
> 
> Why is it designed this way?
> 
> Why is pgp_num not updated automatically when pg_num is updated?
> 

Because when you change pg_num, only the Placement Groups are created;
no data moves yet. pgp_num (Placement Groups for Placement) influences
how CRUSH places the data.

When you change that value, data actually starts to move.

pgp_num can never be larger than pg_num, though.
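
For example (a minimal sketch; the pool name "mypool" and the value 256
are made up here), the two values are set separately:

    # create the new placement groups; no data movement yet
    ceph osd pool set mypool pg_num 256

    # tell CRUSH to place data across all the PGs; rebalancing starts now
    ceph osd pool set mypool pgp_num 256

    # verify both values
    ceph osd pool get mypool pg_num
    ceph osd pool get mypool pgp_num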

Some people choose to increase pgp_num in small steps so that the data
migration isn't massive; a rough sketch of that approach is below.
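
Something like this (untested; the pool name, step size and target are
assumptions, and waiting for HEALTH_OK is a crude settle check that may
not suit every cluster):

    #!/bin/sh
    POOL=mypool      # hypothetical pool name
    TARGET=256       # desired pgp_num, must not exceed pg_num
    STEP=16          # how much to raise pgp_num per iteration

    # read the current pgp_num ("pgp_num: N" -> N)
    CUR=$(ceph osd pool get $POOL pgp_num | awk '{print $2}')

    while [ "$CUR" -lt "$TARGET" ]; do
        CUR=$((CUR + STEP))
        [ "$CUR" -gt "$TARGET" ] && CUR=$TARGET
        ceph osd pool set $POOL pgp_num $CUR
        # wait until rebalancing from this step has settled
        until [ "$(ceph health)" = "HEALTH_OK" ]; do
            sleep 60
        done
    done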

Wido

> Thanks
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



