Just remember that the warning appears at > 300 PGs per OSD, but the recommendation is around 100. I would try to reduce your PG count to roughly a third of what it is now, or as close to that as you can get. On my learning cluster I had to migrate data between pools multiple times, reducing the number of PGs as I went, until I got down to a more normal amount. It affected the clients a fair bit, but that cluster is still a 3-node cluster in active use.
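To see where things stand, and to quiet the warning while you work on it, something along these lines should be close (the numbers are only illustrative, and the exact option name for the warning threshold depends on your release: older releases use mon_pg_warn_max_per_osd, Luminous added mon_max_pg_per_osd):

    # Per-OSD PG counts are in the PGS column
    ceph osd df

    # pg_num / pgp_num per pool, to find the oversized pools
    ceph osd pool ls detail

    # Rough sanity check: PGs per OSD ~= sum(pg_num * replica count) / number of OSDs

    # Raise the warning threshold at runtime while you migrate (older releases)
    ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd 400'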
Note that the data movements were done with rsync, dd, etc. for RBDs and CephFS.
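As a rough sketch of what that looks like (pool and image names here are made up; stop or snapshot the clients first and double-check sizes before overwriting anything):

    # New pool with a more sensible PG count
    ceph osd pool create rbd-small 256 256

    # Copy an RBD image into the new pool, either by streaming it...
    rbd export rbd/vm-disk1 - | rbd import - rbd-small/vm-disk1

    # ...or by mapping both images and dd'ing between the block devices
    # (the destination image must already exist and be at least as large)
    #   rbd map rbd/vm-disk1 ; rbd map rbd-small/vm-disk1
    #   dd if=/dev/rbd0 of=/dev/rbd1 bs=4M

    # CephFS data can be rsync'd into a directory whose layout points at the
    # new data pool (after ceph fs add_data_pool and setting the dir layout)
    #   rsync -aHAX /mnt/cephfs/old/ /mnt/cephfs/new/

    # Once everything is verified, drop the old pool
    # (recent releases also require mon_allow_pool_delete to be set)
    ceph osd pool delete rbd rbd --yes-i-really-really-mean-it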
On Tue, Oct 3, 2017, 8:54 AM Andrei Mikhailovsky <andrei@xxxxxxxxxx> wrote:
Thanks for your suggestions and help

Andrei

From: "David Turner" <drakonstein@xxxxxxxxx>
To: "Jack" <ceph@xxxxxxxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Monday, 2 October, 2017 22:28:33
Subject: Re: decreasing number of PGs

Adding more OSDs or deleting/recreating pools that have too many PGs are your only 2 options to reduce the number of PGs per OSD. It is on the Ceph roadmap, but is not a currently supported feature. You can alternatively adjust the threshold setting for the warning, but it is still a problem you should address in your cluster.

On Mon, Oct 2, 2017 at 4:02 PM Jack <ceph@xxxxxxxxxxxxxx> wrote:

You cannot;
On 02/10/2017 21:43, Andrei Mikhailovsky wrote:
> Hello everyone,
>
> what is the safest way to decrease the number of PGs in the cluster? Currently, I have too many per OSD.
>
> Thanks
>
>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com