Re: ceph.conf mon_max_pg_per_osd not recognized / set


 



So, moving the entry from [mon] to [global] worked.
This is a bit confusing - I used to put all my configuration settings starting with mon_ under [mon].
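
For anyone hitting the same thing, a minimal sketch of what worked for me (the mon id below is just a placeholder - use whatever your monitor is actually called):

[global]
mon_max_pg_per_osd = 400

# after restarting the mon, verify on the monitor's admin socket
# (the TOO_MANY_PGS warning comes from the mon/mgr side, not from the OSDs):
ceph daemon mon.$(hostname -s) config show | grep mon_max_pg_per_osd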

Steven

On Wed, 31 Oct 2018 at 10:13, Steven Vacaroaia <stef97@xxxxxxxxx> wrote:
I do not think so ... or maybe I did not understand what you are saying.
There is no such key listed in the mgr config - output below:

ceph config-key list
[
    "config-history/1/",
    "config-history/2/",
    "config-history/2/+mgr/mgr/dashboard/server_addr",
    "config-history/3/",
    "config-history/3/+mgr/mgr/prometheus/server_addr",
    "config-history/4/",
    "config-history/4/+mgr/mgr/dashboard/username",
    "config-history/5/",
    "config-history/5/+mgr/mgr/dashboard/password",
    "config-history/6/",
    "config-history/6/+mgr/mgr/balancer/mode",
    "config-history/7/",
    "config-history/7/+mgr/mgr/balancer/active",
    "config-history/8/",
    "config-history/8/+mgr/mgr/dashboard/password",
    "config-history/8/-mgr/mgr/dashboard/password",
    "config/mgr/mgr/balancer/active",
    "config/mgr/mgr/balancer/mode",
    "config/mgr/mgr/dashboard/password",
    "config/mgr/mgr/dashboard/server_addr",
    "config/mgr/mgr/dashboard/username",
    "config/mgr/mgr/prometheus/server_addr",
    "mgr/dashboard/crt",
    "mgr/dashboard/key"


On Wed, 31 Oct 2018 at 09:59, <ceph@xxxxxxxxxxxxxx> wrote:
Isn't this a mgr variable?

On 10/31/2018 02:49 PM, Steven Vacaroaia wrote:
> Hi,
>
> Any idea why a different value for mon_max_pg_per_osd is not "recognized"?
> I am using mimic 13.2.2
>
> Here is what I have in /etc/ceph/ceph.conf
>
>
> [mon]
> mon_allow_pool_delete = true
> mon_osd_min_down_reporters = 1
> mon_max_pg_per_osd = 400
>
> Checking the value with
> ceph daemon osd.6 config show | grep mon_max_pg_per_osd still shows the
> default (250)
>
>
> Injecting a different value appears to work:
> ceph tell osd.* injectargs '--mon_max_pg_per_osd 500'
>
> ceph daemon osd.6 config show | grep mon_max_pg_per_osd
>     "mon_max_pg_per_osd": "500",
>
> BUT
>
> the cluster is still complaining: TOO_MANY_PGS too many PGs per OSD (262 >
> max 250)
>
> I have restarted the ceph.target services on the monitor/manager server.
> What else has to be done to have the cluster use the new value?
>
> Steven
>
>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
