Re: radosgw dying

radosgw will try to create all of the default pools if they are missing. The number of pools changes depending on the version, but it's somewhere around 5.
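If you want to see what is (and isn't) there before starting radosgw, something like the following should do it. The pool names below are the usual Nautilus defaults for a zone named "default"; treat them as an assumption, since they vary by version and zone config:

    # list the pools that already exist
    ceph osd pool ls

    # typical defaults: .rgw.root, default.rgw.control, default.rgw.meta, default.rgw.log
    # pre-create a missing one with a small pg_num/pgp_num to stay under the per-OSD PG limit
    ceph osd pool create default.rgw.control 8 8
    ceph osd pool application enable default.rgw.control rgw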

On Sun, Jun 9, 2019, 1:00 PM <DHilsbos@xxxxxxxxxxxxxx> wrote:
Huang;

I get that, but the pool already exists; why is radosgw trying to create one?
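For reference, comparing the pools the zone config expects against what actually exists (the zone name "default" here is an assumption) could be done with:

    radosgw-admin zone get --rgw-zone=default
    ceph osd pool ls detail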

Dominic Hilsbos





On Sat, Jun 8, 2019 at 2:55 AM -0700, "huang jun" <hjwsm1989@xxxxxxxxx> wrote:

From the error message, I'm inclined to think that 'mon_max_pg_per_osd' was exceeded.
You can check its value; the default is 250, so you can have at most
1500 PGs (250 * 6 OSDs).
For replicated pools with size=3 that means 500 PGs across all pools;
you already have 448 PGs, so the next pool can be created with at most 500 - 448 = 52 PGs.
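To double-check on Nautilus, something like this should show the effective limit and where the existing 448 PGs come from (option names from memory, so verify against your version):

    # per-OSD PG limit enforced at pool creation
    ceph config get mon mon_max_pg_per_osd

    # pg_num and size of each existing pool
    ceph osd pool ls detail

    # defaults radosgw will use when it creates its pools
    ceph config get mon osd_pool_default_pg_num
    ceph config get mon osd_pool_default_size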

On Sat, Jun 8, 2019 at 2:41 PM, <DHilsbos@xxxxxxxxxxxxxx> wrote:
>
> All;
>
> I have a test and demonstration cluster running (3 hosts, MON, MGR, 2x OSD per host), and I'm trying to add a 4th host for gateway purposes.
>
> The radosgw process keeps dying with:
> 2019-06-07 15:59:50.700 7fc4ef273780  0 ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable), process radosgw, pid 17588
> 2019-06-07 15:59:51.358 7fc4ef273780  0 rgw_init_ioctx ERROR: librados::Rados::pool_create returned (34) Numerical result out of range (this can be due to a pool or placement group misconfiguration, e.g. pg_num < pgp_num or mon_max_pg_per_osd exceeded)
> 2019-06-07 15:59:51.396 7fc4ef273780 -1 Couldn't init storage provider (RADOS)
>
> The .rgw.root pool already exists.
>
> ceph status returns:
>   cluster:
>     id:     1a8a1693-fa54-4cb3-89d2-7951d4cee6a3
>     health: HEALTH_OK
>
>   services:
>     mon: 3 daemons, quorum S700028,S700029,S700030 (age 30m)
>     mgr: S700028(active, since 47h), standbys: S700030, S700029
>     osd: 6 osds: 6 up (since 2d), 6 in (since 3d)
>
>   data:
>     pools:   5 pools, 448 pgs
>     objects: 12 objects, 1.2 KiB
>     usage:   722 GiB used, 65 TiB / 66 TiB avail
>     pgs:     448 active+clean
>
> and ceph osd tree returns:
> ID CLASS WEIGHT   TYPE NAME        STATUS REWEIGHT PRI-AFF
> -1       66.17697 root default
> -5       22.05899     host S700029
>  2   hdd 11.02950         osd.2        up  1.00000 1.00000
>  3   hdd 11.02950         osd.3        up  1.00000 1.00000
> -7       22.05899     host S700030
>  4   hdd 11.02950         osd.4        up  1.00000 1.00000
>  5   hdd 11.02950         osd.5        up  1.00000 1.00000
> -3       22.05899     host s700028
>  0   hdd 11.02950         osd.0        up  1.00000 1.00000
>  1   hdd 11.02950         osd.1        up  1.00000 1.00000
>
> Any thoughts on what I'm missing?
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Director - Information Technology
> Perform Air International Inc.
> DHilsbos@xxxxxxxxxxxxxx
> www.PerformAir.com
>
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Thank you!
HuangJun
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
