Re: RGW error Couldn't init storage provider (RADOS)

Hi

I faced a similar error a couple of days ago:
radosgw-admin --cluster=cl00 realm create --rgw-realm=data00 --default
...
0 rgw main: rgw_init_ioctx ERROR: librados::Rados::pool_create returned
(34) Numerical result out of range (this can be due to a pool or placement
group misconfiguration, e.g. pg_num < pgp_num or mon_max_pg_per_osd
exceeded)
...
Obviously radosgw-admin is unable to create the pool .rgw.root (at the
same time "ceph osd pool create" works as expected).
Crawling through the mon logs with debug=20 led to this record:
"... prepare_new_pool got -34 'pgp_num' must be greater than 0 and lower or
equal than 'pg_num', which in this case is 1"
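
For anyone reproducing this, the mon debug level can be raised roughly
like this (sketch; remember to revert it afterwards):

ceph config set mon debug_mon 20
# and to revert to the default afterwards:
ceph config rm mon debug_mon
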
To me pg_num=1 looks strange, because the default value of
osd_pool_default_pg_num is 32. On the other hand, the default
osd_pool_default_pgp_num is 0, so I tried setting
osd_pool_default_pgp_num=1 and it worked: the pool .rgw.root was created.
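
For reference, one way to set this (a sketch, via the config database;
realm and cluster names are the ones from my command above):

ceph config set global osd_pool_default_pgp_num 1
radosgw-admin --cluster=cl00 realm create --rgw-realm=data00 --default
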
What really looks strange is that after the first success I can't
reproduce the failure any more: "radosgw-admin ... realm create" now
builds .rgw.root successfully even with osd_pool_default_pgp_num=0.
Nevertheless, I suspect the record
"pgp_num must be greater than 0 and lower or equal than 'pg_num', which in
this case is 1"
points to an existing bug. It looks like the default values of
osd_pool_default_pg[p]_num are somehow ignored/omitted.
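
If anyone wants to cross-check the effective defaults on their cluster,
something like this should show them (osd.0 here is just an example
daemon name):

ceph config get mon osd_pool_default_pg_num
ceph config get mon osd_pool_default_pgp_num
ceph config show-with-defaults osd.0 | grep osd_pool_default_pg
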


On Tue, Jul 19, 2022 at 9:11 AM Robert Reihs <robert.reihs@xxxxxxxxx> wrote:

> Yes, I checked pg_num, pgp_num and mon_max_pg_per_osd. I also set up a
> single node cluster with the same ansible script we have, using cephadm
> for setting up and managing the cluster. I had the same problem on the new
> single node cluster without setting up any other services. When I created
> the pools manually the service started and the dashboard connection also
> worked right away.
>
> On Mon, Jul 18, 2022 at 10:20 AM Janne Johansson <icepic.dz@xxxxxxxxx>
> wrote:
>
> > No, rgw should have the ability to create its own pools. Check the caps
> > on the keys used by the rgw daemon.
> >
> > On Mon, 18 Jul 2022 at 09:59, Robert Reihs <robert.reihs@xxxxxxxxx> wrote:
> >
> >> Hi,
> >> I had to manually create the pools, then the service automatically
> >> started and is now available.
> >> pools:
> >> .rgw.root
> >> default.rgw.log
> >> default.rgw.control
> >> default.rgw.meta
> >> default.rgw.buckets.index
> >> default.rgw.buckets.data
> >> default.rgw.buckets.non-ec
> >>
> >> Is this normal behavior? If so, should the error message be changed? Or
> >> is this a bug?
> >> Best
> >> Robert Reihs
> >>
> >>
> >> On Fri, Jul 15, 2022 at 3:47 PM Robert Reihs <robert.reihs@xxxxxxxxx>
> >> wrote:
> >>
> >> > Hi,
> >> > I have had no luck yet solving the issue, but I can add some
> >> > more information. The system pools ".rgw.root" and "default.rgw.log"
> >> > are not created. I have created them manually; now there is more log
> >> > activity, but I am still getting the same error message in the log:
> >> > rgw main: rgw_init_ioctx ERROR: librados::Rados::pool_create returned
> >> > (34) Numerical result out of range (this can be due to a pool or
> >> > placement group misconfiguration, e.g. pg_num < pgp_num or
> >> > mon_max_pg_per_osd exceeded)
> >> > I can't find the correct pool to create manually.
> >> > Thanks for any help
> >> > Best
> >> > Robert
> >> >
> >> > On Tue, Jul 12, 2022 at 5:22 PM Robert Reihs <robert.reihs@xxxxxxxxx>
> >> > wrote:
> >> >
> >> >> Hi,
> >> >>
> >> >> We have a problem with deploying radosgw via cephadm. We have a Ceph
> >> >> cluster with 3 nodes deployed via cephadm. Pool creation, cephfs and
> >> >> block storage are working.
> >> >>
> >> >> ceph version 17.2.1 (ec95624474b1871a821a912b8c3af68f8f8e7aa1) quincy
> >> >> (stable)
> >> >>
> >> >> The service spec for the rgw looks like this:
> >> >>
> >> >> ---
> >> >>
> >> >> service_type: rgw
> >> >>
> >> >> service_id: rgw
> >> >>
> >> >> placement:
> >> >>
> >> >>   count: 3
> >> >>
> >> >>   label: "rgw"
> >> >>
> >> >> ---
> >> >>
> >> >> service_type: ingress
> >> >>
> >> >> service_id: rgw.rgw
> >> >>
> >> >> placement:
> >> >>
> >> >>   count: 3
> >> >>
> >> >>   label: "ingress"
> >> >>
> >> >> spec:
> >> >>
> >> >>   backend_service: rgw.rgw
> >> >>
> >> >>   virtual_ip: [IPV6]
> >> >>
> >> >>   virtual_interface_networks: [IPV6 CIDR]
> >> >>
> >> >>   frontend_port: 8080
> >> >>
> >> >>   monitor_port: 1967
> >> >>
> >> >> The error I get in the logfiles:
> >> >>
> >> >> 0 deferred set uid:gid to 167:167 (ceph:ceph)
> >> >>
> >> >> 0 ceph version 17.2.1 (ec95624474b1871a821a912b8c3af68f8f8e7aa1)
> >> >> quincy (stable), process radosgw, pid 2
> >> >>
> >> >> 0 framework: beast
> >> >>
> >> >> 0 framework conf key: port, val: 80
> >> >>
> >> >> 1 radosgw_Main not setting numa affinity
> >> >>
> >> >> 1 rgw_d3n: rgw_d3n_l1_local_datacache_enabled=0
> >> >>
> >> >> 1 D3N datacache enabled: 0
> >> >>
> >> >> 0 rgw main: rgw_init_ioctx ERROR: librados::Rados::pool_create
> >> >> returned (34) Numerical result out of range (this can be due to a
> >> >> pool or placement group misconfiguration, e.g. pg_num < pgp_num or
> >> >> mon_max_pg_per_osd exceeded)
> >> >>
> >> >> 0 rgw main: failed reading realm info: ret -34 (34) Numerical result
> >> >> out of range
> >> >>
> >> >> 0 rgw main: ERROR: failed to start notify service ((34) Numerical
> >> >> result out of range
> >> >>
> >> >> 0 rgw main: ERROR: failed to init services (ret=(34) Numerical result
> >> >> out of range)
> >> >>
> >> >> -1 Couldn't init storage provider (RADOS)
> >> >>
> >> >> For testing I have set pg_num and pgp_num to 16 and
> >> >> mon_max_pg_per_osd to 1000 and am still getting the same error. I
> >> >> have also tried creating the rgw with the ceph command, same error.
> >> >> Pool creation is working; I created multiple other pools and there
> >> >> was no problem.
> >> >>
> >> >> Thanks for any help.
> >> >>
> >> >> Best
> >> >>
> >> >> Robert
> >> >>
> >> >> The 5 failed services are 3 from the rgw and 2 haproxy for the rgw;
> >> >> there is only one running:
> >> >>
> >> >> ceph -s
> >> >>
> >> >>   cluster:
> >> >>
> >> >>     id:     40ddf
> >> >>
> >> >>     health: HEALTH_WARN
> >> >>
> >> >>             5 failed cephadm daemon(s)
> >> >>
> >> >>
> >> >>
> >> >>   services:
> >> >>
> >> >>     mon: 3 daemons, quorum ceph-01,ceph-02,ceph-03 (age 4d)
> >> >>
> >> >>     mgr: ceph-01.hbvyqi(active, since 4d), standbys: ceph-02.pqtxbv
> >> >>
> >> >>     mds: 1/1 daemons up, 3 standby
> >> >>
> >> >>     osd: 6 osds: 6 up (since 4d), 6 in (since 4d)
> >> >>
> >> >>
> >> >>
> >> >>   data:
> >> >>
> >> >>     volumes: 1/1 healthy
> >> >>
> >> >>     pools:   5 pools, 65 pgs
> >> >>
> >> >>     objects: 87 objects, 170 MiB
> >> >>
> >> >>     usage:   1.4 GiB used, 19 TiB / 19 TiB avail
> >> >>
> >> >>     pgs:     65 active+clean
> >> >>
> >> >>
> >> >
> >> > --
> >> > Robert Reihs
> >> > Jakobsweg 22
> >> > 8046 Stattegg
> >> > AUSTRIA
> >> >
> >> > mobile: +43 (664) 51 035 90
> >> > robert.reihs@xxxxxxxxx
> >> >
> >>
> >>
> >> --
> >> Robert Reihs
> >> Jakobsweg 22
> >> 8046 Stattegg
> >> AUSTRIA
> >>
> >> mobile: +43 (664) 51 035 90
> >> robert.reihs@xxxxxxxxx
> >>
> >
>
> --
> Robert Reihs
> Jakobsweg 22
> 8046 Stattegg
> AUSTRIA
>
> mobile: +43 (664) 51 035 90
> robert.reihs@xxxxxxxxx
>


-- 
Best regards.
       Alexander Y. Fomichev <git.user@xxxxxxxxx>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



