ceph radosgw failed to initialize


 



The radosgw daemon can't start normally; the log file shows:
-------------------------------
2019-12-20 14:37:04.058 7fd5b088f700 -1 Initialization timeout, failed to initialize
2019-12-20 14:37:04.304 7fe7148c0780  0 deferred set uid:gid to 167:167 (ceph:ceph)
2019-12-20 14:37:04.304 7fe7148c0780  0 ceph version 14.2.5 (ad5bd132e1492173c85fda2cc863152730b16a92) nautilus (stable), process radosgw, pid 3474
2019-12-20 14:37:04.333 7fe6fe47e700 20 reqs_thread_entry: start
2019-12-20 14:37:04.338 7fe7148c0780  1 librados: starting msgr at
2019-12-20 14:37:04.338 7fe7148c0780  1 librados: starting objecter
2019-12-20 14:37:04.338 7fe7148c0780  1 librados: setting wanted keys
2019-12-20 14:37:04.338 7fe7148c0780  1 librados: calling monclient init
2019-12-20 14:37:04.340 7fe7148c0780  1 librados: init done
2019-12-20 14:37:04.340 7fe7148c0780 20 rados->read ofs=0 len=0
2019-12-20 14:37:04.340 7fe7148c0780 10 librados: wait_for_osdmap waiting
2019-12-20 14:37:04.341 7fe7148c0780 10 librados: wait_for_osdmap done waiting
2019-12-20 14:37:04.341 7fe7148c0780 10 librados: read oid=default.realm nspace=
2019-12-20 14:42:04.304 7fe700f0d700 -1 Initialization timeout, failed to initialize
-------------------------------
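One thing I noticed: the second timeout fires exactly five minutes after the process-start line (the 14:37:04.058 timeout appears to be left over from the previous attempt), which matches the default rgw_init_timeout of 300 seconds — so the daemon seems to be blocking during startup rather than crashing. A quick check of the timestamps copied from the log above:

```python
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f"
# Timestamps copied from the log above; the timeout at 14:37:04.058 is
# left over from the previous start attempt.
start = datetime.strptime("2019-12-20 14:37:04.304", fmt)    # process start
timeout = datetime.strptime("2019-12-20 14:42:04.304", fmt)  # second timeout
print((timeout - start).total_seconds())  # 300.0 -- the rgw_init_timeout default
```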

The config I used:
-------------------------------
[client.rgw.ceph-test-f01]
rgw_host = ceph-test-f01
keyring = /var/lib/ceph/radosgw/ceph-rgw.ceph-test-f01/keyring
rgw_frontends = civetweb port=8099
-------------------------------

The environment I'm running is as follows:

-------------------------------
# cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)

# ceph -v
ceph version 14.2.5 (ad5bd132e1492173c85fda2cc863152730b16a92) nautilus (stable)

# ceph -s
  cluster:
    id:     ac018df0-2e71-4c0f-a8a1-0ea29b8a7eb1
    health: HEALTH_WARN
            Reduced data availability: 8 pgs inactive
            Degraded data redundancy: 8 pgs undersized

  services:
    mon: 1 daemons, quorum ceph-test-f01 (age 2h)
    mgr: ceph-test-f01(active, since 61m)
    osd: 1 osds: 1 up (since 2h), 1 in (since 2h)

  data:
    pools:   2 pools, 40 pgs
    objects: 1 objects, 1.6 KiB
    usage:   1.0 GiB used, 98 GiB / 99 GiB avail
    pgs:     20.000% pgs not active
             32 active+clean
             8  undersized+peered

# ceph osd pool ls
.rgw.root
panama
-------------------------------
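The PG numbers line up with the two pools: my guess (an assumption, not verified) is that the 8 undersized+peered PGs belong to .rgw.root, which gets the default replicated size of 3 — something a single-OSD cluster can never make active, so RGW's read of default.realm from that pool blocks until the init timeout. The arithmetic from the status output:

```python
# Numbers copied from the `ceph -s` output above.
total_pgs = 40
inactive = 8      # undersized+peered; my guess is these are .rgw.root's PGs
active_clean = 32

assert inactive + active_clean == total_pgs
print(f"{inactive / total_pgs * 100:.3f}% pgs not active")  # 20.000%
```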

I also tried following the Ceph documentation and creating the default pools, such as:
.default.rgw.control
.default.rgw.gc
.default.rgw.buckets
.default.rgw.buckets.index
.default.rgw.buckets.extra
.default.log
.default.intent-log
.default.usage
.default.users
.default.users.email
.default.users.swift
.default.users.default.uid
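For reference, the creation itself was just a loop over those names; a sketch (shown with echo as a dry run; the pg_num of 8 is my own choice for this small test cluster, not something from the docs):

```shell
# Dry run of the pool creation (remove the `echo` to actually run it).
# pg_num of 8 is an assumption for this small test cluster.
for pool in .default.rgw.control .default.rgw.gc .default.rgw.buckets \
            .default.rgw.buckets.index .default.rgw.buckets.extra \
            .default.log .default.intent-log .default.usage \
            .default.users .default.users.email .default.users.swift \
            .default.users.default.uid; do
    echo ceph osd pool create "$pool" 8
done
```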

The radosgw service still can't start normally; any advice would be appreciated.

All of the radosgw-admin commands also hang in the same way, e.g.:

# radosgw-admin zone get
2019-12-20 15:29:03.322 7f18c65876c0  1 librados: starting msgr at
2019-12-20 15:29:03.322 7f18c65876c0  1 librados: starting objecter
2019-12-20 15:29:03.323 7f18c65876c0  1 librados: setting wanted keys
2019-12-20 15:29:03.323 7f18c65876c0  1 librados: calling monclient init
2019-12-20 15:29:03.325 7f18c65876c0  1 librados: init done
2019-12-20 15:29:03.331 7f18c65876c0  1 librados: starting msgr at
2019-12-20 15:29:03.331 7f18c65876c0  1 librados: starting objecter
2019-12-20 15:29:03.331 7f18c65876c0  1 librados: setting wanted keys
2019-12-20 15:29:03.331 7f18c65876c0  1 librados: calling monclient init
2019-12-20 15:29:03.333 7f18c65876c0  1 librados: init done
2019-12-20 15:29:03.334 7f188ffff700  2 RGWDataChangesLog::ChangesRenewThread: start
2019-12-20 15:29:03.334 7f18c65876c0 20 rados->read ofs=0 len=0
2019-12-20 15:29:03.334 7f188f7fe700 20 reqs_thread_entry: start
2019-12-20 15:29:03.335 7f18c65876c0 10 librados: read oid=default.realm nspace=
2019-12-20 15:29:25.335 7f188ffff700  2 RGWDataChangesLog::ChangesRenewThread: start
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


