Radosgw startup failures & misdirected client requests

We've had the radosgw instances on a hammer (0.94.1) virtual 3-node/3-osd
cluster failing to start, hitting the following error continuously:

--8<---------------cut here---------------start------------->8---
2015-05-06 04:40:38.815545 7f3ef9046840  0 ceph version 0.94.1 (e4bfad3a3c51054df7e537a724c8d0bf9be972ff), process radosgw, pid 32580
2015-05-06 04:40:38.834785 7f3ef9046840 -1 failed reading default region info: (6) No such device or address
2015-05-06 04:40:38.839987 7f3ef9046840 -1 Couldn't init storage provider (RADOS)
--8<---------------cut here---------------end--------------->8---
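
For reference, the region/zone objects the gateway fails to read can be
queried directly with radosgw-admin; the --name below is only an example
of a commonly used gateway user, not necessarily what your setup runs as:

--8<---------------cut here---------------start------------->8---
# Read-only checks: dump the region and zone config the gateway would use.
# These fail with similar errors when the config objects are missing or
# unreadable. (--name client.radosgw.gateway is just an example gateway
# user; substitute whatever your radosgw runs as.)
radosgw-admin region get --name client.radosgw.gateway
radosgw-admin zone get --name client.radosgw.gateway
--8<---------------cut here---------------end--------------->8---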

The ceph logs at the same moment reveal the following:

--8<---------------cut here---------------start------------->8---
2015-05-06 04:39:56.287522 mon.0 X.X.X.6:6789/0 304 : cluster [INF] HEALTH_OK
2015-05-06 04:40:03.721690 osd.0 X.X.X.10:6800/26741 61 : cluster [WRN] client.14340 X.X.X.7:0/1029229 misdirected client.14340.0:1 pg 4.9a566808 to osd.0 not [2,1,0] in e17/17
2015-05-06 04:40:38.834364 osd.0 X.X.X.10:6800/26741 62 : cluster [WRN] client.24328 X.X.X.10:0/1032583 misdirected client.24328.0:1 pg 4.9a566808 to osd.0 not [2,1,0] in e17/17
2015-05-06 04:40:53.521053 osd.0 X.X.X.10:6800/26741 63 : cluster [WRN] client.24337 X.X.X.6:0/1030829 misdirected client.24337.0:1 pg 4.9a566808 to osd.0 not [2,1,0] in e17/17
2015-05-06 04:41:01.949213 osd.0 X.X.X.10:6800/26741 64 : cluster [WRN] client.24346 X.X.X.10:0/1001510 misdirected client.24346.0:1 pg 4.9a566808 to osd.0 not [2,1,0] in e17/17
--8<---------------cut here---------------end--------------->8---
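
For comparison, the placement the cluster itself expects for that PG and
object can be dumped as follows; this assumes pool 4 is the pool rgw
writes `default.region` into, which matches the slow-request entries
further below:

--8<---------------cut here---------------start------------->8---
# Up/acting set the osdmap currently assigns to the PG from the warnings above
ceph pg map 4.9a566808

# Pool -> PG -> OSD mapping for the object the gateway is trying to write
# (assuming pool 4 is .rgw.root)
ceph osd map .rgw.root default.region
--8<---------------cut here---------------end--------------->8---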

Radosgw kept failing forever with the same error, and the ceph cluster
kept logging warnings similar to those above. Looking at the pools
created by radosgw, only the `.rgw.root` pool could be found, with no
objects inside it (the check is sketched after the log below). Pools
were not manually assigned to rgw, so rgw should have created them on
its first startup. Looking back at the beginning of radosgw's logs, the
first initialization cycle shows:

--8<---------------cut here---------------start------------->8---
2015-05-06 04:35:43.355111 7f5214bab840  0 ceph version 0.94.1 (e4bfad3a3c51054df7e537a724c8d0bf9be972ff), process radosgw, pid 24495
2015-05-06 04:37:00.084655 7f5214bab840  0 couldn't find old data placement pools config, setting up new ones for the zone
2015-05-06 04:37:00.091554 7f5214bab840 -1 error storing zone params: (6) No such device or address
2015-05-06 04:37:00.095111 7f5214bab840 -1 Couldn't init storage provider (RADOS)
--8<---------------cut here---------------end--------------->8---
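
For completeness, the pool check mentioned above boiled down to roughly
this:

--8<---------------cut here---------------start------------->8---
# List all pools and pick out the ones rgw normally creates;
# only .rgw.root showed up in our case
rados lspools | grep rgw

# List objects in the root pool -- empty output here
rados -p .rgw.root ls
--8<---------------cut here---------------end--------------->8---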

At this point the ceph cluster still had only 2 osds; the ceph logs from
that window are below (a quick way to confirm this state from the
command line is sketched after the log):

--8<---------------cut here---------------start------------->8---
2015-05-06 04:35:53.971872 mon.0 X.X.X.6:6789/0 145 : cluster [INF] osdmap e10: 2 osds: 1 up, 1 in
2015-05-06 04:35:54.231036 mon.0 X.X.X.6:6789/0 146 : cluster [INF] pgmap v23: 120 pgs: 120 undersized+degraded+peered; 0 bytes data, 4129 MB used, 4610 MB / 8740 MB avail
2015-05-06 04:35:57.844222 mon.0 X.X.X.6:6789/0 148 : cluster [INF] pgmap v24: 120 pgs: 120 undersized+degraded+peered; 0 bytes data, 4129 MB used, 4610 MB / 8740 MB avail
2015-05-06 04:36:14.237299 osd.0 X.X.X.10:6800/26741 47 : cluster [WRN] 6 slow requests, 1 included below; oldest blocked for > 1128.060992 secs
2015-05-06 04:36:14.237310 osd.0 X.X.X.10:6800/26741 48 : cluster [WRN] slow request 30.869139 seconds old, received at 2015-05-06 04:35:43.368075: osd_op(client.6515.0:1 default.region [getxattrs,stat] 4.9a566808 ack+read+known_if_redirected e9) currently reached_pg
2015-05-06 04:36:20.142985 mon.0 X.X.X.6:6789/0 165 : cluster [INF] osdmap e11: 2 osds: 1 up, 1 in
2015-05-06 04:36:20.249438 mon.0 X.X.X.6:6789/0 166 : cluster [INF] pgmap v25: 120 pgs: 120 undersized+degraded+peered; 0 bytes data, 4129 MB used, 4610 MB / 8740 MB avail
2015-05-06 04:36:21.437516 mon.0 X.X.X.6:6789/0 167 : cluster [INF] osd.1 X.X.X.6:6800/25630 boot
2015-05-06 04:36:21.465908 mon.0 X.X.X.6:6789/0 168 : cluster [INF] osdmap e12: 2 osds: 2 up, 2 in
2015-05-06 04:36:21.551237 mon.0 X.X.X.6:6789/0 169 : cluster [INF] pgmap v26: 120 pgs: 120 undersized+degraded+peered; 0 bytes data, 4129 MB used, 4610 MB / 8740 MB avail
2015-05-06 04:36:22.752684 mon.0 X.X.X.6:6789/0 170 : cluster [INF] osdmap e13: 2 osds: 2 up, 2 in
2015-05-06 04:36:22.845573 mon.0 X.X.X.6:6789/0 171 : cluster [INF] pgmap v27: 120 pgs: 120 undersized+degraded+peered; 0 bytes data, 4129 MB used, 4610 MB / 8740 MB avail
2015-05-06 04:36:15.237727 osd.0 X.X.X.10:6800/26741 49 : cluster [WRN] 7 slow requests, 1 included below; oldest blocked for > 1129.061337 secs
2015-05-06 04:36:15.237752 osd.0 X.X.X.10:6800/26741 50 : cluster [WRN] slow request 30.746163 seconds old, received at 2015-05-06 04:35:44.491396: osd_op(client.6524.0:1 default.region [getxattrs,stat] 4.9a566808 ack+read+known_if_redirected e9) currently reached_pg
2015-05-06 04:36:27.497253 mon.0 X.X.X.6:6789/0 174 : cluster [INF] pgmap v28: 120 pgs: 29 creating+activating+undersized+degraded, 33 creating+peering, 58 undersized+degraded+peered; 0 bytes data, 4129 MB used, 4610 MB / 8740 MB avail
2015-05-06 04:36:36.581344 mon.0 X.X.X.6:6789/0 175 : cluster [INF] pgmap v29: 120 pgs: 82 creating+activating+undersized+degraded, 33 creating+peering, 5 undersized+degraded+peered; 0 bytes data, 8258 MB used, 9221 MB / 17480 MB avail
2015-05-06 04:36:37.898349 mon.0 X.X.X.6:6789/0 176 : cluster [INF] pgmap v30: 120 pgs: 115 creating+activating+undersized+degraded, 5 undersized+degraded+peered; 0 bytes data, 8258 MB used, 9221 MB / 17480 MB avail
2015-05-06 04:36:42.080151 mon.0 X.X.X.6:6789/0 177 : cluster [INF] pgmap v31: 120 pgs: 120 creating+activating+undersized+degraded; 0 bytes data, 8258 MB used, 9221 MB / 17480 MB avail
2015-05-06 04:36:44.245929 osd.0 X.X.X.10:6800/26741 51 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 60.877785 secs
2015-05-06 04:36:44.245966 osd.0 X.X.X.10:6800/26741 52 : cluster [WRN] slow request 60.877785 seconds old, received at 2015-05-06 04:35:43.368075: osd_op(client.6515.0:1 default.region [getxattrs,stat] 4.9a566808 ack+read+known_if_redirected e9) currently reached_pg
2015-05-06 04:36:47.633263 mon.0 X.X.X.6:6789/0 178 : cluster [INF] pgmap v32: 120 pgs: 10 active+undersized+degraded, 110 creating+activating+undersized+degraded; 0 bytes data, 8259 MB used, 9220 MB / 17480 MB avail
2015-05-06 04:36:49.936579 mon.0 X.X.X.6:6789/0 179 : cluster [INF] pgmap v33: 120 pgs: 52 active+undersized+degraded, 68 creating+activating+undersized+degraded; 0 bytes data, 8259 MB used, 9220 MB / 17480 MB avail
2015-05-06 04:36:52.514385 mon.0 X.X.X.6:6789/0 180 : cluster [INF] pgmap v34: 120 pgs: 68 active+undersized+degraded, 52 creating+activating+undersized+degraded; 0 bytes data, 8259 MB used, 9220 MB / 17480 MB avail
2015-05-06 04:36:51.247764 osd.0 X.X.X.10:6800/26741 53 : cluster [WRN] 2 slow requests, 1 included below; oldest blocked for > 67.879621 secs
2015-05-06 04:36:51.247774 osd.0 X.X.X.10:6800/26741 54 : cluster [WRN] slow request 30.177047 seconds old, received at 2015-05-06 04:36:21.070649: osd_op(client.24148.0:1 default.region [getxattrs,stat] 4.9a566808 ack+read+known_if_redirected e11) currently reached_pg
2015-05-06 04:36:56.286598 mon.0 X.X.X.6:6789/0 181 : cluster [INF] HEALTH_WARN; 120 pgs degraded; 52 pgs stuck inactive; 120 pgs stuck unclean; 120 pgs undersized; 2 requests are blocked > 32 sec
2015-05-06 04:36:56.964127 mon.0 X.X.X.6:6789/0 182 : cluster [INF] pgmap v35: 120 pgs: 74 active+undersized+degraded, 46 creating+activating+undersized+degraded; 0 bytes data, 8259 MB used, 9220 MB / 17480 MB avail
2015-05-06 04:36:58.232024 mon.0 X.X.X.6:6789/0 183 : cluster [INF] pgmap v36: 120 pgs: 88 active+undersized+degraded, 32 creating+activating+undersized+degraded; 0 bytes data, 8259 MB used, 9220 MB / 17480 MB avail
2015-05-06 04:37:01.806001 mon.0 X.X.X.6:6789/0 184 : cluster [INF] pgmap v37: 120 pgs: 92 active+undersized+degraded, 28 creating+activating+undersized+degraded; 0 bytes data, 8259 MB used, 9220 MB / 17480 MB avail
2015-05-06 04:37:00.089515 osd.1 X.X.X.6:6800/25630 1 : cluster [WRN] client.24148 X.X.X.6:0/1025777 misdirected client.24148.0:2 pg 4.e01f9ae to osd.1 not [0,1] in e13/13
2015-05-06 04:37:00.091454 osd.1 X.X.X.6:6800/25630 2 : cluster [WRN] client.6515 X.X.X.10:0/1024498 misdirected client.6515.0:2 pg 4.e01f9ae to osd.1 not [0,1] in e13/13
--8<---------------cut here---------------end--------------->8---
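
The quick state check referenced above is nothing more than:

--8<---------------cut here---------------start------------->8---
# Overall cluster state: health, osdmap epoch, pg states
ceph -s

# Which osds are up/in and where they sit in the CRUSH tree
ceph osd tree
--8<---------------cut here---------------end--------------->8---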

Going by the logs, it looks like the client (rgw) is sending its
requests to an OSD that is not the primary of the PG's acting set, and
this problem persisted even after the ceph cluster reached a healthy,
active+clean state.
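
One way to cross-check whether the cluster and the client still disagree
once things are active+clean would be to query the PG directly
(4.9a566808 is the PG from the warnings above; the grep is only a rough
filter over the JSON output):

--8<---------------cut here---------------start------------->8---
# Up and acting sets the PG itself reports right now
ceph pg 4.9a566808 query | grep -A 4 '"acting"'
--8<---------------cut here---------------end--------------->8---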

Ultimately, in this case the problem was solved by manually removing the
`.rgw.root` pool and restarting radosgw, which created the pools again;
after that everything worked as expected. We are seeing this issue
intermittently on both firefly and hammer in our test scenarios, where
radosgw is started as soon as ceph is deployed (without waiting for the
cluster to become healthy). Any ideas on the cause and possible
solutions to this problem?
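
For reference, the manual workaround, plus a possible way to gate the
gateway start on cluster health, looks roughly like this; the pool
delete is destructive and the service name varies per distro, so treat
it only as a sketch:

--8<---------------cut here---------------start------------->8---
# 1. Drop the half-initialized root pool so rgw can recreate it cleanly
ceph osd pool delete .rgw.root .rgw.root --yes-i-really-really-mean-it

# 2. Restart the gateway (init script / service name differs per distro)
service radosgw restart

# 3. For fresh deployments: hold off starting radosgw until the cluster settles
until ceph health | grep -q HEALTH_OK; do
    echo "waiting for the cluster to become healthy..."
    sleep 10
done
service radosgw start
--8<---------------cut here---------------end--------------->8---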

-- 
Abhishek