Re: Radosgw startup failures & misdirected client requests

On Tue, May 12, 2015 at 9:13 PM, Abhishek L
<abhishek.lekshmanan@xxxxxxxxx> wrote:
>
> We have a hammer (0.94.1) virtual 3 node / 3 osd cluster where the
> radosgws fail to start, failing continuously with the following error:
>
> --8<---------------cut here---------------start------------->8---
> 2015-05-06 04:40:38.815545 7f3ef9046840  0 ceph version 0.94.1 (e4bfad3a3c51054df7e537a724c8d0bf9be972ff), process radosgw, pid 32580
> 2015-05-06 04:40:38.834785 7f3ef9046840 -1 failed reading default region info: (6) No such device or address
> 2015-05-06 04:40:38.839987 7f3ef9046840 -1 Couldn't init storage provider (RADOS)
> --8<---------------cut here---------------end--------------->8---
>
> The ceph logs at the same moment reveal the following:
>
> --8<---------------cut here---------------start------------->8---
> 2015-05-06 04:39:56.287522 mon.0 X.X.X.6:6789/0 304 : cluster [INF] HEALTH_OK
> 2015-05-06 04:40:03.721690 osd.0 X.X.X.10:6800/26741 61 : cluster [WRN] client.14340 X.X.X.7:0/1029229 misdirected client.14340.0:1 pg 4.9a566808 to osd.0 not [2,1,0] in e17/17
> 2015-05-06 04:40:38.834364 osd.0 X.X.X.10:6800/26741 62 : cluster [WRN] client.24328 X.X.X.10:0/1032583 misdirected client.24328.0:1 pg 4.9a566808 to osd.0 not [2,1,0] in e17/17
> 2015-05-06 04:40:53.521053 osd.0 X.X.X.10:6800/26741 63 : cluster [WRN] client.24337 X.X.X.6:0/1030829 misdirected client.24337.0:1 pg 4.9a566808 to osd.0 not [2,1,0] in e17/17
> 2015-05-06 04:41:01.949213 osd.0 X.X.X.10:6800/26741 64 : cluster [WRN] client.24346 X.X.X.10:0/1001510 misdirected client.24346.0:1 pg 4.9a566808 to osd.0 not [2,1,0] in e17/17
> --8<---------------cut here---------------end--------------->8---
>
> Radosgw kept failing with the same error, and the ceph cluster kept
> logging warnings similar to the ones above. Looking at the pools created
> by radosgw, only the `.rgw.root` pool existed, and it contained no
> objects. No pools were manually assigned to rgw, so rgw should create
> them on first startup. Looking back at the beginning of radosgw's logs,
> during the first initialization cycle we see:
>
> --8<---------------cut here---------------start------------->8---
> 2015-05-06 04:35:43.355111 7f5214bab840  0 ceph version 0.94.1 (e4bfad3a3c51054df7e537a724c8d0bf9be972ff), process radosgw, pid 24495
> 2015-05-06 04:37:00.084655 7f5214bab840  0 couldn't find old data placement pools config, setting up new ones for the zone
> 2015-05-06 04:37:00.091554 7f5214bab840 -1 error storing zone params: (6) No such device or address
> 2015-05-06 04:37:00.095111 7f5214bab840 -1 Couldn't init storage provider (RADOS)
> --8<---------------cut here---------------end--------------->8---
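>
> For reference, the pool inspection mentioned above was roughly:
>
>   sudo ceph osd lspools          # of the rgw pools, only .rgw.root had been created
>   sudo rados -p .rgw.root ls     # and it contained no objects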
>
> At this point the ceph cluster still had only 2 osds; the ceph logs were
> as below:
>
> --8<---------------cut here---------------start------------->8---
> 2015-05-06 04:35:53.971872 mon.0 X.X.X.6:6789/0 145 : cluster [INF] osdmap e10: 2 osds: 1 up, 1 in
> 2015-05-06 04:35:54.231036 mon.0 X.X.X.6:6789/0 146 : cluster [INF] pgmap v23: 120 pgs: 120 undersized+degraded+peered; 0 bytes data, 4129 MB used, 4610 MB / 8740 MB avail
> 2015-05-06 04:35:57.844222 mon.0 X.X.X.6:6789/0 148 : cluster [INF] pgmap v24: 120 pgs: 120 undersized+degraded+peered; 0 bytes data, 4129 MB used, 4610 MB / 8740 MB avail
> 2015-05-06 04:36:14.237299 osd.0 X.X.X.10:6800/26741 47 : cluster [WRN] 6 slow requests, 1 included below; oldest blocked for > 1128.060992 secs
> 2015-05-06 04:36:14.237310 osd.0 X.X.X.10:6800/26741 48 : cluster [WRN] slow request 30.869139 seconds old, received at 2015-05-06 04:35:43.368075: osd_op(client.6515.0:1 default.region [getxattrs,stat] 4.9a566808 ack+read+known_if_redirected e9) currently reached_pg
> 2015-05-06 04:36:20.142985 mon.0 X.X.X.6:6789/0 165 : cluster [INF] osdmap e11: 2 osds: 1 up, 1 in
> 2015-05-06 04:36:20.249438 mon.0 X.X.X.6:6789/0 166 : cluster [INF] pgmap v25: 120 pgs: 120 undersized+degraded+peered; 0 bytes data, 4129 MB used, 4610 MB / 8740 MB avail
> 2015-05-06 04:36:21.437516 mon.0 X.X.X.6:6789/0 167 : cluster [INF] osd.1 X.X.X.6:6800/25630 boot
> 2015-05-06 04:36:21.465908 mon.0 X.X.X.6:6789/0 168 : cluster [INF] osdmap e12: 2 osds: 2 up, 2 in
> 2015-05-06 04:36:21.551237 mon.0 X.X.X.6:6789/0 169 : cluster [INF] pgmap v26: 120 pgs: 120 undersized+degraded+peered; 0 bytes data, 4129 MB used, 4610 MB / 8740 MB avail
> 2015-05-06 04:36:22.752684 mon.0 X.X.X.6:6789/0 170 : cluster [INF] osdmap e13: 2 osds: 2 up, 2 in
> 2015-05-06 04:36:22.845573 mon.0 X.X.X.6:6789/0 171 : cluster [INF] pgmap v27: 120 pgs: 120 undersized+degraded+peered; 0 bytes data, 4129 MB used, 4610 MB / 8740 MB avail
> 2015-05-06 04:36:15.237727 osd.0 X.X.X.10:6800/26741 49 : cluster [WRN] 7 slow requests, 1 included below; oldest blocked for > 1129.061337 secs
> 2015-05-06 04:36:15.237752 osd.0 X.X.X.10:6800/26741 50 : cluster [WRN] slow request 30.746163 seconds old, received at 2015-05-06 04:35:44.491396: osd_op(client.6524.0:1 default.region [getxattrs,stat] 4.9a566808 ack+read+known_if_redirected e9) currently reached_pg
> 2015-05-06 04:36:27.497253 mon.0 X.X.X.6:6789/0 174 : cluster [INF] pgmap v28: 120 pgs: 29 creating+activating+undersized+degraded, 33 creating+peering, 58 undersized+degraded+peered; 0 bytes data, 4129 MB used, 4610 MB / 8740 MB avail
> 2015-05-06 04:36:36.581344 mon.0 X.X.X.6:6789/0 175 : cluster [INF] pgmap v29: 120 pgs: 82 creating+activating+undersized+degraded, 33 creating+peering, 5 undersized+degraded+peered; 0 bytes data, 8258 MB used, 9221 MB / 17480 MB avail
> 2015-05-06 04:36:37.898349 mon.0 X.X.X.6:6789/0 176 : cluster [INF] pgmap v30: 120 pgs: 115 creating+activating+undersized+degraded, 5 undersized+degraded+peered; 0 bytes data, 8258 MB used, 9221 MB / 17480 MB avail
> 2015-05-06 04:36:42.080151 mon.0 X.X.X.6:6789/0 177 : cluster [INF] pgmap v31: 120 pgs: 120 creating+activating+undersized+degraded; 0 bytes data, 8258 MB used, 9221 MB / 17480 MB avail
> 2015-05-06 04:36:44.245929 osd.0 X.X.X.10:6800/26741 51 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 60.877785 secs
> 2015-05-06 04:36:44.245966 osd.0 X.X.X.10:6800/26741 52 : cluster [WRN] slow request 60.877785 seconds old, received at 2015-05-06 04:35:43.368075: osd_op(client.6515.0:1 default.region [getxattrs,stat] 4.9a566808 ack+read+known_if_redirected e9) currently reached_pg
> 2015-05-06 04:36:47.633263 mon.0 X.X.X.6:6789/0 178 : cluster [INF] pgmap v32: 120 pgs: 10 active+undersized+degraded, 110 creating+activating+undersized+degraded; 0 bytes data, 8259 MB used, 9220 MB / 17480 MB avail
> 2015-05-06 04:36:49.936579 mon.0 X.X.X.6:6789/0 179 : cluster [INF] pgmap v33: 120 pgs: 52 active+undersized+degraded, 68 creating+activating+undersized+degraded; 0 bytes data, 8259 MB used, 9220 MB / 17480 MB avail
> 2015-05-06 04:36:52.514385 mon.0 X.X.X.6:6789/0 180 : cluster [INF] pgmap v34: 120 pgs: 68 active+undersized+degraded, 52 creating+activating+undersized+degraded; 0 bytes data, 8259 MB used, 9220 MB / 17480 MB avail
> 2015-05-06 04:36:51.247764 osd.0 X.X.X.10:6800/26741 53 : cluster [WRN] 2 slow requests, 1 included below; oldest blocked for > 67.879621 secs
> 2015-05-06 04:36:51.247774 osd.0 X.X.X.10:6800/26741 54 : cluster [WRN] slow request 30.177047 seconds old, received at 2015-05-06 04:36:21.070649: osd_op(client.24148.0:1 default.region [getxattrs,stat] 4.9a566808 ack+read+known_if_redirected e11) currently reached_pg
> 2015-05-06 04:36:56.286598 mon.0 X.X.X.6:6789/0 181 : cluster [INF] HEALTH_WARN; 120 pgs degraded; 52 pgs stuck inactive; 120 pgs stuck unclean; 120 pgs undersized; 2 requests are blocked > 32 sec
> 2015-05-06 04:36:56.964127 mon.0 X.X.X.6:6789/0 182 : cluster [INF] pgmap v35: 120 pgs: 74 active+undersized+degraded, 46 creating+activating+undersized+degraded; 0 bytes data, 8259 MB used, 9220 MB / 17480 MB avail
> 2015-05-06 04:36:58.232024 mon.0 X.X.X.6:6789/0 183 : cluster [INF] pgmap v36: 120 pgs: 88 active+undersized+degraded, 32 creating+activating+undersized+degraded; 0 bytes data, 8259 MB used, 9220 MB / 17480 MB avail
> 2015-05-06 04:37:01.806001 mon.0 X.X.X.6:6789/0 184 : cluster [INF] pgmap v37: 120 pgs: 92 active+undersized+degraded, 28 creating+activating+undersized+degraded; 0 bytes data, 8259 MB used, 9220 MB / 17480 MB avail
> 2015-05-06 04:37:00.089515 osd.1 X.X.X.6:6800/25630 1 : cluster [WRN] client.24148 X.X.X.6:0/1025777 misdirected client.24148.0:2 pg 4.e01f9ae to osd.1 not [0,1] in e13/13
> 2015-05-06 04:37:00.091454 osd.1 X.X.X.6:6800/25630 2 : cluster [WRN] client.6515 X.X.X.10:0/1024498 misdirected client.6515.0:2 pg 4.e01f9ae to osd.1 not [0,1] in e13/13
> --8<---------------cut here---------------end--------------->8---
>
> Going by the logs, it looks like the client (rgw) is sending the request
> to an OSD that is not the primary of the PG's acting set, and this
> problem persisted even after the ceph cluster reached a healthy
> (active+clean) state.
>
> Ultimately, in this case the problem was solved by manually removing the
> `.rgw.root` pool and restarting radosgw, which created the pools again;
> after that everything worked as expected. We are seeing this issue
> intermittently on both firefly & hammer in our test scenarios, where
> radosgw is started as soon as ceph is deployed (without waiting for the
> cluster to become healthy). Any ideas on the cause & possible solutions
> to this problem?
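>
> For reference, the manual workaround above was roughly the following
> (the radosgw service name varies by distro/init system, so treat this as
> a sketch rather than exact commands):
>
>   sudo ceph osd pool delete .rgw.root .rgw.root --yes-i-really-really-mean-it
>   sudo service radosgw restart   # or the distro's radosgw/ceph-radosgw init script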

Seeing this in the firefly cluster as well. I tried a couple of rados
commands on the .rgw.root pool; this is what is happening:

abhi@st:~$ sudo rados -p .rgw.root put test.txt test.txt
error putting .rgw.root/test.txt: (6) No such device or address

abhi@st:~$ sudo ceph osd map .rgw.root test.txt
osdmap e83 pool '.rgw.root' (6) object 'test.txt' -> pg 6.8b0b6108 (6.0) -> up ([1,2,0], p1) acting ([1,2,0], p1)

abhi@st:~$ sudo ceph pg map 6.8b0b6108
osdmap e83 pg 6.8b0b6108 (6.0) -> up [0,2,1] acting [0,2,1]

Looks like `ceph osd map` says the object's PG should have osd.1 as
primary, whereas `ceph pg map` says the same PG is hosted with osd.0 as
primary.
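
Since 6.8b0b6108 masks down to pg 6.0 (both outputs show it), the PG's
own record can be cross-checked as well; a rough way to do that (the
grep patterns are only filters on the output):

  sudo ceph pg 6.0 query | grep -E '"(epoch|up|acting)"'   # what the acting primary reports
  sudo ceph pg dump pgs_brief | grep '^6\.'                # up/acting for every PG in pool 6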

# osd.0.log
---------
2015-05-13 14:10:41.536376 7fc68905b700 10 osd.0 83  new session 0x7fc6a7f3a5a0 con=0x7fc6ab1df860 addr=10.0.0.85:0/1011115
2015-05-13 14:10:41.536439 7fc68905b700 10 osd.0 83  session 0x7fc6a7f3a5a0 client.admin has caps osdcap[grant(*)] 'allow *'
2015-05-13 14:10:41.536841 7fc69667b700 10 osd.0 83 do_waiters -- start
2015-05-13 14:10:41.536849 7fc69667b700 10 osd.0 83 do_waiters -- finish
2015-05-13 14:10:41.536851 7fc69667b700 20 osd.0 83 _dispatch 0x7fc6aaa78640 osd_op(client.91462.0:1 test.txt [writefull 0~12] 6.8b0b6108 ondisk+write e83) v4
2015-05-13 14:10:41.536883 7fc69667b700 15 osd.0 83 require_same_or_newer_map 83 (i am 83) 0x7fc6aaa78640
2015-05-13 14:10:41.536887 7fc69667b700 20 osd.0 83 _share_map_incoming client.91462 10.0.0.85:0/1011115 83
2015-05-13 14:10:41.536908 7fc69667b700 15 osd.0 83 enqueue_op 0x7fc6a7e50580 prio 63 cost 12 latency 0.000167 osd_op(client.91462.0:1 test.txt [writefull 0~12] 6.8b0b6108 ondisk+write e83) v4
2015-05-13 14:10:41.536927 7fc69667b700 10 osd.0 83 do_waiters -- start
2015-05-13 14:10:41.536933 7fc69667b700 10 osd.0 83 do_waiters -- finish
2015-05-13 14:10:41.536951 7fc68fe6e700 10 osd.0 83 dequeue_op 0x7fc6a7e50580 prio 63 cost 12 latency 0.000210 osd_op(client.91462.0:1 test.txt [writefull 0~12] 6.8b0b6108 ondisk+write e83) v4 pg pg[6.0( v 79'1 (0'0,79'1] local-les=83 n=1 ec=5 les/c 83/83 82/82/82) [1,2,0] r=2 lpr=82 pi=5-81/34 luod=0'0 crt=0'0 lcod 0'0 active]
2015-05-13 14:10:41.536989 7fc68fe6e700 20 osd.0 pg_epoch: 83 pg[6.0( v 79'1 (0'0,79'1] local-les=83 n=1 ec=5 les/c 83/83 82/82/82) [1,2,0] r=2 lpr=82 pi=5-81/34 luod=0'0 crt=0'0 lcod 0'0 active] op_has_sufficient_caps pool=6 (.rgw.root ) owner=18446744073709551615 need_read_cap=0 need_write_cap=1 need_class_read_cap=0 need_class_write_cap=0 -> yes
2015-05-13 14:10:41.537006 7fc68fe6e700  7 osd.0 83 pg[6.0( v 79'1 (0'0,79'1] local-les=83 n=1 ec=5 les/c 83/83 82/82/82) [1,2,0] r=2 lpr=82 pi=5-81/34 luod=0'0 crt=0'0 lcod 0'0 active] misdirected op in 83
2015-05-13 14:10:41.537024 7fc68fe6e700  0 log [WRN] : client.91462 10.0.0.85:0/1011115 misdirected client.91462.0:1 pg 6.8b0b6108 to osd.0 not [1,2,0] in e83/83
2015-05-13 14:10:41.537056 7fc68fe6e700 10 osd.0 83 dequeue_op 0x7fc6a7e50580 finish
2015-05-13 14:10:41.538077 7fc69667b700  1 osd.0 83 ms_handle_reset con 0x7fc6ab1df860 session 0x7fc6a7f3a5a0
2015-05-13 14:10:41.577206 7fc69eda4700  5 osd.0 83 tick


Writes to the same pool with some other object names do seem to succeed,
though. What could be the possible cause & solution for this scenario?
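
For the deployment side, one obvious thing we can try is to not start
radosgw until the cluster reports healthy, e.g. something along these
lines in the test scripts (only a sketch, and it only works around the
trigger rather than explaining the misdirected ops):

  until sudo ceph health | grep -q HEALTH_OK; do sleep 10; done
  sudo service radosgw start   # or the distro's radosgw/ceph-radosgw init script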

Regards
Abhishek
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



