Ceph 9.2.1 (Infernalis), CentOS 7.2
I'm occasionally seeing these errors when removing objects: the OSD returns 'No such file or directory' (-2) on the delete, even though the S3 request itself succeeds with a 204. Any ideas here? Is this expected?
(I anonymized the full object name, but it's the same object in every line below.)
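For reference, the deletes come from boto 2 (matching the Boto/2.38.0 user agent in the RGW log below). A minimal sketch of what the client does; the endpoint, credentials, and bucket name here are placeholders, not the real ones:

import boto
import boto.s3.connection

# Connect to the RGW S3 endpoint (host and credentials are placeholders).
conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
    host='rgw.example.com',
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

# Issue the DELETE; RGW answers 204 even when the OSD logs -2 underneath.
bucket = conn.get_bucket('mybucket')
bucket.delete_key('fb66a4923b2029a6588adb1245fa3fe9')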
RGW log:
2016-05-04 23:14:32.216324 7f92b7741700 1 -- 10.29.16.57:0/2874775405 <== osd.11 10.30.1.42:6808/7454 45 ==== osd_op_reply(476 default.42048218.15_ ... fb66a4923b2029a6588adb1245fa3fe9 [call] v0'0 uv551321 _ondisk_ = -2 ((2) No such file or directory)) v6 ==== 349+0+0 (2101432025 0 0) 0x7f93b403aca0 con 0x7f946001b3d0
2016-05-04 23:14:32.216587 7f931b7b6700 1 -- 10.29.16.57:0/2874775405 --> 10.30.1.42:6808/7454 -- osd_op(client.45297956.0:477 .dir.default.42048218.15.12 [call rgw.bucket_complete_op] 11.74c941dd ack+ondisk+write+known_if_redirected e104420) v6 -- ?+0 0x7f95100fcb40 con 0x7f946001b3d0
2016-05-04 23:14:32.216807 7f931b7b6700 2 req 4238:22.224049:s3:DELETE fb66a4923b2029a6588adb1245fa3fe9:delete_obj:http status=204
2016-05-04 23:14:32.216826 7f931b7b6700 1 ====== req done req=0x7f9510091e50 http_status=204 ======
2016-05-04 23:14:32.216920 7f931b7b6700 1 civetweb: 0x7f95100008c0: 10.29.16.57 - - [04/May/2016:23:14:09 -0700] "DELETE fb66a4923b2029a6588adb1245fa3fe9 HTTP/1.1" 204 0 - Boto/2.38.0 Python/2.7.5 Linux/3.10.0-327.10.1.el7.x86_64
Log on the corresponding OSD, with debug ms = 10:
2016-05-04 23:14:31.716246 7fccbec2a700 0 <cls> cls/rgw/cls_rgw.cc:1959: ERROR: rgw_obj_remove(): cls_cxx_remove returned -2
2016-05-04 23:14:31.716379 7fccbec2a700 1 -- 10.30.1.42:6808/7454 --> 10.29.16.57:0/939886467 -- osd_op_reply(525 default.42048218.15_ ... fb66a4923b2029a6588adb1245fa3fe9 [call rgw.obj_remove] v0'0 uv551321 _ondisk_ = -2 ((2) No such file or directory)) v6 -- ?+0 0x7fcd05f0d600 con 0x7fcd01865fa0
2016-05-04 23:14:31.716563 7fcc59cb0700 10 -- 10.30.1.42:6808/7454 >> 10.29.16.57:0/939886467 pipe(0x7fcd0e29f000 sd=527 :6808 s=2 pgs=16 cs=1 l=1 c=0x7fcd01865fa0).writer: state = open policy.server=1
2016-05-04 23:14:31.716646 7fcc59cb0700 10 -- 10.30.1.42:6808/7454 >> 10.29.16.57:0/939886467 pipe(0x7fcd0e29f000 sd=527 :6808 s=2 pgs=16 cs=1 l=1 c=0x7fcd01865fa0).writer: state = open policy.server=1
2016-05-04 23:14:31.716983 7fcc76585700 10 -- 10.30.1.42:6808/7454 >> 10.29.16.57:0/3924513385 pipe(0x7fcd0c87a000 sd=542 :6808 s=2 pgs=10 cs=1 l=1 c=0x7fcced99f860).reader wants 456 bytes from policy throttler 19523/524288000
2016-05-04 23:14:31.717006 7fcc76585700 10 -- 10.30.1.42:6808/7454 >> 10.29.16.57:0/3924513385 pipe(0x7fcd0c87a000 sd=542 :6808 s=2 pgs=10 cs=1 l=1 c=0x7fcced99f860).reader wants 456 from dispatch throttler 0/104857600
2016-05-04 23:14:31.717029 7fcc76585700 10 -- 10.30.1.42:6808/7454 >> 10.29.16.57:0/3924513385 pipe(0x7fcd0c87a000 sd=542 :6808 s=2 pgs=10 cs=1 l=1 c=0x7fcced99f860).aborted = 0
2016-05-04 23:14:31.717056 7fcc76585700 10 -- 10.30.1.42:6808/7454 >> 10.29.16.57:0/3924513385 pipe(0x7fcd0c87a000 sd=542 :6808 s=2 pgs=10 cs=1 l=1 c=0x7fcced99f860).reader got message 111 0x7fcd120c42c0 osd_op(client.45297946.0:411 .dir.default.42048218.15.15 [call rgw.bucket_prepare_op] 11.c01f555d ondisk+write+known_if_redirected e104420) v6
2016-05-04 23:14:31.717077 7fcc76585700 1 -- 10.30.1.42:6808/7454 <== client.45297946 10.29.16.57:0/3924513385 111 ==== osd_op(client.45297946.0:411 .dir.default.42048218.15.15 [call rgw.bucket_prepare_op] 11.c01f555d ondisk+write+known_if_redirected e104420) v6 ==== 213+0+243 (3423964475 0 1018669967) 0x7fcd120c42c0 con 0x7fcced99f860
2016-05-04 23:14:31.717081 7fcc74538700 10 -- 10.30.1.42:6808/7454 >> 10.29.16.57:0/3924513385 pipe(0x7fcd0c87a000 sd=542 :6808 s=2 pgs=10 cs=1 l=1 c=0x7fcced99f860).writer: state = open policy.server=1
2016-05-04 23:14:31.717100 7fcc74538700 10 -- 10.30.1.42:6808/7454 >> 10.29.16.57:0/3924513385 pipe(0x7fcd0c87a000 sd=542 :6808 s=2 pgs=10 cs=1 l=1 c=0x7fcced99f860).write_ack 111
--
2016-05-04 23:14:32.202608 7fccb49ff700 10 -- 10.30.1.42:6809/7454 dispatch_throttle_release 83 to dispatch throttler 83/104857600
2016-05-04 23:14:32.203922 7fccb10c6700 10 -- 10.30.1.42:6809/7454 >> 10.30.1.124:6813/4012396 pipe(0x7fcd04860000 sd=199 :6809 s=2 pgs=46808 cs=1 l=0 c=0x7fcd047679c0).reader got ack seq 1220 >= 1220 on 0x7fcd053a5e00 osd_repop(client.45297861.0:514 11.5d 11/c01f555d/.dir.default.42048218.15.15/head v 104420'1406810) v1
2016-05-04 23:14:32.204040 7fccb10c6700 10 -- 10.30.1.42:6809/7454 >> 10.30.1.124:6813/4012396 pipe(0x7fcd04860000 sd=199 :6809 s=2 pgs=46808 cs=1 l=0 c=0x7fcd047679c0).reader wants 83 from dispatch throttler 0/104857600
2016-05-04 23:14:32.204084 7fccb10c6700 10 -- 10.30.1.42:6809/7454 >> 10.30.1.124:6813/4012396 pipe(0x7fcd04860000 sd=199 :6809 s=2 pgs=46808 cs=1 l=0 c=0x7fcd047679c0).aborted = 0
2016-05-04 23:14:32.204103 7fccb10c6700 10 -- 10.30.1.42:6809/7454 >> 10.30.1.124:6813/4012396 pipe(0x7fcd04860000 sd=199 :6809 s=2 pgs=46808 cs=1 l=0 c=0x7fcd047679c0).reader got message 1236 0x7fcd05d5f440 osd_repop_reply(client.45297861.0:514 11.5d ondisk, result = 0) v1
2016-05-04 23:14:32.204127 7fccaeda3700 10 -- 10.30.1.42:6809/7454 >> 10.30.1.124:6813/4012396 pipe(0x7fcd04860000 sd=199 :6809 s=2 pgs=46808 cs=1 l=0 c=0x7fcd047679c0).writer: state = open policy.server=0
2016-05-04 23:14:32.204146 7fccaeda3700 10 -- 10.30.1.42:6809/7454 >> 10.30.1.124:6813/4012396 pipe(0x7fcd04860000 sd=199 :6809 s=2 pgs=46808 cs=1 l=0 c=0x7fcd047679c0).write_ack 1236
2016-05-04 23:14:32.204123 7fccb10c6700 1 -- 10.30.1.42:6809/7454 <== osd.107 10.30.1.124:6813/4012396 1236 ==== osd_repop_reply(client.45297861.0:514 11.5d ondisk, result = 0) v1 ==== 83+0+0 (3190154705 0 0) 0x7fcd05d5f440 con 0x7fcd047679c0
2016-05-04 23:14:32.204161 7fccaeda3700 10 -- 10.30.1.42:6809/7454 >> 10.30.1.124:6813/4012396 pipe(0x7fcd04860000 sd=199 :6809 s=2 pgs=46808 cs=1 l=0 c=0x7fcd047679c0).writer: state = open policy.server=0
2016-05-04 23:14:32.204513 7fccb10c6700 10 -- 10.30.1.42:6809/7454 dispatch_throttle_release 83 to dispatch throttler 83/104857600
2016-05-04 23:14:32.216214 7fccbec2a700 0 <cls> cls/rgw/cls_rgw.cc:1959: ERROR: rgw_obj_remove(): cls_cxx_remove returned -2
2016-05-04 23:14:32.216299 7fccbec2a700 1 -- 10.30.1.42:6808/7454 --> 10.29.16.57:0/2874775405 -- osd_op_reply(476 default.42048218.15_... [call rgw.obj_remove] v0'0 uv551321 _ondisk_ = -2 ((2) No such file or directory)) v6 -- ?+0 0x7fcd0af6edc0 con 0x7fcd018679c0
2016-05-04 23:14:32.216507 7fcc652f1700 10 -- 10.30.1.42:6808/7454 >> 10.29.16.57:0/2874775405 pipe(0x7fcd10596000 sd=539 :6808 s=2 pgs=3 cs=1 l=1 c=0x7fcd018679c0).writer: state = open policy.server=1
2016-05-04 23:14:32.216584 7fcc652f1700 10 -- 10.30.1.42:6808/7454 >> 10.29.16.57:0/2874775405 pipe(0x7fcd10596000 sd=539 :6808 s=2 pgs=3 cs=1 l=1 c=0x7fcd018679c0).writer: state = open policy.server=1
2016-05-04 23:14:32.216648 7fcc60134700 10 -- 10.30.1.42:6808/7454 >> 10.29.16.57:0/2143126359 pipe(0x7fcd0e189000 sd=507 :6808 s=2 pgs=6 cs=1 l=1 c=0x7fcd04072260).aborted = 0
2016-05-04 23:14:32.216669 7fcc60134700 10 -- 10.30.1.42:6808/7454 >> 10.29.16.57:0/2143126359 pipe(0x7fcd0e189000 sd=507 :6808 s=2 pgs=6 cs=1 l=1 c=0x7fcd04072260).reader got message 115 0x7fcd06021880 ping magic: 0 v1
2016-05-04 23:14:32.216692 7fcc60134700 10 -- 10.30.1.42:6808/7454 >> 10.29.16.57:0/2143126359 pipe(0x7fcd0e189000 sd=507 :6808 s=2 pgs=6 cs=1 l=1 c=0x7fcd04072260).reader wants 1 message from policy throttler 100/100
2016-05-04 23:14:32.216693 7fcc60033700 10 -- 10.30.1.42:6808/7454 >> 10.29.16.57:0/2143126359 pipe(0x7fcd0e189000 sd=507 :6808 s=2 pgs=6 cs=1 l=1 c=0x7fcd04072260).writer: state = open policy.server=1
2016-05-04 23:14:32.216707 7fcc60033700 10 -- 10.30.1.42:6808/7454 >> 10.29.16.57:0/2143126359 pipe(0x7fcd0e189000 sd=507 :6808 s=2 pgs=6 cs=1 l=1 c=0x7fcd04072260).write_ack 115
2016-05-04 23:14:32.216701 7fcccec4a700 1 -- 10.30.1.42:6808/7454 <== client.45297826 10.29.16.57:0/2143126359 115 ==== ping magic: 0 v1 ==== 0+0+0 (0 0 0) 0x7fcd06021880 con 0x7fcd04072260
2016-05-04 23:14:32.216719 7fcc60033700 10 -- 10.30.1.42:6808/7454 >> 10.29.16.57:0/2143126359 pipe(0x7fcd0e189000 sd=507 :6808 s=2 pgs=6 cs=1 l=1 c=0x7fcd04072260).writer: state = open policy.server=1
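If it helps, I can check whether the head object still exists in RADOS with python-rados. A sketch of that check, assuming the default RGW data pool name; both the pool and the object name below are placeholders for the anonymized ones above:

import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# '.rgw.buckets' is an assumption (the default RGW data pool); the object
# name is a placeholder for the anonymized one in the logs.
ioctx = cluster.open_ioctx('.rgw.buckets')
try:
    size, mtime = ioctx.stat('default.42048218.15_OBJECTNAME')
    print('object still exists: %d bytes' % size)
except rados.ObjectNotFound:
    # This corresponds to the -2 (ENOENT) that rgw_obj_remove() reports.
    print('object already gone (ENOENT)')
finally:
    ioctx.close()
    cluster.shutdown()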