radosgw flush_read_list(): d->client_c->handle_data() returned -5


 



Any idea what is going on here? I get these intermittently, especially with very large files.

The client is doing Range requests against this ~50 GB file (the manifest below reports total_size = 50346000384), incrementally fetching later chunks.
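For context, this is the kind of ranged GET the client is issuing — a minimal sketch in Python, with a hypothetical endpoint (the real host, bucket, and object name are redacted above):

```python
import urllib.request


def range_header(start, length):
    """Build the Range header value for bytes [start, start+length)."""
    return "bytes=%d-%d" % (start, start + length - 1)


def fetch_chunk(url, start, length):
    """Fetch one chunk of an object via an HTTP Range request.

    A server that honors the range replies 206 Partial Content with
    exactly `length` bytes of the body.
    """
    req = urllib.request.Request(url)
    req.add_header("Range", range_header(start, length))
    with urllib.request.urlopen(req) as resp:
        return resp.read()


# e.g. fetch_chunk("http://rgw.example:8080/bucket/key.pkg", 130023424, 4194304)
```

The error in the log fires while radosgw is streaming one of these chunks back to the client.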

2016-02-24 16:30:59.669561 7fd33b7fe700  1 ====== starting new request req=0x7fd32c0879c0 =====
2016-02-24 16:30:59.669675 7fd33b7fe700  2 req 3648804:0.000114::GET /<redacted>/int8-0.181.4-1654016.2016-02-23_03-53-42.pkg::initializing for trans_id = tx00000000000000037ad24-0056ce4b43-259914b-default
2016-02-24 16:30:59.669687 7fd33b7fe700 10 host=<redacted>
2016-02-24 16:30:59.669757 7fd33b7fe700 10 s->object=<redacted>/int8-0.181.4-1654016.2016-02-23_03-53-42.pkg s->bucket=<redacted>
2016-02-24 16:30:59.669767 7fd33b7fe700  2 req 3648804:0.000206:s3:GET /<redacted>/int8-0.181.4-1654016.2016-02-23_03-53-42.pkg::getting op
2016-02-24 16:30:59.669776 7fd33b7fe700  2 req 3648804:0.000215:s3:GET /<redacted>/int8-0.181.4-1654016.2016-02-23_03-53-42.pkg:get_obj:authorizing
2016-02-24 16:30:59.669785 7fd33b7fe700  2 req 3648804:0.000224:s3:GET /<redacted>/int8-0.181.4-1654016.2016-02-23_03-53-42.pkg:get_obj:reading permissions
2016-02-24 16:30:59.673797 7fd33b7fe700 10 manifest: total_size = 50346000384
2016-02-24 16:30:59.673841 7fd33b7fe700  2 req 3648804:0.004280:s3:GET /<redacted>/int8-0.181.4-1654016.2016-02-23_03-53-42.pkg:get_obj:init op
2016-02-24 16:30:59.673867 7fd33b7fe700 10 cache get: name=.users.uid+<redacted> : hit
2016-02-24 16:30:59.673881 7fd33b7fe700 10 cache get: name=.users.uid+<redacted> : hit
2016-02-24 16:30:59.673921 7fd33b7fe700  2 req 3648804:0.004360:s3:GET /<redacted>/int8-0.181.4-1654016.2016-02-23_03-53-42.pkg:get_obj:verifying op mask
2016-02-24 16:30:59.673929 7fd33b7fe700  2 req 3648804:0.004369:s3:GET /<redacted>/int8-0.181.4-1654016.2016-02-23_03-53-42.pkg:get_obj:verifying op permissions
2016-02-24 16:30:59.673941 7fd33b7fe700  5 Searching permissions for uid=anonymous mask=49
2016-02-24 16:30:59.673944 7fd33b7fe700  5 Permissions for user not found
2016-02-24 16:30:59.673946 7fd33b7fe700  5 Searching permissions for group=1 mask=49
2016-02-24 16:30:59.673949 7fd33b7fe700  5 Found permission: 1
2016-02-24 16:30:59.673951 7fd33b7fe700  5 Searching permissions for group=2 mask=49
2016-02-24 16:30:59.673953 7fd33b7fe700  5 Permissions for group not found
2016-02-24 16:30:59.673955 7fd33b7fe700  5 Getting permissions id=anonymous owner=<redacted> perm=1
2016-02-24 16:30:59.673957 7fd33b7fe700 10  uid=anonymous requested perm (type)=1, policy perm=1, user_perm_mask=15, acl perm=1
2016-02-24 16:30:59.673961 7fd33b7fe700  2 req 3648804:0.004400:s3:GET /<redacted>/int8-0.181.4-1654016.2016-02-23_03-53-42.pkg:get_obj:verifying op params
2016-02-24 16:30:59.673965 7fd33b7fe700  2 req 3648804:0.004404:s3:GET /<redacted>/int8-0.181.4-1654016.2016-02-23_03-53-42.pkg:get_obj:executing
2016-02-24 16:30:59.674107 7fd33b7fe700  0 RGWObjManifest::operator++(): result: ofs=130023424 stripe_ofs=130023424 part_ofs=104857600 rule->part_size=52428800
2016-02-24 16:30:59.674193 7fd33b7fe700  0 RGWObjManifest::operator++(): result: ofs=134217728 stripe_ofs=134217728 part_ofs=104857600 rule->part_size=52428800
2016-02-24 16:30:59.674317 7fd33b7fe700  0 RGWObjManifest::operator++(): result: ofs=138412032 stripe_ofs=138412032 part_ofs=104857600 rule->part_size=52428800
2016-02-24 16:30:59.674433 7fd33b7fe700  0 RGWObjManifest::operator++(): result: ofs=142606336 stripe_ofs=142606336 part_ofs=104857600 rule->part_size=52428800
2016-02-24 16:31:00.046110 7fd33b7fe700  0 RGWObjManifest::operator++(): result: ofs=146800640 stripe_ofs=146800640 part_ofs=104857600 rule->part_size=52428800
2016-02-24 16:31:00.150966 7fd33b7fe700  0 RGWObjManifest::operator++(): result: ofs=150994944 stripe_ofs=150994944 part_ofs=104857600 rule->part_size=52428800
2016-02-24 16:31:00.151118 7fd33b7fe700  0 RGWObjManifest::operator++(): result: ofs=155189248 stripe_ofs=155189248 part_ofs=104857600 rule->part_size=52428800
2016-02-24 16:31:00.161000 7fd33b7fe700  0 RGWObjManifest::operator++(): result: ofs=157286400 stripe_ofs=157286400 part_ofs=157286400 rule->part_size=52428800
2016-02-24 16:31:00.199553 7fd33b7fe700  0 RGWObjManifest::operator++(): result: ofs=161480704 stripe_ofs=161480704 part_ofs=157286400 rule->part_size=52428800
2016-02-24 16:31:00.278308 7fd33b7fe700  0 RGWObjManifest::operator++(): result: ofs=165675008 stripe_ofs=165675008 part_ofs=157286400 rule->part_size=52428800
2016-02-24 16:31:00.312306 7fd33b7fe700  0 RGWObjManifest::operator++(): result: ofs=169869312 stripe_ofs=169869312 part_ofs=157286400 rule->part_size=52428800
2016-02-24 16:31:00.751626 7fd33b7fe700  0 RGWObjManifest::operator++(): result: ofs=174063616 stripe_ofs=174063616 part_ofs=157286400 rule->part_size=52428800
2016-02-24 16:31:00.833570 7fd33b7fe700  0 RGWObjManifest::operator++(): result: ofs=178257920 stripe_ofs=178257920 part_ofs=157286400 rule->part_size=52428800
2016-02-24 16:31:00.871774 7fd33b7fe700  0 ERROR: flush_read_list(): d->client_c->handle_data() returned -5
2016-02-24 16:31:00.872480 7fd33b7fe700  0 WARNING: set_req_state_err err_no=5 resorting to 500
2016-02-24 16:31:00.872561 7fd33b7fe700  0 ERROR: s->cio->print() returned err=-5
2016-02-24 16:31:00.872567 7fd33b7fe700  0 ERROR: s->cio->print() returned err=-5
2016-02-24 16:31:00.872571 7fd33b7fe700  0 ERROR: s->cio->print() returned err=-5
2016-02-24 16:31:00.872592 7fd33b7fe700  0 ERROR: s->cio->complete_header() returned err=-5
2016-02-24 16:31:00.872616 7fd33b7fe700  2 req 3648804:1.203055:s3:GET /<redacted>/int8-0.181.4-1654016.2016-02-23_03-53-42.pkg:get_obj:http status=500
2016-02-24 16:31:00.872629 7fd33b7fe700  1 ====== req done req=0x7fd32c0879c0 http_status=500 ======
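For what it's worth, the -5 in those errors is -EIO ("Input/output error"), which the frontend commonly returns from handle_data() when it can no longer write to the client socket (e.g. the client dropped the connection mid-transfer). The manifest offsets are also consistent with 4 MiB stripes inside 50 MiB multipart parts (rule->part_size=52428800). A quick sanity check against the values in the log:

```python
import errno

# -5 from handle_data() is -EIO ("Input/output error").
assert errno.EIO == 5

STRIPE_SIZE = 4 * 1024 * 1024   # 4194304 -- the step between successive ofs values
PART_SIZE = 52428800            # 50 MiB, rule->part_size from the log


def part_offset(ofs):
    """Start offset of the multipart part containing byte `ofs`."""
    return (ofs // PART_SIZE) * PART_SIZE


# Values taken from the RGWObjManifest lines above:
assert part_offset(130023424) == 104857600   # still in part 2
assert part_offset(155189248) == 104857600   # last stripe of part 2
assert part_offset(157286400) == 157286400   # part 3 starts exactly here
```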

Relevant ceph.conf settings:

[client.sm-cephrgw5]
rgw enable ops log = False
rgw frontends = civetweb port=8080 num_threads=75 error_log_file=/var/log/ceph/civetweb.error.log access_log_file=/var/log/ceph/civetweb.access.log
rgw num rados handles = 10
rgw cache lru size = 30000
debug civetweb = 10
rgw override bucket index max shards = 23
debug rgw = 10

Ceph 9.2.0, cluster is HEALTH_OK

-Ben
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
