On Thu, Mar 21, 2019 at 3:20 PM Glen Baars <glen@xxxxxxxxxxxxxxxxxxxxxx> wrote:
>
> Thanks for that - we seem to be experiencing the wait in this section of the ops.
>
>         {
>             "time": "2019-03-21 14:12:42.830191",
>             "event": "sub_op_committed"
>         },
>         {
>             "time": "2019-03-21 14:12:43.699872",
>             "event": "commit_sent"
>         },
>
> Does anyone know what that section is waiting for?

Hi Glen,

These are documented, to some extent, here:
http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/

It looks like it may be taking a long time to communicate the commit
message back to the client. Are these slow ops always from the same
client?

>
> Kind regards,
> Glen Baars
>
> -----Original Message-----
> From: Brad Hubbard <bhubbard@xxxxxxxxxx>
> Sent: Thursday, 21 March 2019 8:23 AM
> To: Glen Baars <glen@xxxxxxxxxxxxxxxxxxxxxx>
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re: Slow OPS
>
> On Thu, Mar 21, 2019 at 12:11 AM Glen Baars <glen@xxxxxxxxxxxxxxxxxxxxxx> wrote:
> >
> > Hello Ceph Users,
> >
> > Does anyone know what the flag point ‘Started’ is? Is that the ceph
> > osd daemon waiting on the disk subsystem?
>
> This is set by "mark_started()" and is roughly set when the pg starts
> processing the op. You might want to capture dump_historic_ops output
> after the op completes.
>
> > Ceph 13.2.4 on CentOS 7.5
> >
> >     "description": "osd_op(client.1411875.0:422573570 5.18ds0 5:b1ed18e5:::rbd_data.6.cf7f46b8b4567.000000000046e41a:head [read 1703936~16384] snapc 0=[] ondisk+read+known_if_redirected e30622)",
> >     "initiated_at": "2019-03-21 01:04:40.598438",
> >     "age": 11.340626,
> >     "duration": 11.342846,
> >     "type_data": {
> >         "flag_point": "started",
> >         "client_info": {
> >             "client": "client.1411875",
> >             "client_addr": "10.4.37.45:0/627562602",
> >             "tid": 422573570
> >         },
> >         "events": [
> >             {
> >                 "time": "2019-03-21 01:04:40.598438",
> >                 "event": "initiated"
> >             },
> >             {
> >                 "time": "2019-03-21 01:04:40.598438",
> >                 "event": "header_read"
> >             },
> >             {
> >                 "time": "2019-03-21 01:04:40.598439",
> >                 "event": "throttled"
> >             },
> >             {
> >                 "time": "2019-03-21 01:04:40.598450",
> >                 "event": "all_read"
> >             },
> >             {
> >                 "time": "2019-03-21 01:04:40.598499",
> >                 "event": "dispatched"
> >             },
> >             {
> >                 "time": "2019-03-21 01:04:40.598504",
> >                 "event": "queued_for_pg"
> >             },
> >             {
> >                 "time": "2019-03-21 01:04:40.598883",
> >                 "event": "reached_pg"
> >             },
> >             {
> >                 "time": "2019-03-21 01:04:40.598905",
> >                 "event": "started"
> >             }
> >         ]
> >     }
> > }
> > ],
> >
> > Glen
>
> --
> Cheers,
> Brad
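PS: a rough sketch of how you could capture and slice that output, in
case it helps. osd.12 is a placeholder id, jq is assumed to be
installed, and the filters assume the JSON layout shown in the dump
above:

  # Dump recently completed ops via the OSD's admin socket (run this
  # on the host carrying the OSD, after the slow op completes):
  ceph daemon osd.12 dump_historic_ops > historic_ops.json

  # Show just the per-op event timeline, to see where the time goes:
  jq '.ops[] | {description, duration, events: .type_data.events}' historic_ops.json

  # Count ops per client, to check whether it is always the same client:
  jq -r '.ops[].type_data.client_info.client' historic_ops.json | sort | uniq -c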
-- 
Cheers,
Brad
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com