I'm mostly using Ceph as storage for my VMs in Proxmox. I have radosgw, but only for tests; it doesn't seem to be the cause of the problem.
I've tuned the following parameters, which should prioritize client requests over recovery traffic, but I still receive the warnings:
osd_client_op_priority = 63
osd_recovery_op_priority = 1
osd_max_backfills = 1
osd_recovery_max_active = 1
osd_recovery_threads = 1
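For reference, these are set in the [osd] section of ceph.conf; a sketch of the fragment (an OSD restart, or injectargs, is needed for them to take effect):

```ini
[osd]
# favor client I/O over recovery/backfill traffic
osd_client_op_priority = 63
osd_recovery_op_priority = 1
# throttle concurrent backfill/recovery work per OSD
osd_max_backfills = 1
osd_recovery_max_active = 1
osd_recovery_threads = 1
```

The same values can also be applied at runtime, e.g. `ceph tell osd.* injectargs '--osd_max_backfills 1'`, though injected values are lost when the OSD restarts.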
Best regards,
Olga Ukhina
Mobile: 8(905)-566-46-62
2017-10-19 11:06 GMT+03:00 Sean Purdy <s.purdy@xxxxxxxxxxxxxxxx>:
Are you using radosgw? I found this page useful when I had a similar issue:
http://www.osris.org/performance/rgw.html
Sean
On Wed, 18 Oct 2017, Ольга Ухина said:
> Hi!
>
> I have a problem with Ceph Luminous 12.2.1. It was upgraded from Kraken,
> but I'm not sure whether the problem already existed in Kraken.
> I see slow requests on different OSDs at random times (for example at
> night). I don't see any problems with disks or CPU when it happens,
> though a network problem at night is possible. During the daytime the
> problem does not occur.
> Almost all requests are blocked for nearly 30 seconds, so I receive warnings like this:
>
> 2017-10-18 01:20:26.147758 mon.st3 mon.0 10.192.1.78:6789/0 22686 : cluster
> [WRN] Health check failed: 1 slow requests are blocked > 32 sec
> (REQUEST_SLOW)
> 2017-10-18 01:20:28.025315 mon.st3 mon.0 10.192.1.78:6789/0 22687 : cluster
> [WRN] overall HEALTH_WARN 1 slow requests are blocked > 32 sec
> 2017-10-18 01:20:32.166758 mon.st3 mon.0 10.192.1.78:6789/0 22688 : cluster
> [WRN] Health check update: 38 slow requests are blocked > 32 sec
> (REQUEST_SLOW)
> 2017-10-18 01:20:38.187326 mon.st3 mon.0 10.192.1.78:6789/0 22689 : cluster
> [WRN] Health check update: 49 slow requests are blocked > 32 sec
> (REQUEST_SLOW)
> 2017-10-18 01:20:38.727421 osd.23 osd.23 10.192.1.158:6840/3659 1758 :
> cluster [WRN] 27 slow requests, 5 included below; oldest blocked for >
> 30.839843 secs
> 2017-10-18 01:20:38.727425 osd.23 osd.23 10.192.1.158:6840/3659 1759 :
> cluster [WRN] slow request 30.814060 seconds old, received at 2017-10-18
> 01:20:07.913300: osd_op(client.12464272.1:56610561 31.410dd555
> 31:aaabb082:::rbd_data.7b3e22ae8944a.0000000000012e2c:head
> [set-alloc-hint object_size 4194304 write_size 4194304,write 2977792~4096]
> snapc 0=[] ondisk+write e10926) currently sub_op_commit_rec from 39
> 2017-10-18 01:20:38.727431 osd.23 osd.23 10.192.1.158:6840/3659 1760 :
> cluster [WRN] slow request 30.086589 seconds old, received at 2017-10-18
> 01:20:08.640771: osd_repop(client.12464806.1:17326170 34.242
> e10926/10860 34:426def95:::rbd_data.acdc9238e1f29.0000000000001231:head v
> 10926'4976910) currently write_thread_in_journal_buffer
> 2017-10-18 01:20:38.727433 osd.23 osd.23 10.192.1.158:6840/3659 1761 :
> cluster [WRN] slow request 30.812569 seconds old, received at 2017-10-18
> 01:20:07.914791: osd_repop(client.12464272.1:56610570 31.1eb
> e10926/10848 31:d797c167:::rbd_data.7b3e22ae8944a.0000000000013828:head v
> 10926'135331) currently write_thread_in_journal_buffer
> 2017-10-18 01:20:38.727436 osd.23 osd.23 10.192.1.158:6840/3659 1762 :
> cluster [WRN] slow request 30.807328 seconds old, received at 2017-10-18
> 01:20:07.920032: osd_op(client.12464272.1:56610586 31.3f2f2e26
> 31:6474f4fc:::rbd_data.7b3e22ae8944a.0000000000013673:head
> [set-alloc-hint object_size 4194304 write_size 4194304,write 12288~4096]
> snapc 0=[] ondisk+write e10926) currently sub_op_commit_rec from 30
> 2017-10-18 01:20:38.727438 osd.23 osd.23 10.192.1.158:6840/3659 1763 :
> cluster [WRN] slow request 30.807253 seconds old, received at 2017-10-18
> 01:20:07.920107: osd_op(client.12464272.1:56610588 31.2d232918
> 31:1894c4b4:::rbd_data.7b3e22ae8944a.0000000000013a5b:head
> [set-alloc-hint object_size 4194304 write_size 4194304,write 700416~4096]
> snapc 0=[] ondisk+write e10926) currently sub_op_commit_rec from 28
> 2017-10-18 01:20:38.006142 osd.39 osd.39 10.192.1.159:6808/3323 1501 :
> cluster [WRN] 2 slow requests, 2 included below; oldest blocked for >
> 30.092091 secs
> 2017-10-18 01:20:38.006153 osd.39 osd.39 10.192.1.159:6808/3323 1502 :
> cluster [WRN] slow request 30.092091 seconds old, received at 2017-10-18
> 01:20:07.913962: osd_op(client.12464272.1:56610570 31.e683e9eb
> 31:d797c167:::rbd_data.7b3e22ae8944a.0000000000013828:head
> [set-alloc-hint object_size 4194304 write_size 4194304,write 143360~4096]
> snapc 0=[] ondisk+write e10926) currently op_applied
> 2017-10-18 01:20:38.006159 osd.39 osd.39 10.192.1.159:6808/3323 1503 :
> cluster [WRN] slow request 30.086123 seconds old, received at 2017-10-18
> 01:20:07.919930: osd_op(client.12464272.1:56610587 31.e683e9eb
> 31:d797c167:::rbd_data.7b3e22ae8944a.0000000000013828:head [set-alloc-hint
> object_size 4194304 write_size 4194304,write 3256320~4096] snapc 0=[]
> ondisk+write e10926) currently op_applied
> 2017-10-18 01:20:38.374091 osd.38 osd.38 10.192.1.159:6857/236992 1387 :
> cluster [WRN] 2 slow requests, 2 included below; oldest blocked for >
> 30.449318 secs
> 2017-10-18 01:20:38.374107 osd.38 osd.38 10.192.1.159:6857/236992 1388 :
> cluster [WRN] slow request 30.449318 seconds old, received at 2017-10-18
> 01:20:07.924670: osd_op(client.12464272.1:56610603 31.fe179bed
> 31:b7d9e87f:::rbd_data.7b3e22ae8944a.0000000000013a60:head [set-alloc-hint
> object_size 4194304 write_size 4194304,write 143360~4096] snapc 0=[]
> ondisk+write e10926) currently op_applied
>
>
> How can I determine the cause of the problem? Should I just raise
> osd_op_complaint_time to 40-50 seconds, for example, and forget about it?
> Or is this a real problem for my cluster?
>
> Thanks!
> Best regards, Olga.
_________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com