Re: Random slow requests without any load


 



Hello,

Just a quick update about this in case somebody else gets the same issue:

The problem was with the firewall. Port ranges and established connections are allowed, but for some reason the tracking of connections seems to get lost, leading to a strange state where one machine refuses data (RSTs are sent back) and the sender never gets the RST packet (even with 'related' packets allowed).
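To confirm that the conntrack entry really disappears, something along these lines can be used (a sketch only, assuming conntrack-tools and iptables are installed; 10.3.5.42 is just one of the OSD hosts from the logs):

    # Watch the tracked entry for an OSD connection; if it vanishes while the
    # TCP socket is still open on both ends, conntrack has lost the state.
    conntrack -L | grep 10.3.5.42

    # Counters of packets that conntrack considers INVALID (untracked mid-stream).
    conntrack -S | grep -i invalid

    # Optionally log what the firewall classifies as INVALID before dropping it.
    iptables -I INPUT -m conntrack --ctstate INVALID -j LOG --log-prefix "ct-invalid: "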

There was a similar post on this list in February ("Ceph and TCP States") where lost connections in conntrack created issues, but the fix suggested there, net.netfilter.nf_conntrack_tcp_be_liberal=1, did not improve this particular case.
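For completeness, applying that sysctl looks like this (the file name under /etc/sysctl.d is just an example); it simply did not help in our case:

    # Relax conntrack's TCP window/sequence tracking.
    sysctl -w net.netfilter.nf_conntrack_tcp_be_liberal=1

    # Persist it across reboots.
    echo 'net.netfilter.nf_conntrack_tcp_be_liberal = 1' > /etc/sysctl.d/99-conntrack.conf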

As a workaround, we installed lighter firewall rules (allowing all packets from machines inside the cluster by default) and that "fixed" the issue :)
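The workaround is roughly the following (a sketch; the 10.3.5.0/24 subnet is an assumption based on the addresses in the logs, adjust to your cluster network):

    # Accept all traffic from cluster machines before any stateful rules,
    # so a lost conntrack entry can no longer blackhole an OSD connection.
    iptables -I INPUT -s 10.3.5.0/24 -j ACCEPT
    iptables -I OUTPUT -d 10.3.5.0/24 -j ACCEPT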

Have a nice day,

On 7/4/19 at 6:14 PM, Maximilien Cuony wrote:
Hello,

I have a very strange situation: in an almost-new Ceph cluster, random requests get blocked, leading to timeouts.

Example error from logs:

> [WRN] Health check failed: 8 slow requests are blocked > 32 sec. Implicated osds 12 (REQUEST_SLOW)

> 7fd8bb0bd700  0 log_channel(cluster) log [WRN] : slow request 30.796124 seconds old, received at 2019-07-04 16:18:54.530388: osd_op(client.2829606.0:103 3.135 3:ac9abb76:::rbd_data.2b00ee6b8b4567.0000000000000000:head [set-alloc-hint object_size 4194304 write_size 4194304,write 0~4096] snapc 0=[] ondisk+write+known_if_redirected e294) currently op_applied
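The same warning, with the implicated OSDs, also shows up with:

    ceph health detail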

This happens totally randomly and I'm not able to reproduce it: I never hit the issue with benchmarks, but I do get it occasionally when I start or stop a VM (it's a Proxmox deployment and Ceph / RBD is used as storage for the VMs) or when I use a VM.

This is an example of a stuck request (from dump_ops_in_flight):

> { "description": "osd_op(client.2829606.0:103 3.135 3:ac9abb76:::rbd_data.2b00ee6b8b4567.0000000000000000:head [set-alloc-hint object_size 4194304 write_size 4194304,write 0~4096] snapc 0=[] ondisk+write+known_if_redirected e294)", "initiated_at": "2019-07-04 16:18:54.530388", "age": 196.315782, "duration": 196.315797, "type_data": { "flag_point": "waiting for sub ops", "client_info": { "client": "client.2829606", "client_addr": "10.3.5.40:0/444048627", "tid": 103 }, "events": [ { "time": "2019-07-04 16:18:54.530388", "event": "initiated" }, { "time": "2019-07-04 16:18:54.530429", "event": "queued_for_pg" }, { "time": "2019-07-04 16:18:54.530437", "event": "reached_pg" }, { "time": "2019-07-04 16:18:54.530455", "event": "started" }, { "time": "2019-07-04 16:18:54.530507", "event": "waiting for subops from 8" }, { "time": "2019-07-04 16:18:54.531020", "event": "op_commit" }, { "time": "2019-07-04 16:18:54.531024", "event": "op_applied" } ] } }

Since it seems to be waiting on OSD 8, I ran dump_ops_in_flight and dump_historic_ops there, but they returned nothing (which is quite strange, no?).
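For reference, I checked through the admin socket on the node hosting that OSD, i.e. roughly:

    # Run on the node where osd.8 lives.
    ceph daemon osd.8 dump_ops_in_flight
    ceph daemon osd.8 dump_historic_ops

Both returned an empty list of ops.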

The cluster has no load in general: there are no I/O errors, no requests on the disks (iostat is at 99+% idle), no CPU usage, no Ethernet usage.

The OSD reporting the slow request, and the OSD being waited on for subops, are both random.

If I restart the target OSD, the request gets unstuck. There is nothing else in the logs / dmesg except this:

> 7fd8bf8f1700  0 -- 10.3.5.41:6809/16241 >> 10.3.5.42:6813/1015314 conn(0x555ddb9db800 :6809 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 l=0).handle_connect_msg accept connect_seq 39 vs existing csq=39 existing_state=STATE_CONNECTING

But it does not appear around the errors, so I think it's just debugging output.
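For reference, the restart is just the regular service restart (assuming systemd-managed OSDs):

    # Restart the OSD the request is waiting on (osd.8 in the example above);
    # the blocked request completes right after.
    systemctl restart ceph-osd@8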

On the network side, I had jumbo frames enabled, but disabling them changed nothing. Just in case: I do have an LACP bond to two switches (mlag/vtl), but I don't see any network issues (heavy pings are totally fine, even over a long time).
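The pings were along these lines (bond0 and the packet size are assumptions for this sketch; 8972 bytes of payload corresponds to a 9000-byte MTU):

    ip link show bond0 | grep mtu            # verify the MTU configured on the bond
    ping -M do -s 8972 -c 1000 10.3.5.42     # jumbo-sized pings, don't-fragment set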

I somewhat suspect the TCP connection of the stuck OSD is in a bad state for some reason, but I'm not sure what to look for or how to debug this.
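So far the only thing I can think of is looking at the socket state on both ends, e.g. (port 6809 taken from the log line above):

    # Check the state and timers of the OSD messenger connection on both peers.
    ss -tnop | grep 6809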

My ceph version is: 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable)

Do you have any ideas or pointers on what the issue could be, or what I could try to debug / check?

Thanks a lot and have a nice day!

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



