I am trying to understand the cause of a problem we started
encountering a few weeks ago. There are 30 or so messages per hour on
the OSD nodes of this type:

ceph-osd.33.log:2017-04-10 13:42:39.935422 7fd7076d8700 0 bad crc in data 2227614508 != exp 2469058201

and

2017-04-10 13:42:39.939284 7fd722c42700 0 -- 10.80.3.25:6826/5752 submit_message osd_op_reply(1826606251 rbd_data.922d95238e1f29.00000000000101bf [set-alloc-hint object_size 16777216 write_size 16777216,write 6328320~12288] v103574'18626765 uv18626765 ondisk = 0) v6 remote, 10.80.3.216:0/1934733503, failed lossy con, dropping message 0x3b55600

On a client, sometimes (though not corresponding to the above):

Apr 10 11:53:15 roc-5r-scd216 kernel: [4906599.023174] libceph: osd96 10.80.3.25:6822 socket error on write

And from time to time, slow requests:

2017-04-10 13:00:04.280686 osd.91 10.80.3.45:6808/5665 231 : cluster [WRN] slow request 30.108325 seconds old, received at 2017-04-10 12:59:34.172283: osd_op(client.11893449.1:324079247 rbd_data.8fcdfb238e1f29.00000000000187e7 [set-alloc-hint object_size 16777216 write_size 16777216,write 10772480~8192] 14.ed0bcdec ondisk+write e103545) currently waiting for subops from 2,104

2017-04-10 13:00:06.280949 osd.91 10.80.3.45:6808/5665 232 : cluster [WRN] 2 slow requests, 1 included below; oldest blocked for > 32.108610 secs

Questions:

1. Is there any way to drill further into the "bad crc" messages?
Sometimes they have nothing before or after them, so how can I
determine where the message came from - another OSD or a client, and
which one?

2. The network seems OK - no errors on the NICs, and regression
testing does not show any issues. I realize this could be disk
response time, but following Christian Balzer's atop recommendation
shows a pretty normal system. What is my best course of
troubleshooting here - dumping historic ops on the OSDs, wiresharking
the links, or anything else? (A sketch of the commands I had in mind
is in the P.S. below.)

3. Christian, if you are looking at this, what would be your red
flags in atop?

Thank you.

--
Alex Gorbachev
Storcium
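
P.S. For question 2, this is roughly what I had in mind - just a
sketch, assuming the default admin socket location; osd.33 and port
6826 come from the log above, and eth0 is a placeholder for our
actual interface:

# on the node hosting osd.33: show the slowest recent ops via the admin socket
ceph daemon osd.33 dump_historic_ops

# temporarily raise messenger debugging, hoping to see which peer the bad crc came from
ceph tell osd.33 injectargs '--debug_ms 1'

# capture traffic on that OSD's port for later inspection in wireshark
tcpdump -i eth0 -w osd33.pcap port 6826

Would that be a sane way to try to correlate the crc errors with a
specific peer, or is there a better approach?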