Hi Jan,
Thanks for your response.
> How exactly do you know this is the cause? This is usually just an
> effect of something going wrong and part of the error recovery process.
> Preceding this event should be the real error/root cause...
We have been working with LSI/Avago to resolve this. We get a lot of log events like the following:
2015-09-04T14:58:59.169677+12:00 <server_name> ceph-osd: - ceph-osd: 2015-09-04 14:58:59.168444 7fbc5ec71700 0 log [WRN] : slow request 30.894936 seconds old, received at 2015-09-04 14:58:28.272976: osd_op(client.42319583.0:1185218039 rbd_data.1d8a5a92eb141f2.00000000000056a0 [read 3579392~8192] 4.f9f016cb ack+read e66603) v4 currently no flag points reached
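To quantify how often this happens, something like the script below pulls the stall durations out of syslog (a rough sketch; the log path and the exact message format are assumptions based on the line above):

#!/usr/bin/env python
import re

# Matches the "slow request N seconds old" warnings shown above.
SLOW_RE = re.compile(r"slow request ([0-9.]+) seconds old")

durations = []
with open("/var/log/syslog") as f:  # path is an assumption
    for line in f:
        m = SLOW_RE.search(line)
        if m:
            durations.append(float(m.group(1)))

if durations:
    print("count=%d min=%.1fs max=%.1fs avg=%.1fs" % (
        len(durations), min(durations), max(durations),
        sum(durations) / len(durations)))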
These are followed by the task abort I mentioned:
sd 11:0:4:0: attempting task abort! scmd(ffff8804c07d0480)
sd 11:0:4:0: [sdf] CDB:
Write(10): 2a 00 24 6f 01 a8 00 00 08 00
scsi target11:0:4: handle(0x000d), sas_address(0x4433221104000000), phy(4)
scsi target11:0:4: enclosure_logical_id(0x5003048000000000), slot(4)
sd 11:0:4:0: task abort: SUCCESS scmd(ffff8804c07d0480)
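Decoding that WRITE(10) CDB (standard SCSI layout, nothing controller-specific) shows the stuck command is only an 8-block write, i.e. 4 KiB assuming 512-byte logical blocks, so it is not a large transfer that is stalling:

# Decode the WRITE(10) CDB from the task-abort message above.
cdb = [0x2a, 0x00, 0x24, 0x6f, 0x01, 0xa8, 0x00, 0x00, 0x08, 0x00]

opcode = cdb[0]                              # 0x2a = WRITE(10)
lba = (cdb[2] << 24) | (cdb[3] << 16) | (cdb[4] << 8) | cdb[5]
blocks = (cdb[7] << 8) | cdb[8]              # transfer length in blocks

print("opcode=0x%02x lba=%d blocks=%d (%d bytes at 512 B/block)" % (
    opcode, lba, blocks, blocks * 512))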
LSI had us enable debugging on the card and send them a large set of logs and debug data. Their response was:
Please do not send the SYNCHRONIZE CACHE command (35h). That is the one keeping the drive from responding to read/write commands quickly enough.
A SYNCHRONIZE CACHE command instructs the ATA device to flush its cache contents to the medium, so while the disk is in the process of doing that, read/write commands probably take longer to complete.
Based on that debugging info, LSI/Avago believe this to be the root cause of the I/O delay.
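That matches what we can measure by hand: with barriers enabled, each fsync() ends in a cache flush down to the drive, so a loop like the sketch below roughly measures the flush latency LSI is pointing at (the path is an assumption; use a scratch file on an OSD's filesystem):

import os, time

# Time a burst of small writes, each followed by fsync().  With write
# barriers enabled, each fsync() forces a cache flush to the drive.
path = "/var/lib/ceph/osd/ceph-0/fsync-test"  # hypothetical path

fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o600)
try:
    start = time.time()
    n = 100
    for i in range(n):
        os.write(fd, b"x" * 4096)  # 4 KiB, same size as the stuck write
        os.fsync(fd)
    elapsed = time.time() - start
    print("%d fsyncs in %.3fs -> %.2f ms each" % (n, elapsed, elapsed / n * 1000))
finally:
    os.close(fd)
    os.unlink(path)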
> and from what I've seen it is not necessary with fast drives (such as
> S3700).
While I agree that it should not be necessary, since the S3700s are very fast, our current experience shows otherwise.
Just a little more about our setup: we're running Ceph Firefly (0.80.10) on Ubuntu 14.04. We see the same thing on every S3700/S3710 across four hosts. We do not see it on the spinning disks in the same cluster (a different pool, on similar hardware).
If you know of any other reason this may be happening, we would appreciate hearing it. Otherwise we will need to keep investigating the possibility of setting nobarrier.
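For reference, the change we are weighing amounts to remounting the OSD filesystems with barriers off, along these lines (a sketch; it assumes XFS-backed OSDs and a hypothetical mount point, and is only worth considering because the S3700/S3710 write cache is power-loss protected):

import subprocess

# Remount one OSD's XFS filesystem with barriers disabled (run as root).
# The mount point is hypothetical.  With "nobarrier", fsync() no longer
# forces a SYNCHRONIZE CACHE down to the drive, so this is only safe on
# drives whose write cache survives power loss (e.g. S3700/S3710).
subprocess.check_call(
    ["mount", "-o", "remount,nobarrier", "/var/lib/ceph/osd/ceph-0"])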
Regards,
Richard