Hi,

The latest version of Ceph is no longer reporting slow ops in the dashboard or the CLI. Is this a bug, or is it expected?

ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
Linux 3.10.0-957.12.1.el7.x86_64 #1 SMP Mon Apr 29 14:59:59 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

The OSD log keeps reporting the slow ops:

2019-05-13 16:48:27.536 7f38111a4700 -1 osd.5 129902 get_health_metrics reporting 136 slow ops, oldest is osd_op(client.15648485.0:469762 39.79 39:9e06324d:::48efb8c3-693c-4fe0-bbe4-fdc16f590a82.3886182.18__multipart_MBS-6736e395-d1ca-43a7-8098-324ef41f3881%2fCBB_BIM-IIS%2fCBB_DiskImage%2fDisk_00000000-0000-0000-0000-000000000000%2fVolume_NTFS_00000000-0000-0000-0000-000000000001$%2f20190512220203%2f92.cbrevision.2~gmBBOM5CIdaevVmz6Stp2ot8UguMF7H.5:head [create,writefull 0~4194304] snapc 0=[] ondisk+write+known_if_redirected e129417)
2019-05-13 16:48:28.510 7f38111a4700 -1 osd.5 129902 get_health_metrics reporting 136 slow ops, oldest is osd_op(client.15648485.0:469762 39.79 39:9e06324d:::48efb8c3-693c-4fe0-bbe4-fdc16f590a82.3886182.18__multipart_MBS-6736e395-d1ca-43a7-8098-324ef41f3881%2fCBB_BIM-IIS%2fCBB_DiskImage%2fDisk_00000000-0000-0000-0000-000000000000%2fVolume_NTFS_00000000-0000-0000-0000-000000000001$%2f20190512220203%2f92.cbrevision.2~gmBBOM5CIdaevVmz6Stp2ot8UguMF7H.5:head [create,writefull 0~4194304] snapc 0=[] ondisk+write+known_if_redirected e129417)
2019-05-13 16:48:29.508 7f38111a4700 -1 osd.5 129902 get_health_metrics reporting 136 slow ops, oldest is osd_op(client.15648485.0:469762 39.79 39:9e06324d:::48efb8c3-693c-4fe0-bbe4-fdc16f590a82.3886182.18__multipart_MBS-6736e395-d1ca-43a7-8098-324ef41f3881%2fCBB_BIM-IIS%2fCBB_DiskImage%2fDisk_00000000-0000-0000-0000-000000000000%2fVolume_NTFS_00000000-0000-0000-0000-000000000001$%2f20190512220203%2f92.cbrevision.2~gmBBOM5CIdaevVmz6Stp2ot8UguMF7H.5:head [create,writefull 0~4194304] snapc 0=[] ondisk+write+known_if_redirected e129417)
2019-05-13 16:48:30.509 7f38111a4700 -1 osd.5 129902 get_health_metrics reporting 136 slow ops, oldest is osd_op(client.15648485.0:469762 39.79 39:9e06324d:::48efb8c3-693c-4fe0-bbe4-fdc16f590a82.3886182.18__multipart_MBS-6736e395-d1ca-43a7-8098-324ef41f3881%2fCBB_BIM-IIS%2fCBB_DiskImage%2fDisk_00000000-0000-0000-0000-000000000000%2fVolume_NTFS_00000000-0000-0000-0000-000000000001$%2f20190512220203%2f92.cbrevision.2~gmBBOM5CIdaevVmz6Stp2ot8UguMF7H.5:head [create,writefull 0~4194304] snapc 0=[] ondisk+write+known_if_redirected e129417)
2019-05-13 16:48:31.489 7f38111a4700 -1 osd.5 129902 get_health_metrics reporting 136 slow ops, oldest is osd_op(client.15648485.0:469762 39.79 39:9e06324d:::48efb8c3-693c-4fe0-bbe4-fdc16f590a82.3886182.18__multipart_MBS-6736e395-d1ca-43a7-8098-324ef41f3881%2fCBB_BIM-IIS%2fCBB_DiskImage%2fDisk_00000000-0000-0000-0000-000000000000%2fVolume_NTFS_00000000-0000-0000-0000-000000000001$%2f20190512220203%2f92.cbrevision.2~gmBBOM5CIdaevVmz6Stp2ot8UguMF7H.5:head [create,writefull 0~4194304] snapc 0=[] ondisk+write+known_if_redirected e129417)
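So the OSD itself clearly still tracks these ops. In case it helps, this is what I plan to query next on the admin socket of the node hosting osd.5 (osd.5 is just the daemon from the log above, and I'm assuming the default admin socket location):

ceph daemon osd.5 dump_ops_in_flight        # ops currently outstanding on this OSD
ceph daemon osd.5 dump_historic_slow_ops    # recently completed ops that exceeded the complaint time

But the cluster-wide view shows nothing about them: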
[root@CEPH001 ~]# ceph -s
  cluster:
    id:     e1ee8086-7cce-43fd-a252-3d677af22428
    health: HEALTH_WARN
            noscrub,nodeep-scrub flag(s) set
            2663 pgs not deep-scrubbed in time
            3245 pgs not scrubbed in time

  services:
    mon: 3 daemons, quorum CEPH001,CEPH002,CEPH003 (age 110m)
    mgr: CEPH001(active, since 83m)
    osd: 120 osds: 120 up (since 65s), 120 in (since 22h)
         flags noscrub,nodeep-scrub
    rgw: 1 daemon active (ceph-rgw03)

  data:
    pools:   17 pools, 9336 pgs
    objects: 112.63M objects, 284 TiB
    usage:   541 TiB used, 144 TiB / 684 TiB avail
    pgs:     9336 active+clean

  io:
    client:   666 MiB/s rd, 81 MiB/s wr, 3.30k op/s rd, 969 op/s wr

[root@CEPH001 ~]# ceph health
HEALTH_WARN noscrub,nodeep-scrub flag(s) set; 2663 pgs not deep-scrubbed in time; 3245 pgs not scrubbed in time
[root@CEPH001 ~]#
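For reference, this is how I understand one can double-check whether the slow-ops health check fires at all and whether the complaint threshold was changed from its default (30 seconds, if I'm not mistaken):

ceph health detail                                   # should list the slow ops warning if the monitors received it
ceph daemon osd.5 config get osd_op_complaint_time   # seconds after which an op counts as slow on this OSD

Has anyone else seen this on 14.2.1, or am I missing something?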