With the low number of OSDs, you are probably saturating the disks. Check with `iostat -xd 2` to see how utilized your disks are. A lot of SSDs don't handle Ceph's heavy sync writes well, and their performance ends up terrible.
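For example, a rough sketch of what to look at (the fio device path is a placeholder, and the fio write test is destructive, so only point it at an unused device or a scratch file):

iostat -xd 2          # watch the %util and await columns per device
# Approximate Ceph's sync-write pattern on a spare device or file:
fio --name=synctest --filename=/dev/sdX --rw=write --bs=4k \
    --iodepth=1 --direct=1 --sync=1 --runtime=30 --time_based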
If some of your drives are at 100% utilization while others sit lower, you can probably get more performance and greatly reduce the blocked I/O with the WPQ scheduler. In ceph.conf, add this to the [osd] section and restart the OSD processes:
osd op queue = wpq
osd op queue cut off = high
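A sketch of restarting one OSD and verifying the running values afterwards (this assumes a systemd deployment; <id> is a placeholder for the OSD number):

systemctl restart ceph-osd@<id>
ceph daemon osd.<id> config get osd_op_queue
ceph daemon osd.<id> config get osd_op_queue_cut_off

Restart the OSDs one at a time and let the cluster settle in between.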
This has helped our clusters with fairness between OSDs and made backfills much less disruptive.
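To judge whether it helps, watch the blocked I/O before and after the change, for example:

ceph health detail                      # shows REQUEST_SLOW details when present
ceph daemon osd.<id> dump_historic_ops  # slowest recent ops on one OSD (<id> is a placeholder)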
----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Thu, Jun 6, 2019 at 1:43 AM BASSAGET Cédric <cedric.bassaget.ml@xxxxxxxxx> wrote:
Hello,

I see messages related to REQUEST_SLOW a few times per day. Here's my ceph -s:

root@ceph-pa2-1:/etc/ceph# ceph -s
cluster:
id: 72d94815-f057-4127-8914-448dfd25f5bc
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph-pa2-1,ceph-pa2-2,ceph-pa2-3
mgr: ceph-pa2-3(active), standbys: ceph-pa2-1, ceph-pa2-2
osd: 6 osds: 6 up, 6 in
data:
pools: 1 pools, 256 pgs
objects: 408.79k objects, 1.49TiB
usage: 4.44TiB used, 37.5TiB / 41.9TiB avail
pgs: 256 active+clean
io:
client: 8.00KiB/s rd, 17.2MiB/s wr, 1op/s rd, 546op/s wr
Running ceph version 12.2.9 (9e300932ef8a8916fb3fda78c58691a6ab0f4217) luminous (stable)

I've checked:
- all my network stack: OK (2*10G LAG)
- memory usage: OK (256G on each host, about 2% used per OSD)
- CPU usage: OK (Intel(R) Xeon(R) CPU E5-2678 v3 @ 2.50GHz)
- disk status: OK (SAMSUNG AREA7680S5xnNTRI 3P04 => Samsung DC series)

I heard on IRC that it can be related to the Samsung PM / SM series. Is anybody here facing the same problem? What can I do to solve it?

Regards,
Cédric
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com