Re: ceph mds slow requests

Can these also be set with 'ceph tell'?

No, those options can't be injected; you have to restart the OSDs.
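
For the record, a minimal sketch of applying them, assuming a systemd
deployment (the OSD id is just an example): add the options under [osd]
in /etc/ceph/ceph.conf on each OSD host, then restart the OSDs one at a
time, waiting for the cluster to return to HEALTH_OK in between:

   systemctl restart ceph-osd@0

On Mimic and later you can store them centrally with 'ceph config set
osd ...' instead, but a restart is still needed for them to take effect.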


Quoting Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx>:

Can these also be set with 'ceph tell'?


-----Original Message-----
From: Andrej Filipcic [mailto:andrej.filipcic@xxxxxx]
Sent: Wednesday, 10 June 2020 12:22
To: ceph-users@xxxxxxx
Subject: Re: ceph mds slow requests


Hi,

all our slow request issues were solved with:
[osd]
   osd op queue = wpq
   osd op queue cut off = high

Before the change we even had requests that were several hours old;
since then it rarely gets above 30 s, even under the heaviest loads,
e.g. >100 IOPS per HDD.
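
In case it helps, a quick sketch to verify the running values on an
OSD node (osd.0 is just an example id):

   ceph daemon osd.0 config get osd_op_queue
   ceph daemon osd.0 config get osd_op_queue_cut_off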

Regards,
Andrej

On 6/10/20 12:12 PM, Eugen Block wrote:
Hi,

we see this message almost daily, although in our case it's almost
expected. We run a nightly compile job within a CephFS subtree, and the
OSDs (HDD with RocksDB on SSD) are saturated during those jobs. The
deep-scrubs that also run during the night have a significant impact,
too, and the cluster reports slow requests, but since that happens
outside our working hours we can live with it (for now).
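
(If the scrub impact ever spills into working hours, one option would
be to confine scrubs to a time window; a sketch with example hours in
ceph.conf:

   [osd]
      osd scrub begin hour = 22
      osd scrub end hour = 6

OSDs then only start new scrubs inside that window, unless a scrub is
already overdue.)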

You write that the OSDs are on SSDs; is that true for both the data
and the metadata pool?
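
A quick way to check is the CRUSH rule of each pool; a sketch, with
hypothetical pool names:

   ceph osd pool get cephfs_metadata crush_rule
   ceph osd pool get cephfs_data crush_rule
   ceph osd crush rule dump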

Regards,
Eugen


Quoting locallocal <locallocal@xxxxxxx>:

Hi guys,
we have a Ceph cluster running Luminous 12.2.13, and recently we
encountered a problem. Here is some log information:


2020-06-08 12:33:52.706070 7f4097e2d700  0 log_channel(cluster) log
[WRN] : slow request 30.518930 seconds old, received at 2020-06-08
12:33:22.186924: client_request(client.48978906:941633993 create
#0x100028cab8a/.filename 2020-06-08 12:33:22.197434 caller_uid=0,
caller_gid=0{}) currently submit entry: journal_and_reply ...
2020-06-08 13:12:17.826727 7f4097e2d700  0 log_channel(cluster) log
[WRN] : slow request 2220.991833 seconds old, received at 2020-06-08
12:35:16.764233: client_request(client.42390705:788369155 create
#0x1000224f999/.filename 2020-06-08 12:35:16.774553 caller_uid=0,
caller_gid=0{}) currently submit entry: journal_and_reply


It looks like the MDS can't flush its journal to the OSDs of the
metadata pool, but those OSDs are SSDs and their load is very low. As a
result, clients can't mount and the MDS can't trim its log.
Has anyone encountered this problem? Please help!
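
In case it's useful, a sketch of first diagnostics on the active MDS
host (replace <name> with your MDS id; the mds_log section shows the
journal and trim counters):

   ceph health detail
   ceph daemon mds.<name> dump_ops_in_flight
   ceph daemon mds.<name> perf dump mds_log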



--
_____________________________________________________________
    prof. dr. Andrej Filipcic,   E-mail: Andrej.Filipcic@xxxxxx
    Department of Experimental High Energy Physics - F9
    Jozef Stefan Institute, Jamova 39, P.O. Box 3000
    SI-1001 Ljubljana, Slovenia
    Tel.: +386-1-477-3674    Fax: +386-1-477-3166
-------------------------------------------------------------

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



