Re: cephfs performance issue MDSs report slow requests and osd memory usage

On Tue, Sep 24, 2019 at 4:33 AM Thomas <74cmonty@xxxxxxxxx> wrote:
>
> Hi,
>
> I'm experiencing the same issue with these settings in ceph.conf:
>         osd op queue = wpq
>         osd op queue cut off = high
>
> Furthermore, I cannot read any old data in the relevant pool that is
> serving CephFS.
> However, I can write new data and read it back.

If you restarted all the OSDs with this setting, it won't necessarily
prevent blocked IO altogether; it mainly helps avoid the very long
blocked IO and makes sure that IO is eventually completed in a fairer
manner.
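
For reference, this is roughly how I apply and then verify those two
settings on an OSD host (the osd.0 id and admin-socket access are just
examples/assumptions, adjust to your cluster):

    # ceph.conf on the OSD hosts
    [osd]
    osd op queue = wpq
    osd op queue cut off = high

    # after restarting an OSD, confirm the values it is actually running with
    ceph daemon osd.0 config get osd_op_queue
    ceph daemon osd.0 config get osd_op_queue_cut_off

Both options only take effect after the OSD is restarted, which is why
restarting all the OSDs matters.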

It sounds like you may have MDS issues that go deeper than my
understanding. The first thing I'd try is bouncing (restarting) the MDS service.
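
If the cluster uses systemd, bouncing the MDS is typically something
like the following (the <mds-id> is a placeholder for your daemon name):

    # on the node running the active MDS
    systemctl restart ceph-mds@<mds-id>

    # then watch it rejoin
    ceph mds stat
    ceph -s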

> > If I want to add this to my ceph-ansible playbook parameters, in which files should I add it, and what is the best way to do it?
> >
> > Should I add those 3 lines in all.yml or osds.yml?
> >
> > ceph_conf_overrides:
> >   global:
> >     osd_op_queue_cut_off: high
> >
> > Is there another (better?) way to do that?

I can't speak to either of those approaches. I wanted all my config in
a single file, so I put it in my inventory file, but it looks like you
have the right idea.
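
Purely as a sketch (I haven't used ceph-ansible myself, so treat the
file placement as an assumption on my part), the override in
group_vars/all.yml would presumably look something like:

    ceph_conf_overrides:
      global:
        osd_op_queue: wpq
        osd_op_queue_cut_off: high

You'd still need to restart the OSDs afterwards for the new queue
settings to take effect.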

----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


