Re: Clients failing to advance oldest client?

Thank you! The OSD/mon/mgr/MDS servers are on 18.2.1, and the clients are mostly 17.2.6.

-erich

On 3/25/24 11:57 PM, Dhairya Parmar wrote:
I think this bug has already been worked on in https://tracker.ceph.com/issues/63364; could you tell me which version you're on?
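
If it helps narrow it down, "ceph versions" reports the release each daemon is running, and the MDS session list shows the version each client reports (just a sketch; <mds-name> is a placeholder for your active MDS):

    ceph versions
    ceph tell mds.<mds-name> session ls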

--
Dhairya Parmar

Associate Software Engineer, CephFS

IBM, Inc.



On Tue, Mar 26, 2024 at 2:32 AM Erich Weiler <weiler@xxxxxxxxxxxx> wrote:

    Hi Y'all,

    I'm seeing this warning via 'ceph -s' (this is on Reef):

    # ceph -s
        cluster:
          id:     58bde08a-d7ed-11ee-9098-506b4b4da440
          health: HEALTH_WARN
                  3 clients failing to advance oldest client/flush tid
                  1 MDSs report slow requests
                  1 MDSs behind on trimming

        services:
          mon: 5 daemons, quorum pr-md-01,pr-md-02,pr-store-01,pr-store-02,pr-md-03 (age 3d)
          mgr: pr-md-01.jemmdf(active, since 3w), standbys: pr-md-02.emffhz
          mds: 1/1 daemons up, 1 standby
          osd: 46 osds: 46 up (since 3d), 46 in (since 2w)

        data:
          volumes: 1/1 healthy
          pools:   4 pools, 1313 pgs
          objects: 258.13M objects, 454 TiB
          usage:   688 TiB used, 441 TiB / 1.1 PiB avail
          pgs:     1303 active+clean
                   8    active+clean+scrubbing
                   2    active+clean+scrubbing+deep

        io:
          client:   131 MiB/s rd, 111 MiB/s wr, 41 op/s rd, 613 op/s wr

    I googled around and looked at the docs, and it seems like this isn't a
    critical problem, but I couldn't find a clear path to resolution.  Does
    anyone have any advice on what I can do to resolve the health issues
    listed above?

    My CephFS filesystem is incredibly busy, so I have a feeling that has
    some impact here, but I'm not 100% sure.
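
    Is drilling in with something like the following the right direction?
    (Just a sketch; <mds-name> and <client-id> are placeholders.)

    # ceph health detail
    (to expand the warning into the specific client session IDs)

    # ceph tell mds.<mds-name> session ls
    (to check those sessions' caps and client metadata on the MDS)

    # ceph tell mds.<mds-name> client evict id=<client-id>
    (only as a last resort, to drop a session that is truly stuck)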

    Thanks as always for the help!

    cheers,
    erich

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



