Re: [Octopus] OSD overloading


 



What's the CPU busy with while spinning at 100%?

Check "perf top" for a quick overview
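
A quick sketch of that (the pid lookup here is illustrative; pick whichever ceph-osd process is busy, and note that perf top needs root):

```shell
# Attach to the oldest running ceph-osd process and sample its CPU hotspots
perf top -p "$(pgrep -o ceph-osd)"
```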


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Wed, Apr 8, 2020 at 3:09 PM Jack <ceph@xxxxxxxxxxxxxx> wrote:
>
> I do:
> root@backup1:~# ceph config dump | grep snap_trim_sleep
> global        advanced  osd_snap_trim_sleep        60.000000
> global        advanced  osd_snap_trim_sleep_hdd    60.000000
>
> (cluster is fully rusty)
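>
> For reference, values like these would have been set with something along
> these lines (a sketch; the scope and values shown are from the dump above):
>
> ```shell
> # Throttle snapshot trimming: sleep 60s between trim operations, cluster-wide
> ceph config set global osd_snap_trim_sleep 60
> ceph config set global osd_snap_trim_sleep_hdd 60
> ```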
>
>
> On 4/8/20 2:53 PM, Dan van der Ster wrote:
> > Do you have a custom value for osd_snap_trim_sleep ?
> >
> > On Wed, Apr 8, 2020 at 2:03 PM Jack <ceph@xxxxxxxxxxxxxx> wrote:
> >>
> >> I set the nosnaptrim flag during the upgrade because I saw high CPU
> >> usage and thought it was somehow related to the upgrade process.
> >> However, all my daemons are now running Octopus and the issue is still
> >> here, so I was wrong.
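> >>
> >> (For reference, the flag is toggled cluster-wide with:)
> >>
> >> ```shell
> >> # Pause snapshot trimming on all OSDs
> >> ceph osd set nosnaptrim
> >> # Resume snapshot trimming
> >> ceph osd unset nosnaptrim
> >> ```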
> >>
> >>
> >> On 4/8/20 1:58 PM, Wido den Hollander wrote:
> >>>
> >>>
> >>> On 4/8/20 1:38 PM, Jack wrote:
> >>>> Hello,
> >>>>
> >>>> I've an issue since my Nautilus -> Octopus upgrade.
> >>>>
> >>>> My cluster has many rbd images (~3k or something)
> >>>> Each of them has ~30 snapshots
> >>>> Each day, I create and remove at least one snapshot per image
> >>>>
> >>>> Since Octopus, when I remove the "nosnaptrim" flag, each OSD uses 100%
> >>>> of its CPU time
> >>>
> >>> Why do you have the 'nosnaptrim' flag set? I'm missing that piece of
> >>> information.
> >>>
> >>>> The whole cluster collapses: OSDs no longer see each other, and most of
> >>>> them are seen as down...
> >>>> I do not see any progress being made: it does not appear the problem
> >>>> will resolve by itself.
> >>>>
> >>>> What can I do?
> >>>>
> >>>> Best regards,
> >>>> _______________________________________________
> >>>> ceph-users mailing list -- ceph-users@xxxxxxx
> >>>> To unsubscribe send an email to ceph-users-leave@xxxxxxx
> >>>>



