Re: [Octopus] OSD overloading

Do you have a custom value for osd_snap_trim_sleep?
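
If you are not sure, something like the following should show what the
cluster is actually running with (a rough sketch on my side: the
per-device-class _hdd/_ssd/_hybrid variants and the example value of 2
are assumptions, so check the defaults of your release):

    # list any snap-trim-related overrides stored in the config database
    ceph config dump | grep snap_trim
    # ask one OSD which value it is effectively using
    ceph config show osd.0 osd_snap_trim_sleep
    # a larger sleep throttles trimming, e.g. (example value only):
    # ceph config set osd osd_snap_trim_sleep_hdd 2

For reference, snap trimming is toggled cluster-wide with
"ceph osd set nosnaptrim" / "ceph osd unset nosnaptrim".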

On Wed, Apr 8, 2020 at 2:03 PM Jack <ceph@xxxxxxxxxxxxxx> wrote:
>
> I set the nosnaptrim flag during the upgrade because I saw high CPU
> usage and thought it was somehow related to the upgrade process.
> However, all my daemons are now running Octopus and the issue is still
> there, so I was wrong.
>
>
> On 4/8/20 1:58 PM, Wido den Hollander wrote:
> >
> >
> > On 4/8/20 1:38 PM, Jack wrote:
> >> Hello,
> >>
> >> I've had an issue since my Nautilus -> Octopus upgrade.
> >>
> >> My cluster has many rbd images (~3k or something)
> >> Each of them has ~30 snapshots
> >> Each day, I create and remove at least one snapshot per image.
> >>
> >> Since Octopus, when I remove the "nosnaptrim" flags, each OSDs uses 100%
> >> of its CPU time
> >
> > Why do you have the 'nosnaptrim' flag set? I'm missing that piece of
> > information.
> >
> >> The whole cluster collapses: OSDs no longer see each other, and most
> >> of them are seen as down.
> >> I do not see any progress being made: it does not appear the problem
> >> will resolve by itself.
> >>
> >> What can I do?
> >>
> >> Best regards,
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


