Re: Scrubbing?

Ah, yeah, you hit https://tracker.ceph.com/issues/63389 during the upgrade.

Josh

On Tue, Jan 30, 2024 at 3:17 AM Jan Marek <jmarek@xxxxxx> wrote:
>
> Hello again,
>
> I'm sorry, I forgot to attach the file... :-(
>
> Sincerely
> Jan
>
> On Tue, Jan 30, 2024 at 11:09:44 CET, Jan Marek wrote:
> > Hello Sridhar,
> >
> > On Saturday I finished the upgrade process to 18.2.1.
> >
> > The cluster is now in HEALTH_OK state and performs well.
> >
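> > (For reference, a quick way to sanity-check that the upgrade really
> > finished everywhere is to compare the per-daemon versions and the
> > overall health, e.g.:
> >
> >     ceph versions
> >     ceph -s
> > )
> >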
> > According to my colleagues, latencies are lower and throughput is
> > good.
> >
> > On the OSD nodes there is relatively low I/O activity.
> >
> > I still have the mClock profile set to "high_client_ops".
> >
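> > (For anyone following along, the active profile can be checked or
> > changed at runtime with the usual config commands, e.g.:
> >
> >     ceph config get osd osd_mclock_profile
> >     ceph config set osd osd_mclock_profile high_client_ops
> > )
> >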
> > When I was stuck in the upgrade process, my logs contained many
> > records like the ones in the attached file. Since the upgrade
> > completed, these messages have gone away... Could this be the
> > reason for the poor performance?
> >
> > Sincerely
> > Jan Marek
> >
> > On Thu, Jan 25, 2024 at 02:31:41 CET, Jan Marek wrote:
> > > Hello Sridhar,
> > >
> > > On Thu, Jan 25, 2024 at 09:53:26 CET, Sridhar Seshasayee wrote:
> > > > Hello Jan,
> > > >
> > > > > The meaning of my previous post was that the CEPH cluster didn't
> > > > > fulfill my needs and, although I had set the mClock profile to
> > > > > "high_client_ops" (because I have plenty of time for rebalancing
> > > > > and scrubbing), my clients ran into problems.
> > > >
> > > > As far as the question around mClock is concerned, there are further
> > > > improvements in the works to handle QoS between client ops and
> > > > background scrub ops. This should help address the issue you are
> > > > currently facing. See PR: https://github.com/ceph/ceph/pull/51171
> > > > for more information.
> > > > Also, it would be helpful to know the Ceph version you are currently using.
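> > > >
> > > > (Side note: until that work lands, a blunt but common stopgap when
> > > > scrubs compete with client I/O is to pause them with the cluster
> > > > flags, e.g.:
> > > >
> > > >     ceph osd set noscrub
> > > >     ceph osd set nodeep-scrub
> > > >
> > > > and later "ceph osd unset noscrub" / "ceph osd unset nodeep-scrub"
> > > > once the cluster has settled. This is separate from the mClock QoS
> > > > work in the PR above.)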
> > >
> > > Thanks for your reply.
> > >
> > > I'm just in the process of upgrading from 17.2.6 to 18.2.1 (you can
> > > see my previous posts about being stuck in the upgrade to Reef).
> > >
> > > Maybe this was the cause of my problem...
> > >
> > > Now I've tried to give the cluster a rest to do some "background"
> > > tasks (and it seems that this was correct, because on my hosts there
> > > is around 50-100 MB/s read and roughly 10-50 MB/s write traffic -
> > > about 1/4-1/2 of the previous load).
> > >
> > > On Saturday I will change some network settings and try to start the
> > > upgrade process, maybe with --limit=1, to be "soft" on the cluster
> > > and on our clients...
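> > >
> > > (For the record, cephadm's staggered upgrade lets you throttle this;
> > > a sketch, assuming the v18.2.1 container image, would be something
> > > like:
> > >
> > >     ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.1 --limit 1
> > >
> > > where --limit should cap how many daemons are upgraded in that run,
> > > so the command is re-run to continue.)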
> > >
> > > > -Sridhar
> > >
> > > Sincerely
> > > Jan Marek
> >
>
> --
> Ing. Jan Marek
> University of South Bohemia
> Academic Computer Centre
> Phone: +420389032080
> http://www.gnu.org/philosophy/no-word-attachments.cs.html
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



