Re: Is it normal Ceph reports "Degraded data redundancy" in normal use?

If you mark an OSD "out" but not down, i.e. you don't stop the daemon, do
the PGs go remapped, or do they go degraded then as well?
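
For context, a minimal sketch of the two cases on a test cluster (the OSD
id 0 and the systemd unit name are assumptions; adjust for your deployment):

    ceph osd out 0               # daemon keeps serving; PGs should show
                                 # active+remapped+backfilling, not degraded
    ceph pg stat                 # observe the PG states while data moves
    systemctl stop ceph-osd@0    # stopping the daemon removes its copies:
                                 # expect active+undersized+degraded instead
    ceph pg stat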

Respectfully,

*Wes Dillingham*
wes@xxxxxxxxxxxxxxxxx
LinkedIn <http://www.linkedin.com/in/wesleydillingham>


On Thu, Apr 14, 2022 at 5:15 AM Kai Stian Olstad <ceph+list@xxxxxxxxxx>
wrote:

> On 29.03.2022 14:56, Sandor Zeestraten wrote:
> > I was wondering if you ever found out anything more about this issue.
>
> Unfortunately no, so I turned it off.
>
>
> > I am running into similar degradation issues while running rados bench
> > on a new 16.2.6 cluster.
> > In our case it's with a replicated pool, but the degradation problems
> > also go away when we turn off the balancer.
>
> So this goes a long way toward confirming there is something wrong with
> the balancer, since we now see it on two different installations.
>
>
> --
> Kai Stian Olstad
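
For anyone trying to reproduce the balancer interaction discussed above,
here is a minimal sketch (the pool name "testpool" and the 60-second
duration are illustrative, not from the thread):

    ceph balancer status               # confirm the balancer is on and in which mode
    rados bench -p testpool 60 write   # generate 60 seconds of write load
    ceph -s                            # watch for "Degraded data redundancy" during the run
    ceph balancer off                  # disable the balancer, as both reporters did
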
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


