Re: Ceph OSD imbalance and performance

When I suggested this to the senior admin here, I was told it was a bad
idea because it would negatively impact performance.

Is that true? My understanding was that a repair would just take the
information from the other two OSDs in the acting set and rewrite the copy
on the OSD with the errors.
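
For what it's worth, my plan was to look at which replica the scrub
actually flagged before repairing anything. If I understand the tooling
right, something along these lines should show the inconsistent objects
for one of the damaged PGs (the pg id is taken from the health detail
below; I haven't run this yet, so correct me if it's not the right way
to inspect the errors):

    rados list-inconsistent-obj 2.8a --format=json-pretty
    ceph pg deep-scrub 2.8a    # and later, a deep-scrub to confirm the PG comes back clean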

The underlying disks don't appear to have actual catastrophic errors based
on smartctl and other tools.
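
In case it helps anyone reading along, this is roughly how I checked the
disks behind the acting OSDs. The OSD id is just one of those listed in
the pg output below, and /dev/sdX is a placeholder; the real device name
comes from the metadata output and the check has to run on that OSD's
host:

    ceph osd metadata 13 | grep -E 'devices|dev_node'    # find the backing device for osd.13
    smartctl -a /dev/sdX                                 # SMART attributes for that device

Nothing alarming showed up in the SMART attributes so far.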

On Tue, Feb 28, 2023 at 12:21 PM Janne Johansson <icepic.dz@xxxxxxxxx>
wrote:

> Den tis 28 feb. 2023 kl 18:13 skrev Dave Ingram <dave@xxxxxxxxxxxx>:
> > There are also several
> > scrub errors. In short, it's a complete wreck.
> >
> >     health: HEALTH_ERR
> >             3 scrub errors
> >             Possible data damage: 3 pgs inconsistent
>
>
> > [root@ceph-admin davei]# ceph health detail
> > HEALTH_ERR 3 scrub errors; Possible data damage: 3 pgs inconsistent
> > OSD_SCRUB_ERRORS 3 scrub errors
> > PG_DAMAGED Possible data damage: 3 pgs inconsistent
> >     pg 2.8a is active+clean+inconsistent, acting [13,152,127]
> >     pg 2.ce is active+clean+inconsistent, acting [145,13,152]
> >     pg 2.e8 is active+clean+inconsistent, acting [150,162,42]
>
> You can ask the cluster to repair those three,
> "ceph pg repair 2.8a"
> "ceph pg repair 2.ce"
> "ceph pg repair 2.e8"
>
> and they should start fixing themselves.
>
> --
> May the most significant bit of your life be positive.
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
