Re: Ceph OSD imbalance and performance

On Tue, 28 Feb 2023 at 18:13, Dave Ingram <dave@xxxxxxxxxxxx> wrote:
> There are also several
> scrub errors. In short, it's a complete wreck.
>
>     health: HEALTH_ERR
>             3 scrub errors
>             Possible data damage: 3 pgs inconsistent


> [root@ceph-admin davei]# ceph health detail
> HEALTH_ERR 3 scrub errors; Possible data damage: 3 pgs inconsistent
> OSD_SCRUB_ERRORS 3 scrub errors
> PG_DAMAGED Possible data damage: 3 pgs inconsistent
>     pg 2.8a is active+clean+inconsistent, acting [13,152,127]
>     pg 2.ce is active+clean+inconsistent, acting [145,13,152]
>     pg 2.e8 is active+clean+inconsistent, acting [150,162,42]

You can ask the cluster to repair those three PGs:
"ceph pg repair 2.8a"
"ceph pg repair 2.ce"
"ceph pg repair 2.e8"

and they should start fixing themselves.
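
If you want to see what the scrubs actually found before repairing,
the inconsistencies can be listed per PG (assuming you run this from
a node with the rados CLI and an admin keyring), e.g.:

rados list-inconsistent-obj 2.8a --format=json-pretty

After issuing the repairs, "ceph -w" or another "ceph health detail"
should show the PGs returning to active+clean once the repair scrubs
finish.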

-- 
May the most significant bit of your life be positive.


