Re: Working ceph cluster reports large amount of pgs in state unknown/undersized and objects degraded

Hi Tobias,

April 18, 2024 at 8:08 PM, "Tobias Langner" <tlangner+ceph@xxxxxxxxxxxx> wrote:

> We operate a tiny ceph cluster (v16.2.7) across three machines, each
> running two OSDs and one of each mds, mgr, and mon. The cluster serves
> one main erasure-coded (2+1) storage pool and a few other

Without the pool config at hand, I'd assume the EC 2+1 profile is what is putting the PGs inactive, because with EC you need n-2 for redundancy and n-1 for availability.
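As a quick sanity check (assuming the usual default of min_size = k+1 for EC pools, i.e. 3 for a 2+1 profile, which would make PGs go inactive as soon as one host is down), you could compare size and min_size on the pool. The pool name below is a placeholder:

    ceph osd pool get <your-ec-pool> size
    ceph osd pool get <your-ec-pool> min_size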

The output got a bit mangled. Could you please put it in a pastebin?

Could you also post the crush rule and pool settings, to better understand the data distribution? And what do the logs on one of the affected OSDs show?
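For reference, something along these lines should capture most of what I'm after (the EC profile name and OSD id are placeholders, adjust to your setup):

    ceph osd crush rule dump
    ceph osd pool ls detail
    ceph osd erasure-code-profile get <your-profile>
    ceph pg dump_stuck inactive
    journalctl -u ceph-osd@<id> --since "1 hour ago"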

Cheers,
Alwin
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


