Re: PGs stuck in unknown state

On 9/20/21 07:51, Mr. Gecko wrote:
Hello,

I'll start by explaining what I have done. I was adding some new storage in an attempt to set up a cache pool according to https://docs.ceph.com/en/latest/dev/cache-pool/ by doing the following.

1. I upgraded all servers in the cluster to Ceph 15.2.14, which put the system into recovery for out-of-sync data.
2. I added 2 SSDs as OSDs to the cluster, which immediately caused Ceph to balance onto the SSDs.
3. I added 2 new CRUSH rules which map to SSD storage vs. HDD storage.

I guess this is where things go wrong. Have you tested the CRUSH rules beforehand, to see whether the right OSDs get mapped, or any at all?
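
For example, something like this (the rule name at the end is just a placeholder for whatever you called the new SSD rule):

    ceph osd df tree                         # CLASS column shows whether each OSD is classed hdd or ssd
    ceph osd crush rule ls                   # list all rules
    ceph osd crush rule dump <new-ssd-rule>  # check which device class / root the rule selects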

I would revert the CRUSH rule change for now to try to get your PGs active+clean.
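
If the pools were already switched over to the new rules, pointing them back at the rule they used before should be enough, e.g. (assuming that was the default replicated_rule; substitute the actual previous rule name):

    ceph osd pool set <pool> crush_rule replicated_rule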

If that works, then try to find out (with crushtool, for example) why the new CRUSH rules do not map the OSDs.
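
A rough sketch of that (the rule id and replica count are examples; use the numeric rule id from "ceph osd crush rule dump" and your pool's size):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt            # decompile to read the rules
    crushtool -i crushmap.bin --test --rule <rule-id> --num-rep 3 --show-mappings
    crushtool -i crushmap.bin --test --rule <rule-id> --num-rep 3 --show-bad-mappings

If --show-bad-mappings prints any lines, the rule cannot find enough OSDs (wrong device class, wrong root, or not enough hosts of that class).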

Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


