Re: Changes to CRUSH Weight Causing Degraded PGs instead of Remapped

I remember someone reporting the same thing, but I can't find the thread right now. I'll try again tomorrow.

Quoting Wesley Dillingham <wes@xxxxxxxxxxxxxxxxx>:

I have a brand-new 16.2.9 cluster running BlueStore with no client
activity. I am modifying some CRUSH weights to move PGs off of a host
for testing purposes, but the result is that the PGs go into a
degraded+remapped state instead of simply a remapped state. This is a
strange result to me, as in previous releases (Nautilus) the same
change caused only remapped PGs. Are there any known issues around
this? Are others running Pacific seeing similar behavior? Thanks.
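
For reference, the weight changes are per-OSD CRUSH reweights to drain
the host under test; a rough sketch of the idea, with ceph-node-1 as a
placeholder hostname:

    # zero the CRUSH weight of every OSD under one host so its PGs move off
    for id in $(ceph osd ls-tree ceph-node-1); do
        ceph osd crush reweight "osd.${id}" 0
    done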

"ceph osd crush reweight osd.1 0"

^ Causes degraded PGs which then go into recovery. Expect only remapped PGs
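
A quick way to watch the transition on an otherwise idle cluster
(standard ceph CLI; the states are the ones described above):

    ceph pg stat                      # baseline: all PGs active+clean
    ceph osd crush reweight osd.1 0   # drop osd.1 out of placement
    ceph pg stat                      # Pacific: degraded+remapped, then recovery
    ceph -s                           # Nautilus left the same PGs only remapped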

Respectfully,

Wes Dillingham
wes@xxxxxxxxxxxxxxxxx
LinkedIn <http://www.linkedin.com/in/wesleydillingham>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


