Re: Nautilus pg autoscale, data lost?


Sage responded to a thread yesterday about how to change CRUSH device
classes without rebalancing (crushtool reclassify):

https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/675QZ2JXXX4RPRNPK2NL7FB5MVANKUB2/
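
For reference, the workflow from that thread looks roughly like the
sketch below (based on the crushtool reclassify documentation; it
assumes the default root is named "default" and that every device is
an HDD, so adjust the names and check the --compare output before
injecting anything):

  # export the current crush map
  ceph osd getcrushmap -o original
  # tag the whole subtree with the hdd class and rewrite the rules to use it
  crushtool -i original --reclassify \
        --set-subtree-class default hdd \
        --reclassify-root default hdd \
        -o adjusted
  # confirm the adjusted map places PGs (almost) identically
  crushtool -i original --compare adjusted
  # only then inject it
  ceph osd setcrushmap -i adjusted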


Quoting Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx>:

Some time ago on Luminous I also had to change the crush rules on an
all-HDD cluster to the hdd device class (to prepare for adding SSDs and
SSD pools), and PGs started migrating even though everything was
already on HDDs. It looks like this is still not fixed?





-----Original Message-----
From: Raymond Berg Hansen [mailto:raymondbh@xxxxxxxxx]
Sent: Tuesday, 1 October 2019 14:32
To: ceph-users@xxxxxxx
Subject:  Re: Nautilus pg autoscale, data lost?

You are absolutely right, I had made a crush rule for device class hdd
but did not connect it to this problem. When I put the pools back in
the default crush rule, things seem to be starting to fix themselves.
Have I done something wrong with this crush rule?

# rules
rule replicated_rule {
	id 0
	type replicated
	min_size 1
	max_size 10
	step take default
	step chooseleaf firstn 0 type host
	step emit
}
rule replicated-hdd {
	id 1
	type replicated
	min_size 1
	max_size 10
	step take default class hdd
	step chooseleaf firstn 0 type datacenter
	step emit
}
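
Note that the two rules above differ in more than the device class:
replicated_rule picks leaves per host, while replicated-hdd picks them
per datacenter, and restricting a rule to a class also makes CRUSH use
the per-class shadow tree, so moving pools between the two is expected
to remap PGs even on an all-HDD cluster. A rough way to compare the two
rules offline before switching a pool (the pool name "rbd" is only an
example, rule ids 0 and 1 are taken from the dump above, and --num-rep
3 assumes size 3):

  # which rule a pool is currently using
  ceph osd pool get rbd crush_rule
  # dry-run both rules against the current map and compare the placements
  ceph osd getcrushmap -o crushmap
  crushtool -i crushmap --test --rule 0 --num-rep 3 --show-utilization
  crushtool -i crushmap --test --rule 1 --num-rep 3 --show-utilization
  # switching a pool back to the default rule, as described above
  ceph osd pool set rbd crush_rule replicated_rule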


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


