Re: How to set a new Crushmap in production


 




I'm not aware of a way of slowing things down other than modifying
osd_max_backfills, osd_backfill_scan_{min,max}, and
osd_recovery_max_active, as mentioned in [1]. Injecting a new CRUSH map
is usually the result of several changes, and I do it this way to avoid
restarting backfills several times when a number of changes need to
happen. I don't think setting noout will do anything for you, because
your OSDs will not be going down with a CRUSH change. I didn't realize
that you could change the CRUSH rule on an existing pool, but it is in
the man page. You learn something new every day.
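
For reference, those recovery throttles can be injected at runtime and an
existing pool can be pointed at another rule with commands along these
lines (a rough sketch; the values, <poolname> and the ruleset number are
only examples to adapt to your cluster):

  # throttle backfill/recovery impact on client I/O (example values)
  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
  ceph tell osd.* injectargs '--osd-backfill-scan-min 4 --osd-backfill-scan-max 16'

  # switch an existing pool to another CRUSH rule
  ceph osd pool set <poolname> crush_ruleset 1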


[1] https://www.mail-archive.com/ceph-users@xxxxxxxxxxxxxx/msg26017.html
----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Wed, Jan 20, 2016 at 7:11 AM, Vincent Godin wrote:
> Hi,
>
> I need to import a new crushmap in production (the old one is the default
> one) to define two datacenters and to isolate SSDs from SATA disks. What is
> the best way to do this without starting a hurricane on the platform?
>
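
(As an aside, a decompiled CRUSH map for that kind of layout could look
roughly like the excerpt below; the bucket names, ids and weights are
invented purely for illustration.)

  datacenter dc1-sata {
          id -10                            # invented id
          alg straw
          hash 0                            # rjenkins1
          item host-sata-01 weight 1.000
          item host-sata-02 weight 1.000
  }

  rule dc1_sata {
          ruleset 1
          type replicated
          min_size 1
          max_size 10
          step take dc1-sata
          step chooseleaf firstn 0 type host
          step emit
  }
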
> Till now, I was just using hosts (SATA OSDs) in one datacenter with the
> default rule, so I created a new rule in the new crushmap to do the same job
> on one datacenter, on a defined SATA chassis. Here is the process I'm going
> to follow, but I really need your advice (a command sketch of these steps
> follows below):
>
> 1 - set the noout flag
> 2 - import the new crushmap
> 3 - change the rule number for the existing pool to the new one
> 4 - unset the noout flag
> 5 - pray ...
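
For what it's worth, the steps above translate roughly into the commands
below (an untested sketch; <poolname>, the file names and the ruleset
number are placeholders):

  ceph osd set noout                               # step 1 (probably unnecessary, see above)
  ceph osd getcrushmap -o crushmap.current         # keep the current map as a fallback
  crushtool -c crushmap.new.txt -o crushmap.new    # compile the edited map
  crushtool -i crushmap.new --test --rule 1 --num-rep 3 --show-bad-mappings   # optional sanity check
  ceph osd setcrushmap -i crushmap.new             # step 2 - inject the new map
  ceph osd pool set <poolname> crush_ruleset 1     # step 3 - move the pool to the new rule
  ceph osd unset noout                             # step 4
  ceph -w                                          # step 5 - watch recovery progress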

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


