Re: Setting temporary CRUSH "constraint" for planned cross-datacenter downtime

Hi Joachim,

I'm mainly looking for the general methodology, and whether it's possible without rebalancing everything.

But of course I'd also appreciate tips specific to my deployment; here is the info:

Ceph 18, simple 3-way replication (osd_pool_default_size = 3, with the default CRUSH rule Ceph creates for that).
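
For completeness, I haven't customized the rule; decompiled with crushtool it should look roughly like this (sketch from memory, the chooseleaf type may differ in my map):

    rule replicated_rule {
        id 0
        type replicated
        step take default
        # 'host' is the stock failure domain; a datacenter-level rule
        # would say 'type datacenter' here instead
        step chooseleaf firstn 0 type host
        step emit
    }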

Failure domains from `ceph osd tree`:

root default
    region FSN
        zone FSN1
            datacenter FSN1-DC1
                host machine-1
                    osd.0
                    ... 10 OSDs per datacenter
                ... currently 1 machine per datacenter
            datacenter FSN1-DC2
                host machine-2
                    ...
            ... currently 8 datacenters
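
To double-check which rule and bucket type a pool actually uses, I believe these are the relevant commands (pool name is a placeholder):

    ceph osd pool get <poolname> crush_rule
    ceph osd crush rule dump replicated_rule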

I already tried simply

    ceph osd crush move machine-1 datacenter=FSN1-DC2

to "simulate" that DC1 and DC2 are temporarily the same failure domain (machine-1 is the only machine in DC1 currently), but that immediately causes 33% of objects to be misplaced -- much more movement than I'd hope for and more than would be needed (I'd expect 12.5% would need to be moved given that 1 out of 8 DCs needs to be moved).

Thanks!
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
