Re: Adding Rack to crushmap - Rebalancing multiple PB of data - advice/experience

Sorry I was not precise enough.

I will create 7 racks; the command I wrote was just there to show which one I would use.
I will distribute 8 hosts into each rack.
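
Concretely, something along these lines (rack and host names below are just placeholders, not the final ones):

    ceph osd crush add-bucket rack1 rack
    ceph osd crush move rack1 root=default
    ceph osd crush move host01 rack=rack1
    ...repeated for all 7 racks and the 56 hosts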

All pools currently use the same default_replicated rule with 3 replicas. I have no EC or RGW pools, but I do have CephFS pools (3 replicas on both data and metadata).
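
If/when I switch the failure domain to rack, I assume the rule change would look roughly like this (rule name is just an example):

    ceph osd crush rule create-replicated replicated_rack default rack
    ceph osd pool set <pool> crush_rule replicated_rack    (per pool)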

The OSD media is HDD across the board, with separate 25G interfaces for the public and cluster networks.
Client workload runs 24/7/365.

Ceph release is 16.2.15 (deployed/managed with ceph-ansible).
Mon DBs are placed on NVMe.

This is the first time I am hearing about upmap-remapped.py, thanks, I will read up on it.
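From a quick look, my understanding of the usual pattern with it is roughly this (please correct me if I have it wrong):

    ceph osd set norebalance
    (apply the CRUSH/rule changes)
    ./upmap-remapped.py | sh       (maps remapped PGs back to their current OSDs)
    ceph osd unset norebalance
    ceph balancer on               (then let the balancer move data gradually)
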
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


