Changing the fsid of a ceph cluster


 



I have changed the fsid of a Ceph cluster by redeploying it with ceph-ansible (the change was intentional; the cluster is new and empty).

After the change, I had to restart all the OSDs (with "start ceph-osd id=x" on each node).

Now the cluster seems to work, but I have two issues:

1. The fsid reported by "ceph status" is still the old one, even though every ceph.conf file now contains the new one.
2. I have a warning that I didn't have before: too many PGs per OSD (1944 > max 300). I have 10 pools with 512 PGs each and one with 64, and a total of 8 OSDs on 8 different hosts. Shouldn't the number of PGs per OSD be 648 (5184 / 8)? 1944 is exactly three times that value; does it come from the replication size of 3?
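The arithmetic behind the warning can be checked directly. Here is a minimal sketch in Python, assuming the warning counts PG replicas (every copy of every PG mapped to an OSD) rather than primaries only:

```python
# Sketch of the "too many PGs per OSD" arithmetic from the numbers above.
# Assumption: the health check counts PG *replicas* per OSD, so each PG
# contributes `replication_size` copies spread across the OSDs.
pool_pg_counts = [512] * 10 + [64]  # 10 pools at 512 PGs, one pool at 64
replication_size = 3                # pool size = 3 (assumed from the post)
num_osds = 8

total_pgs = sum(pool_pg_counts)                        # 5184 PGs
pgs_per_osd = total_pgs * replication_size / num_osds  # 1944.0
print(total_pgs, pgs_per_osd)
```

Under that assumption, 1944 = 648 x 3: 648 PGs per OSD counting each PG once, multiplied by the three replicas each PG keeps.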

Thanks if you can help me clarify these two issues.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
