Off the top of my head, it is recommended to use 3 mons in production. Also, for your 22 OSDs the number of PGs looks a bit low; you should look at that.

"The performance of the cluster is poor" - this is too vague. What is your current performance, which benchmarks have you tried, what is your data workload, and most importantly, how is your cluster set up: what disks, SSDs, network, RAM, etc.? Please provide more information so that people can help you.

Andrei

----- Original Message -----
From: "Mateusz Skała" <mateusz.skala@xxxxxxxxxxx>
To: ceph-users at lists.ceph.com
Sent: Monday, 25 August, 2014 2:39:16 PM
Subject: Ceph monitor load, low performance

Hello,

We have deployed a Ceph cluster with 4 monitors and 22 OSDs. We are using only RBDs. All VMs on KVM have the monitors specified in the same order. One of the monitors (the first on the list in the VM disk specification - ceph35) has more load than the others, and the performance of the cluster is poor. How can we fix this problem?

Here is the 'ceph -s' output:

    cluster a9d17295-UUID-1cad7724e97f
     health HEALTH_OK
     monmap e4: 4 mons at {ceph15=IP.15:6789/0,ceph25=IP.25:6789/0,ceph30=IP.30:6789/0,ceph35=IP.35:6789/0}, election epoch 5750, quorum 0,1,2,3 ceph15,ceph25,ceph30,ceph35
     osdmap e7376: 22 osds: 22 up, 22 in
      pgmap v3387277: 3072 pgs, 3 pools, 2306 GB data, 587 kobjects
            6997 GB used, 12270 GB / 19267 GB avail
                3071 active+clean
                   1 active+clean+scrubbing
      client io 14849 B/s rd, 2887 kB/s wr, 1044 op/s

Thanks for help,
--
Best Regards
Mateusz
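
As a reference for the PG comment above: the commonly cited rule of thumb is roughly (number of OSDs x 100) / replica count, rounded up to a power of two and split across the pools; it is a guideline, not a hard rule. A quick way to check the current values per pool (the pool name below is a placeholder, not taken from the thread) is:

    # current placement group count and replica size of a pool
    ceph osd pool get <pool-name> pg_num
    ceph osd pool get <pool-name> size

    # if pg_num is raised, pgp_num should normally be raised to match;
    # note that changing these triggers data movement
    ceph osd pool set <pool-name> pg_num <new-pg-num>
    ceph osd pool set <pool-name> pgp_num <new-pg-num>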
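
On the benchmarking question, a minimal RADOS-level test could look like the following (pool name again a placeholder; --no-cleanup keeps the written objects so the read test has data to read, and the last command removes them afterwards):

    rados bench -p <pool-name> 60 write --no-cleanup   # 60-second write test
    rados bench -p <pool-name> 60 seq                  # sequential read test
    rados -p <pool-name> cleanup                        # remove the benchmark objects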
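
For the uneven monitor load, one thing worth checking is which monitor is currently the quorum leader, since the leader typically carries more of the monitor workload than its peers; something like this should show it (field names may differ slightly between releases):

    ceph quorum_status --format json-pretty   # look for quorum_leader_name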