Hi,

Thanks. I set '--debug-ms 0' and the result is now HEALTH_OK, but I get an error when trying to set up client access to CephFS.

I tried setting up another server to act as a client:

- I installed ceph on it.
- I copied the configuration file /etc/ceph/ceph.conf from the cluster servers to the new server.
- I created the mount point: mkdir /mnt/mycephfs
- I copied the key from ceph.keyring and used it in the command below.
- I ran this command:

  mount -t ceph 172.16.0.25:6789:/ /mnt/mycephfs -o name=admin,secret=AQCqnw9RmBPpOBAAeJjkIgKGYvyRGlekTpUPog==

Here is the result I got:

  [root@testclient]# mount -t ceph 172.16.0.25:6789:/ /mnt/mycephfs -o name=admin,secret=AQCqnw9RmBPpOBAAeJjkIgKGYvyRGlekTpUPog==
  FATAL: Module ceph not found.
  mount.ceph: modprobe failed, exit status 1
  mount error: ceph filesystem not supported by the system

Regards,
Femi.

On Mon, Feb 4, 2013 at 3:27 PM, Joao Eduardo Luis <joao.luis@xxxxxxxxxxx> wrote:
> This wasn't obvious due to all the debug output, but here's why
> 'ceph health' wasn't replying with HEALTH_OK:
>
> On 02/04/2013 12:21 PM, femi anjorin wrote:
>>
>> 2013-02-04 12:56:15.818985 7f149bfff700  1 HEALTH_WARN 4987 pgs
>> peering; 4987 pgs stuck inactive; 5109 pgs stuck unclean
>
> Furthermore, in your other email in which you ran 'ceph health detail',
> this appears to have gone away, as it is replying with HEALTH_OK again.
>
> You might want to set '--debug-ms 0' when you run 'ceph', or set it in
> your ceph.conf, leaving it at a higher level only for the daemons (i.e.,
> under [mds], [mon], [osd]...). The resulting output will be clearer and
> easier to understand.
>
> -Joao
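
A minimal sketch of the ceph.conf layout Joao describes, assuming you want the messenger quiet for the 'ceph' CLI but more verbose for the daemons (the debug levels shown are only illustrative):

  [global]
          ; messenger debugging off for clients such as the 'ceph' tool
          debug ms = 0

  [mon]
          ; raise it only under the daemon sections that need it
          debug ms = 1

  [osd]
          debug ms = 1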
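
On the mount failure itself: 'FATAL: Module ceph not found' means the client kernel has no CephFS module, so mount.ceph fails at modprobe before the cluster is ever contacted. A quick check on the client, assuming a stock kernel:

  # Try to load the CephFS kernel module; this fails if the kernel lacks it.
  modprobe ceph
  # Confirm the module is loaded.
  lsmod | grep ceph

If the kernel really lacks the module, the userspace client is an alternative that needs no kernel support (monitor address copied from the attempt above):

  ceph-fuse -m 172.16.0.25:6789 /mnt/mycephfs

Note also that passing 'secret=' on the command line exposes the key in the process list and shell history; mount.ceph also accepts 'secretfile=', e.g. '-o name=admin,secretfile=/etc/ceph/admin.secret' (that file path is just an example).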