Test cluster running on VMware Fusion. All 3 nodes are both monitor and OSD, and all run openntpd. Why does ceph -s keep reporting clock skew on mon.ceph2 and mon.ceph3 when date shows all three nodes in sync?
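For context, the ansible commands below assume an inventory group "ceph" containing the three nodes; a minimal sketch (hostnames and IPs are the ones from the ceph -s output, the file path is just the Ansible default):

# /etc/ansible/hosts
[ceph]
ceph1 ansible_host=192.168.113.31
ceph2 ansible_host=192.168.113.32
ceph3 ansible_host=192.168.113.33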
$ ansible ceph1 -a "ceph -s"
ceph1 | SUCCESS | rc=0 >>
cluster d7d2a02c-915f-4725-8d8d-8d42fcd87242
health HEALTH_WARN
clock skew detected on mon.ceph2, mon.ceph3
Monitor clock skew detected
monmap e1: 3 mons at {ceph1=192.168.113.31:6789/0,ceph2=192.168.113.32:6789/0,ceph3=192.168.113.33:6789/0}
election epoch 4, quorum 0,1,2 ceph1,ceph2,ceph3
osdmap e7: 3 osds: 3 up, 3 in
flags sortbitwise
pgmap v16: 64 pgs, 1 pools, 0 bytes data, 0 objects
102328 kB used, 289 GB / 289 GB avail
64 active+clean
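The summary above only names the affected mons; ceph health detail should print the measured offset per monitor against the allowed threshold, e.g.:

$ ansible ceph1 -a "ceph health detail"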
$ ansible ceph -a date
ceph1 | SUCCESS | rc=0 >>
Tue Jun 7 21:36:37 PDT 2016
ceph2 | SUCCESS | rc=0 >>
Tue Jun 7 21:36:37 PDT 2016
ceph3 | SUCCESS | rc=0 >>
Tue Jun 7 21:36:37 PDT 2016
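Two things that might explain this: date only has one-second resolution, while the monitors warn once skew exceeds mon_clock_drift_allowed (0.05 s by default), so clocks can agree to the second and still trip the warning; and the mons only re-check skew periodically, so the warning can linger after the clocks converge. A sketch of things to check or try (the 0.1 value is only an illustration, and ntpctl assumes your openntpd build ships it):

# openntpd's own view of its peers and offset on each node
$ ansible ceph -a "ntpctl -s all"

# relax the monitor skew threshold at runtime if the residual offset is acceptable
$ ansible ceph1 -a "ceph tell mon.* injectargs '--mon_clock_drift_allowed 0.1'"

To make the relaxed threshold persistent, the same option can go under [mon] in ceph.conf (mon clock drift allowed = 0.1).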
$ ansible ceph1 -a "ceph -s"
ceph1 | SUCCESS | rc=0 >>
cluster d7d2a02c-915f-4725-8d8d-8d42fcd87242
health HEALTH_WARN
clock skew detected on mon.ceph2, mon.ceph3
Monitor clock skew detected
monmap e1: 3 mons at {ceph1=192.168.113.31:6789/0,ceph2=192.168.113.32:6789/0,ceph3=192.168.113.33:6789/0}
election epoch 4, quorum 0,1,2 ceph1,ceph2,ceph3
osdmap e7: 3 osds: 3 up, 3 in
flags sortbitwise
pgmap v16: 64 pgs, 1 pools, 0 bytes data, 0 objects
102328 kB used, 289 GB / 289 GB avail
64 active+clean
$ ansible ceph -a date
ceph1 | SUCCESS | rc=0 >>
Tue Jun 7 21:36:37 PDT 2016
ceph2 | SUCCESS | rc=0 >>
Tue Jun 7 21:36:37 PDT 2016
ceph3 | SUCCESS | rc=0 >>
Tue Jun 7 21:36:37 PDT 2016
_______________________________________________ ceph-users mailing list ceph-users@xxxxxxxxxxxxxx http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com