Hello,
Could you include the monitors and the OSDs in your clock-skew
test as well?
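Something like this would show sub-second differences too (a minimal
sketch; the hostnames below are placeholders for your actual mon and
OSD nodes):

  # Placeholder hostnames; replace with your mon/osd machines.
  for host in mon1 mon2 mon3 osd1 osd2 osd3; do
      # %s.%N prints seconds plus nanoseconds, so partial-second
      # skew (which the mons complain about) stays visible.
      printf '%s: ' "$host"; ssh "$host" date +%s.%N
  done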
How did you create the OSDs? ceph-deploy osd create osd1:/dev/sdX
osd2:/dev/sdY osd3:/dev/sdZ ?
A log from one of the OSDs would be great!
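On a stock install the OSD logs should be under /var/log/ceph/
(assuming the default log path; substitute the id of one of your
OSDs):

  # Last 100 lines of osd.0's log file.
  tail -n 100 /var/log/ceph/ceph-osd.0.log
  # Or, if the daemons run under systemd:
  journalctl -u ceph-osd@0 --no-pager -n 100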
Kind regards,
Denes.
On 10/14/2017 07:39 PM, dE wrote:
On 10/14/2017 08:18 PM, David Turner wrote:
What are the ownership permissions on your OSD
folders? Clock skew cares about partial seconds.
It isn't a networking issue because your cluster
isn't stuck peering. I'm not sure if the creating state
happens on disk or in the cluster.
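A quick way to check that ownership, assuming the default data path
/var/lib/ceph/osd (adjust if yours differs):

  # Owner and group of each OSD data directory; on Jewel the
  # daemons run as the ceph user, so ceph:ceph is what you would
  # normally expect to see here.
  ls -ld /var/lib/ceph/osd/ceph-*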
I attached a 1 TB disk to each OSD.
    cluster 8161c90e-dbd2-4491-acf8-74449bef916a
     health HEALTH_ERR
            clock skew detected on mon.1, mon.2
            64 pgs are stuck inactive for more than 300 seconds
            64 pgs stuck inactive
            too few PGs per OSD (21 < min 30)
            Monitor clock skew detected
     monmap e1: 3 mons at {0=10.247.103.139:8567/0,1=10.247.103.140:8567/0,2=10.247.103.141:8567/0}
            election epoch 12, quorum 0,1,2 0,1,2
     osdmap e10: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v38: 64 pgs, 1 pools, 0 bytes data, 0 objects
            33963 MB used, 3037 GB / 3070 GB avail
                  64 creating
I don't seem to have any clock skew:
for i in {139..141}; do ssh $i date +%s; done
1507989554
1507989554
1507989554
The OSD folders are owned ceph:root. I tried ceph:ceph, and also ran ceph-osd as root.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com