On Fri, Mar 31, 2017 at 9:43 AM, Alexandre Blanca <alexandre.blanca@xxxxxxxx> wrote:
> Hi,
>
> After preparing and activating my OSDs, I created my CephFS:
>
> ceph fs new cephfs1 metadata1 data1
> new fs with metadata pool 11 and data pool 10
>
> ceph osd pool ls
> data1
> metadata1
>
> ceph fs ls
> name: cephfs1, metadata pool: metadata1, data pools: [data1 ]
>
> ceph mds stat
> e65: 1/1/1 up {0=sfd-serv1=up:creating}
>
> ceph status
>      health HEALTH_ERR
>             86 pgs are stuck inactive for more than 300 seconds
>             170 pgs degraded
>             170 pgs stuck degraded
>             86 pgs stuck inactive
>             256 pgs stuck unclean
>             170 pgs stuck undersized
>             170 pgs undersized
>             recovery 9/18 objects degraded (50.000%)

^^^ look at all this

Your CephFS filesystem won't leave the creating state until it can
actually write to the RADOS cluster. Something is badly wrong with
how you've configured your OSDs/pools.

John

>      monmap e1: 1 mons at {sfd-serv1=194.199.24.58:6789/0}
>             election epoch 5, quorum 0 sfd-serv1
>       fsmap e65: 1/1/1 up {0=sfd-serv1=up:creating}
>      osdmap e167: 2 osds: 2 up, 2 in
>             flags sortbitwise,require_jewel_osds
>       pgmap v13653: 256 pgs, 2 pools, 554 bytes data, 9 objects
>             10323 MB used, 372 GB / 382 GB avail
>             9/18 objects degraded (50.000%)
>                  170 active+undersized+degraded
>                   86 creating
>
> How can I get my fsmap from the creating state to the active state?
>
> Thanks,
>
> Alexandre
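For a two-OSD cluster like the one shown above, the usual culprits are a pool
replica size of 3 (the Jewel default, and the require_jewel_osds flag suggests
this is Jewel) that two OSDs can never satisfy, and a CRUSH rule that places
replicas on separate hosts when both OSDs live on the same host. A minimal
diagnostic/repair sketch, using the pool names data1 and metadata1 from the
post and assuming this is a disposable test cluster rather than the poster's
confirmed setup:

  # See how CRUSH lays out the OSDs; if both sit under one host, the
  # default rule (chooseleaf type = host) cannot place a second replica.
  ceph osd tree

  # Check the replication settings on the two pools; a size of 3 can
  # never go clean with only two OSDs.
  ceph osd pool get data1 size
  ceph osd pool get metadata1 size

  # One possible fix for a small test cluster: shrink the replica
  # count to match the hardware.
  ceph osd pool set data1 size 2
  ceph osd pool set data1 min_size 1
  ceph osd pool set metadata1 size 2
  ceph osd pool set metadata1 min_size 1

Once the PGs reach active+clean, the MDS should move from up:creating to
up:active on its own; ceph mds stat will confirm it. If both OSDs really are
on one host, size 2 still won't become clean under the default CRUSH rule,
and you would additionally need replicas chosen per OSD rather than per host
(e.g. osd crush chooseleaf type = 0 in ceph.conf at deployment time, or an
equivalent edit to the CRUSH rule on an existing cluster).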