On Tue, 26 Feb 2013, femi anjorin wrote:
> Hi,
>
> I had a healthy ceph cluster until this morning. These are the sequence of events:
>
> 1. I deleted the existing pools - data, metadata and rbd.
> 2. I recreated the pools (data and metadata) with a different pg num.
> 3. I changed the replication level on the data pool to 3.
> 4. I realised it started scrubbing. I waited for a while until it
> finished and checked the health ... it was ok.
> 5. After a few minutes I wanted to be sure the cluster was ok, so I
> stopped the ceph service and restarted it.
> 6. It started scrubbing again for a while and then reported
> HEALTH_WARN mds b is laggy ... mdsmap laggy or crashed.
> 7. Since I had 2 mds, I tried to test whether the laggy issue was
> peculiar to the node with mds.b, so I stopped the service and restarted
> it with only mds.a on a different node. I realised after scrubbing
> that mds.a also became laggy or crashed: HEALTH_WARN.

If you delete and recreate the data/metadata pools you also need to run
newfs to reset the mdsmap to match.  (It references the pools by numeric
id, not by name.)  Once you run that command and restart the mons you
should be in good shape.

 ceph mds newfs <metadata pool id> <data pool id> --yes-i-really-mean-it

This will blow away any fs contents, but you already did that when you
deleted the old pools.

sage

> I now remember that this is the second time I have experienced an issue
> with the mds becoming laggy or crashed after recreating a new pool.
>
> Questions:
>
> 1. After creating a new data pool and metadata pool with new pg
> numbers, is there any necessary command to issue before using the
> cluster again?
> 2. When the ceph health is not ok, e.g. if the mds is laggy, should
> ceph-fuse have issues?  Issues like difficulty accessing the mount
> point?
>
> Regards,
> Femi.
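(For anyone hitting this later: a minimal sketch of the recovery steps above. The pool ids shown are placeholders, since the numeric ids on your cluster will differ; look them up first rather than copying these values.)

```shell
# List pools with their numeric ids; the mdsmap tracks ids, not names,
# so recreated pools get new ids even if the names are the same.
ceph osd lspools

# Reset the mdsmap to point at the new pools (substitute your real ids).
# WARNING: this destroys any existing fs contents.
ceph mds newfs <metadata pool id> <data pool id> --yes-i-really-mean-it

# Restart the mons, then confirm the mds is no longer laggy/crashed.
ceph health
```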
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com