2011/4/28 Ajit K Jena <ajit@xxxxxxxxxx>:
> Hi All,
>
> We have been experimenting with ceph since version 0.24 and
> found one important issue that is not resolved (or maybe
> we don't know how to configure it properly).
>
> I will talk about 0.27 since that is the latest release:
>
>   We have a one mon, one mds, and two osd setup. The data
>   stored goes to both the osds. We introduce a third osd.
>   The ceph status shows "osd: 3 up, 3 in" but no data goes
>   into the third osd. This remains the case even if we
>   power off one of the original osds.
>
> Data reliability through replication is one of the major goals
> of ceph. I am wondering what we might be missing to get this
> feature going. I am attaching the configuration file.
>
> Please help us solve this problem.
>
> Regards.
>
> --ajit

Hi Ajit,

The symptom looks like the crushmap has not been updated after you
introduced the third osd, so no data is ever placed on it. If that is
the case, you may refer to the wiki page
(http://ceph.newdream.net/wiki/OSD_cluster_expansion/contraction#Include_the_new_OSD_in_the_data_placement)
to adjust the crushmap of your cluster.

Cheers,

--
Henry
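
P.S. For reference, the procedure on that wiki page boils down to
something like the sketch below. This is only a rough outline: the
file names, the osd id (osd.2), and the weight are examples, so adjust
them to match the host and weight of your new osd.

    # dump the current crushmap and decompile it to editable text
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # edit crushmap.txt:
    #   - add a "device 2 osd.2" line in the devices section
    #   - add osd.2 (with a weight, e.g. 1.0) to a host bucket,
    #     either a new bucket or an existing one
    #   - make sure the buckets referenced by the rules' "step take"
    #     include the new osd

    # recompile the map and inject it into the cluster
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new

Once the new map is in, the placement groups should start rebalancing
onto osd.2, and you can watch the data move with "ceph -s" or
"ceph pg dump".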