Re: Data replication not working

Hi Henry,

Where can I find out about the semantics of the different
fields in the different maps? Is there any formal documentation?

Please suggest.

--ajit

On Fri, 2011-04-29 at 10:56 +0800, Henry Chang wrote:
> Ahh... After checking your ceph.conf, I think I see the problem. The
> osd ids in your ceph.conf are 1,2,3 but in your crushmap they are
> 0,1,2. That's why the data never goes to osd3.
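> 
> In other words, the [osd.N] sections in ceph.conf and the devices
> declared in the crushmap have to use the same numbering. A rough
> sketch of the two matching pieces (host names are placeholders, and
> the exact device syntax may differ slightly between versions):
> 
>     ; ceph.conf
>     [osd.1]
>             host = node-a
>     [osd.2]
>             host = node-b
>     [osd.3]
>             host = node-c
> 
>     # decompiled crushmap, devices section
>     device 1 osd.1
>     device 2 osd.2
>     device 3 osd.3
> 
> Alternatively, you can renumber the osds in ceph.conf to 0,1,2 so
> they match the crushmap you already have.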
> 
> 2011/4/29 Henry Chang <henry.cy.chang@xxxxxxxxx>:
> > Hmm... The crushmap looks fine to me. Could you do a pg dump (with
> > ceph pg dump -o -) and check the pg membership to see if the third
> > osd is responsible for some pgs? And, as Greg asked, it would be
> > more helpful if you could provide more info/description about this
> > problem.
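> >
> > Roughly, something like this (columns abbreviated here; the exact
> > pg dump layout differs between versions, but the up/acting osd
> > sets are what matter):
> >
> >     $ ceph pg dump -o -
> >     pg_stat   ...   up      acting
> >     0.0       ...   [0,1]   [0,1]
> >     0.1       ...   [1,2]   [1,2]
> >     ...
> >
> > If the id of the third osd never appears in any acting set, the
> > crushmap is not placing any data on it.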
> >
> > Thanks,
> > Henry
> >
> > 2011/4/28 Ajit K Jena <ajit@xxxxxxxxxx>:
> >> Hi Henry,
> >>
> >> Thanks for your quick reply. I believe our crushmap is
> >> updated correctly to take care of osd node failures. I
> >> am attaching the text version of the crushmap with this
> >> mail. Please help us find the mistake, if any.
> >>
> >> Regards.
> >>
> >> --ajit
> >>
> >>
> >> On Thu, 2011-04-28 at 17:10 +0800, Henry Chang wrote:
> >>> 2011/4/28 Ajit K Jena <ajit@xxxxxxxxxx>:
> >>> > Hi All,
> >>> >
> >>> > We have been experimenting with ceph since version 0.24 and
> >>> > found one important issue that is not resolved (or maybe we
> >>> > don't know how to configure it properly).
> >>> >
> >>> > I will talk about 0.27 since that is the latest release:
> >>> >
> >>> >  We have a setup with one mon, one mds, and two osds. The
> >>> >  data stored goes to both osds. We introduce a third osd.
> >>> >  The ceph status shows "osd: 3 up, 3 in", but no data goes
> >>> >  into the third osd. This remains the case even if we
> >>> >  power off one of the original osds.
> >>> >
> >>> > Data reliability through replication is one of the major goals
> >>> > of ceph. I am wondering what we might be missing to get this
> >>> > feature going. I am attaching the configuration file.
> >>> >
> >>> > Please help us solve this problem.
> >>> >
> >>> > Regards.
> >>> >
> >>> > --ajit
> >>> >
> >>>
> >>> Hi Ajit,
> >>>
> >>> The symptom looks like the crushmap has not been updated after you
> >>> introduced the third osd, so the data never goes into it. If that is
> >>> the case, you may refer to the wiki page
> >>> (http://ceph.newdream.net/wiki/OSD_cluster_expansion/contraction#Include_the_new_OSD_in_the_data_placement)
> >>> to adjust the crushmap of your cluster.
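> >>>
> >>> Roughly, the procedure there boils down to something like this
> >>> (file names are arbitrary):
> >>>
> >>>     # fetch and decompile the current crushmap
> >>>     ceph osd getcrushmap -o /tmp/crush
> >>>     crushtool -d /tmp/crush -o /tmp/crush.txt
> >>>
> >>>     # edit /tmp/crush.txt: add the new osd as a device and give
> >>>     # it a weight in the appropriate host/rack buckets
> >>>
> >>>     # recompile and inject the new map into the cluster
> >>>     crushtool -c /tmp/crush.txt -o /tmp/crush.new
> >>>     ceph osd setcrushmap -i /tmp/crush.new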
> >>>
> >>> Cheers,
> >>> --
> >>> Henry
> >>
> >>
> >
> >
> >
> > --
> > Henry Chang
> >
> 
> 
> 

