Re: Data replication not working

Hi Henry,

Thanks for pointing that out. It started working. The corrected
crushmap is attached. Maybe it will help someone else.
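
For anyone who runs into the same thing: the root cause was that the
osd ids in our ceph.conf (1, 2, 3) did not match the device ids in the
old crushmap (0, 1, 2). A rough sketch of the mismatch (the ceph.conf
section names below are illustrative, not our exact config):

  # ceph.conf -- the osd daemons are numbered 1, 2, 3
  [osd1]
          host = node1
  [osd2]
          host = node2
  [osd3]
          host = node3

  # old crushmap -- the devices were numbered 0, 1, 2
  device 0 device0
  device 1 device1
  device 2 device2

Once the device numbers in the crushmap were changed to 1, 2 and 3 (as
in the attached map), data started going to the third osd as well.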

We will try out some variations in the crushmap and see how it changes
things.
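
The edit / compile / inject cycle we are using looks roughly like this
(file names are just examples):

  # dump the current map and decompile it to text
  ceph osd getcrushmap -o crushmap
  crushtool -d crushmap -o crushmap.txt

  # edit crushmap.txt, then recompile and load it back
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new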

Thanks a lot !!

--ajit

On Fri, 2011-04-29 at 10:56 +0800, Henry Chang wrote:
> Ahh... After checking your ceph.conf, I think I see the problem. The
> osd ids in your ceph.conf are 1, 2, 3, but in your crushmap they are
> 0, 1, 2. That's why the data never goes to osd3.
> 
> 2011/4/29 Henry Chang <henry.cy.chang@xxxxxxxxx>:
> > Hmm... The crushmap looks fine to me. Could you do a pg dump (with
> > "ceph pg dump -o -") and check the pg membership to see whether the
> > third osd is responsible for some pgs? And, as Greg asked, it would
> > be more helpful if you could provide more information about this
> > problem.
> >
> > Thanks,
> > Henry
> >
> > 2011/4/28 Ajit K Jena <ajit@xxxxxxxxxx>:
> >> Hi Henry,
> >>
> >> Thanks for your quick reply. I believe our crushmap is
> >> updated correctly to take care of osd node failures. I
> >> am attaching the text version of the crushmap with this
> >> mail. Please help us find the mistake, if any.
> >>
> >> Regards.
> >>
> >> --ajit
> >>
> >>
> >> On Thu, 2011-04-28 at 17:10 +0800, Henry Chang wrote:
> >>> 2011/4/28 Ajit K Jena <ajit@xxxxxxxxxx>:
> >>> > Hi All,
> >>> >
> >>> > We have been experimenting with ceph since version 0.24 and
> >>> > have found one important issue that is not resolved (or maybe
> >>> > we don't know how to configure it properly).
> >>> >
> >>> > I will talk about 0.27 since that is the latest release:
> >>> >
> >>> >  We have a one-mon, one-mds, two-osd setup. The data
> >>> >  stored goes to both osds. We introduce a third osd.
> >>> >  The ceph status shows "osd: 3 up, 3 in", but no data goes
> >>> >  into the third osd. This remains the case even if we
> >>> >  power off one of the original osds.
> >>> >
> >>> > Data reliability through replication is one of the major goals
> >>> > of ceph. I am wondering what we might be missing to get this
> >>> > feature going. I am attaching the configuration file.
> >>> >
> >>> > Please help us solve this problem.
> >>> >
> >>> > Regards.
> >>> >
> >>> > --ajit
> >>> >
> >>>
> >>> Hi Ajit,
> >>>
> >>> The symptom looks like the crushmap has not been updated after you
> >>> introduced the third osd, so the data never goes into it. If that is
> >>> the case, you may refer to the wiki page
> >>> (http://ceph.newdream.net/wiki/OSD_cluster_expansion/contraction#Include_the_new_OSD_in_the_data_placement)
> >>> to adjust the crushmap of your cluster.
> >>>
> >>> Cheers,
> >>> --
> >>> Henry
> >>
> >>
> >
> >
> >
> > --
> > Henry Chang
> >
> 
> 
> 

# begin crush map

# devices
device 1 device0
device 2 device1
device 3 device2

# types
type 0 device
type 1 domain
type 2 pool

# buckets
domain root {
	id -1		# do not change unnecessarily
	alg straw
	hash 0	# rjenkins1
	item device0 weight 1.000
	item device1 weight 1.000
	item device2 weight 1.000
}

# rules
rule data {
	ruleset 0
	type replicated
	min_size 2
	max_size 3 
	step take root
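	# firstn 0 = choose as many devices as the pool's replica count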
	step choose firstn 0 type device
	step emit
}
rule metadata {
	ruleset 1
	type replicated
	min_size 1
	max_size 10
	step take root
	step choose firstn 0 type device
	step emit
}
rule casdata {
	ruleset 2
	type replicated
	min_size 1
	max_size 10
	step take root
	step choose firstn 0 type device
	step emit
}
rule rbd {
	ruleset 3
	type replicated
	min_size 1
	max_size 10
	step take root
	step choose firstn 0 type device
	step emit
}

# end crush map
