RE: [PATCH]init-ceph: Adding do_cmd_okfail when calling ceph-crush-location

On Wed, 30 Jul 2014, Xue, Chendi wrote:
> Hi, Sage
> 
> Yes, this happened when I use the -a switch.
> 
> If I use do_cmd instead of do_cmd_okfail to call the location hook here, 
> is it being more suitable?

I think that'll work, yeah!
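A minimal sketch of the distinction, for anyone following along. The helper bodies below are hypothetical (the real do_cmd/do_cmd_okfail in init-ceph.in also dispatch the command over ssh when -a selects a remote host; that part is elided here) -- only the error-handling contract is illustrated:

```shell
#!/bin/sh
# Hypothetical sketch of the two helpers' error-handling contract,
# not the actual init-ceph.in code. The real versions also run the
# command over ssh when the -a/remote-host machinery is active.

# do_cmd: run the command, abort the whole init script on failure.
do_cmd() {
    sh -c "$1" || { echo "failed: '$1'" >&2; exit 1; }
}

# do_cmd_okfail: run the command, but on failure just set ERR=1 and
# let the caller decide whether to continue.
do_cmd_okfail() {
    ERR=0
    sh -c "$1" || { ERR=1; echo "failed: '$1'" >&2; }
}

do_cmd_okfail "exit 3"
echo "still running, ERR=$ERR"
```

So with plain do_cmd, a failing location hook stops the script outright instead of silently feeding a bad (or empty) location into crush.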

> I think there are still a bunch of people using "/etc/init.d/ceph -a 
> start" to start ceph daemons, so why leave this small problem in 
> init-ceph while "-a" is still supported?

I think it's fine.  The main issue is that the -a mode only works for 
clusters deployed the 'old' way (not with ceph-deploy or chef or puppet).
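To make the failure mode concrete, a toy illustration (the host dispatch and the hook are stand-ins, not the real ceph-crush-location or ssh plumbing): when -a is in play, invoking the hook directly runs it on the admin node, so a remote OSD gets the admin node's crush location.

```shell
#!/bin/sh
# Toy illustration only. In the real script, -a mode dispatches
# commands over ssh; here an env var stands in for "which host ran it".

run_on() {  # run_on <host> <cmd>: stand-in for do_cmd's ssh dispatch
    host=$1; shift
    # In the real script this would be: ssh $host "$@"
    HOSTNAME_OVERRIDE=$host sh -c "$*"
}

# Stand-in location hook: reports the host it actually ran on.
location_hook='echo "host=${HOSTNAME_OVERRIDE:-admin} root=default"'

# Wrong: hook runs locally even though the osd lives on node2.
wrong=$(sh -c "$location_hook")
# Right: hook runs on the osd's own host.
right=$(run_on node2 "$location_hook")

echo "wrong: $wrong"
echo "right: $right"
```

Wrapping the hook invocation in do_cmd routes it through the same ssh dispatch the rest of the script uses, so it reports the OSD host's location rather than the admin node's.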

Anyway, I'll take a patch with do_cmd (provided you tested it and it works 
for you).  :)

sage


> 
> 
> Best Regards,
> -Chendi
> 
> -----Original Message-----
> From: ceph-devel-owner@xxxxxxxxxxxxxxx [mailto:ceph-devel-owner@xxxxxxxxxxxxxxx] On Behalf Of Sage Weil
> Sent: Wednesday, July 30, 2014 9:03 AM
> To: Xue, Chendi
> Cc: ceph-devel@xxxxxxxxxxxxxxx
> Subject: Re: [PATCH]init-ceph: Adding do_cmd_okfail when calling ceph-crush-location
> 
> Hi,
> 
> On Wed, 30 Jul 2014, Xue, Chendi wrote:
> > From 797dbf4f027200bb22ba89d2f616e254321ec023 Mon Sep 17 00:00:00 2001
> > From: Chendi Xue <chendi.xue@xxxxxxxxx>
> > Date: Tue, 29 Jul 2014 16:45:45 +0800
> > Subject: [PATCH 8/8] Adding do_cmd_okfail when calling ceph-crush-location
> > 
> > The original init-ceph doesn't use do_cmd_okfail when getting 
> > osd_location, which results in OSDs being mapped to the wrong crush 
> > location (verified with "ceph osd tree").
> 
> Is this when you are using the -a switch, or when you are starting daemons on the local host?  The -a stuff isn't well supported any more (and I'd have half a mind to kill support for it entirely if the world weren't going toward systemd anyway).
> 
> Anyway, I'm worried about how the _okfail variant will change the behavior here if the location hook script exits with an error for some reason...
> 
> sage
> 
> 
> > 
> > Signed-off-by: Chendi Xue <chendi.xue@xxxxxxxxx>
> > ---
> > src/init-ceph.in | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/src/init-ceph.in b/src/init-ceph.in
> > index 86eb87f..09c7cbc 100644
> > --- a/src/init-ceph.in
> > +++ b/src/init-ceph.in
> > @@ -327,7 +327,7 @@ for name in $what; do
> >                                if [ "${update_crush:-1}" = "1" -o "${update_crush:-1}" = "true" ]; then
> >                                    # update location in crush
> >                                    get_conf osd_location_hook "$BINDIR/ceph-crush-location" "osd crush location hook"
> > -                                  osd_location=`$osd_location_hook --cluster ceph --id $id --type osd`
> > +                                  osd_location=$(do_cmd_okfail "$osd_location_hook --cluster ceph --id $id --type osd")
> >                                    get_conf osd_weight "" "osd crush initial weight"
> >                                    defaultweight="$(df -P -k $osd_data/. | tail -1 | awk '{ print sprintf("%.2f",$2/1073741824) }')"
> >                                    get_conf osd_keyring "$osd_data/keyring" "keyring"
> > --
> > 1.9.1
> > 
> > 
> > Best Regards,
> > -Chendi
> > 
> > 
> 
> 
