Re: RedHat ceph boot question

On Mon, 27 Jan 2014, Derek Yarnell wrote:
> Hi Sage,
> 
> Our clusters are slightly different, but no, the monitors start just
> fine.  On our test and rgw clusters we run monitors co-located with our
> OSDs, and they start without issue.  My understanding is that at boot
> the hosts detect a disk hot-plug event in udev via
> /lib/udev/rules.d/95-ceph-osd.rules.  Running, for example,
> '/usr/sbin/ceph-disk-activate /dev/vdb1' in our test cluster does the
> right thing:
> 
> # /usr/sbin/ceph-disk-activate /dev/vdb1
> === osd.2 ===
> create-or-move updated item name 'osd.2' weight 0.02 at location
> {host=ceph02,root=default} to crush map
> Starting Ceph osd.2 on ceph02...
> starting osd.2 at :/0 osd_data /var/lib/ceph/osd/ceph-2
> /var/lib/ceph/osd/ceph-2/journal
> 
> 
> Our best guess so far is that this line in 95-ceph-osd.rules is not
> matching the underlying disk that is getting hotplugged.  Is
> ID_PART_ENTRY_TYPE just the partition UUID, or are we not understanding
> the identifier correctly?
> 
>  ENV{ID_PART_ENTRY_TYPE}=="4fbd7e29-9d25-41b8-afd0-062c0ceff05d",

These rules do not work on RHEL6 because it ships an old version of udev 
and/or blkid (I forget the details now) that doesn't natively expose the 
GPT partition type GUIDs.  There is a 95-ceph-osd-alt.rules file that 
should be installed, which kludges around this by shelling out to the 
ceph-disk-udev helper.  That's the helper to test in this case...
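For anyone poking at this: a minimal sketch of the comparison that rule line performs.  The device name and the sample ID_PART_ENTRY_TYPE value below are made up; on a live host you would pull the real value with udevadm (or, on RHEL6, via the ceph-disk-udev helper, since the stock udev/blkid don't expose it natively):

```shell
#!/bin/sh
# On a live host the value would come from something like:
#   udevadm info --query=property --name=/dev/vdb1 | grep ID_PART_ENTRY_TYPE
# Here we hard-code a sample value to illustrate the match.
CEPH_OSD_TYPE_GUID="4fbd7e29-9d25-41b8-afd0-062c0ceff05d"
ID_PART_ENTRY_TYPE="4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D"  # sample value only

# udev string matches are case-sensitive, so normalize before comparing;
# a GUID reported in uppercase would never match the lowercase rule text.
if [ "$(printf '%s' "$ID_PART_ENTRY_TYPE" | tr 'A-Z' 'a-z')" = "$CEPH_OSD_TYPE_GUID" ]; then
    echo "partition type matches the Ceph OSD GUID"
else
    echo "no match -- the 95-ceph-osd.rules line would not fire"
fi
```

If udevadm shows no ID_PART_ENTRY_TYPE property at all, that points at the old-blkid problem rather than a GUID mismatch.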

sage


> 
> Thanks,
> derek
> 
> On 1/27/14, 11:29 AM, Sage Weil wrote:
> > Hi Derek,
> > 
> > Would like to get to the bottom of your problem.  Is it that the monitors 
> > don't start after a reboot?  Is there an error in 
> > /var/log/ceph/ceph-mon.`hostname`.log?
> > 
> > sage
> > 
> > On Mon, 27 Jan 2014, Derek Yarnell wrote:
> > 
> >> Hi,
> >>
> >> Should I take this to mean it is a known issue with udev on RHEL,
> >> then?  For now we will add them to the fstab.
> >>
> >> Thanks,
> >> derek
> >>
> >> On 1/25/14, 9:23 PM, Michael J. Kidd wrote:
> >>> While clearly not optimal for long-term flexibility, I've found that
> >>> adding my OSDs to fstab lets them mount during boot, and the OSDs
> >>> start automatically once they're already mounted.
> >>>
> >>> Hope this helps until a permanent fix is available.
> >>>
> >>> Michael J. Kidd
> >>> Sr. Storage Consultant
> >>> Inktank Professional Services
> >>>
> >>>
> >>> On Fri, Jan 24, 2014 at 9:08 PM, Derek Yarnell <derek@xxxxxxxxxxxxxx>
> >>> wrote:
> >>>
> >>>     So we have a test cluster and two production clusters, all running on
> >>>     RHEL 6.5.  Two are running Emperor and one is running Dumpling.  On
> >>>     all of them the OSDs do not seem to start at boot via the udev rules.
> >>>     The OSDs were created with ceph-deploy and are all GPT.  The OSDs are
> >>>     visible with `ceph-disk list`, and running `/usr/sbin/ceph-disk-activate
> >>>     {device}` mounts and adds them.  Running `partprobe {device}` does not
> >>>     seem to trigger the udev rule at all.
> >>>
> >>>     I had found this issue[1] but we are definitely running code that was
> >>>     released after this ticket was closed.  Has there been anyone else that
> >>>     has problems with udev on RHEL mounting their OSDs?
> >>>
> >>>     [1] - http://tracker.ceph.com/issues/5194
> >>>
> >>>     Thanks,
> >>>     derek
> >>>
> >>>     --
> >>>     Derek T. Yarnell
> >>>     University of Maryland
> >>>     Institute for Advanced Computer Studies
> >>>     _______________________________________________
> >>>     ceph-users mailing list
> >>>     ceph-users@xxxxxxxxxxxxxx
> >>>     http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >>>
> >>>
> >>
> >> -- 
> >> Derek T. Yarnell
> >> University of Maryland
> >> Institute for Advanced Computer Studies
> >>
> >>
> > 
> 
> -- 
> Derek T. Yarnell
> University of Maryland
> Institute for Advanced Computer Studies
> 
> 
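For reference, Michael's fstab workaround amounts to one line per OSD along these lines (device, mount point, and filesystem are examples; match them against your own `ceph-disk list` output):

```
/dev/vdb1  /var/lib/ceph/osd/ceph-2  xfs  noatime  0 0
```

With the data directory already mounted at boot, the init script can find and start the OSD without relying on the udev rules.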



