Re: ceph-disk from jewel has issues on redhat 7

Thanks for the steps; that should be enough to test it out. I hope you got the latest ceph-deploy, either from pip or through GitHub.
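For reference, either of these should get you a current build (the git route is just a sketch; install however you normally would):

pip install --upgrade ceph-deploy

# or from source:
git clone https://github.com/ceph/ceph-deploy.git
cd ceph-deploy && pip install .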

On Tue, Mar 15, 2016 at 12:29 PM, Stephen Lord <Steve.Lord@xxxxxxxxxxx> wrote:
I would have to nuke my cluster right now, and I do not have a spare one.

The procedure, though, is literally this, given a 3-node Red Hat 7.2 cluster with hosts ceph00, ceph01, and ceph02:

ceph-deploy install --testing ceph00 ceph01 ceph02
ceph-deploy new ceph00 ceph01 ceph02

ceph-deploy mon create ceph00 ceph01 ceph02
ceph-deploy gatherkeys ceph00

ceph-deploy osd create ceph00:sdb:/dev/sdi
ceph-deploy osd create ceph00:sdc:/dev/sdi

All devices have their partition tables wiped before this. They are all just SATA devices, no special devices in the way.
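(The wipe itself is nothing exotic; I just zap each device first, something like this, or sgdisk --zap-all on the device directly:

ceph-deploy disk zap ceph00:sdb
ceph-deploy disk zap ceph00:sdc
ceph-deploy disk zap ceph00:sdi
)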

sdi is an SSD and it is being carved up for journals. The first osd create works; the second one gets stuck in a loop in the update_partition call in ceph-disk for the five iterations allowed before it gives up. When I look in /sys/block/sdi the partition for the first OSD is visible, but the one for the second is not. Looking at /proc/partitions, however, shows the correct thing. So I suspect something about partprobe is not kicking udev into doing the right thing when the second partition is added.
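You can see the mismatch by hand while the second create is stuck; roughly this, on ceph00:

partprobe /dev/sdi       # the same reread ceph-disk keeps retrying
udevadm settle           # wait for udev to finish processing events
ls /sys/block/sdi/       # sdi1 is there, sdi2 never appears
cat /proc/partitions     # yet the kernel lists both sdi1 and sdi2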

If I do not use the separate journal device it usually works, but occasionally I see a single retry in that same loop.

There is code in ceph-deploy which uses partprobe or partx depending on which distro it detects; that is how I worked out what to change here.
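The two variants boil down to something like this (flags from memory, so treat it as a sketch):

partprobe /dev/sdi    # generic path: ask the kernel to reread the table
partx -a /dev/sdi     # RHEL/CentOS path: add the new partitions explicitly

Swapping in the partx call is what I experimented with here.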

If I have to tear things down again I will reproduce and post here.

Steve

> On Mar 15, 2016, at 2:12 PM, Vasu Kulkarni <vakulkar@xxxxxxxxxx> wrote:
>
> Do you mind posting the full failure logs somewhere on fpaste.org, along with some OS version details?
> There are some known issues on RHEL. If you use 'osd prepare' and 'osd activate' (specifying just the journal partition here), it might work better.
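> Something along these lines is what I mean (device names are examples, and the journal partition has to exist already):
>
> ceph-deploy osd prepare ceph00:sdb:/dev/sdi1
> ceph-deploy osd activate ceph00:/dev/sdb1:/dev/sdi1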
>
> On Tue, Mar 15, 2016 at 12:05 PM, Stephen Lord <Steve.Lord@xxxxxxxxxxx> wrote:
> Not multipath, if you mean the multipath driver; I am just trying to set up OSDs which use a data disk and a journal SSD. If I run a disk-only OSD and specify just one device to ceph-deploy, it usually works, although it sometimes has to retry. In the case where I am carving an SSD into several partitions for journals, it fails on the second one.
>
> Steve
>


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
