Re: ceph-deploy: OSD creation hangs with one SSD disk as shared journal

Hi Michael,
You are right: my system is installed on disk sdc, and sda is the journal disk to be shared.
This is the output of partx -v /dev/sda; I didn't see anything unusual:
device /dev/sda: start 0 size 117231408
gpt: 2 slices
# 1:      2048-  2099199 (  2097152 sectors,   1073 MB)
# 2:   2099200-  4196351 (  2097152 sectors,   1073 MB)
dos: 0 slices
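
Side note: if those two ~1 GB GPT slices are leftovers from an earlier prepare attempt, one possible cleanup (a sketch only, assuming /dev/sda is dedicated to journals -- this destroys everything on it) would be to zap the disk before re-running ceph-deploy:

ceph-deploy disk zap ceph0:sda
# or, directly on the node:
sgdisk --zap-all /dev/sda
partprobe /dev/sda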


2013/11/13 Michael <michael@xxxxxxxxxxxxxxxxxx>
Sorry, just spotted you're mounting on sdc. Can you chuck out a partx -v /dev/sda to see if there's anything odd about the data currently on there?

-Michael


On 12/11/2013 18:22, Michael wrote:
As long as there's room on the SSD for the partitioner, it'll just use the conf value for osd journal size to section it up as it adds OSDs (I generally use the "ceph-deploy osd create srv:data:journal" format, e.g. srv-12:/dev/sdb:/dev/sde, when adding disks).
Does it being /dev/sda mean you're putting your journal onto an SSD that's already partitioned and in use by the OS?

-Michael
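
(Generic aside for anyone hitting the same question: to confirm which device the OS actually lives on, something like the following should show it; lsblk may or may not be available on CentOS 6:)

df -h /            # which partition holds the root filesystem
mount | grep sda   # is anything on the journal SSD currently mounted?
lsblk /dev/sda     # if available: partitions and mountpoints on sda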

On 12/11/2013 18:09, Gruher, Joseph R wrote:

I didn’t think you could specify the journal in this manner (just pointing multiple OSDs on the same host all to journal /dev/sda).  Don’t you either need to partition the SSD and point each OSD to a separate partition, or format and mount the SSD so that each OSD uses a unique file on the mount?  I’ve always created a separate partition on the SSD for each journal.
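
For reference, a rough sketch of that approach (not from this thread; the partition numbers and 10G sizes are only examples): pre-create one journal partition per OSD with sgdisk, tag it with the Ceph journal type GUID so the udev rules recognise it, then point each OSD at its own partition:

# repeat --new/--typecode for each OSD, adjusting sizes to your SSD
sgdisk --new=1:0:+10G --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sda
sgdisk --new=2:0:+10G --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sda
ceph-deploy osd create ceph0:sdb:/dev/sda1 ceph0:sdd:/dev/sda2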

 

Preparing cluster ceph disks ceph0:/dev/sdb:/dev/sda ceph0:/dev/sdd:/dev/sda ceph0:/dev/sde:/dev/sda ceph0:/dev/sdf:/dev/sda ceph0:/dev/sdg:/dev/sda ceph0:/dev/sdh:/dev/sda

 

From: ceph-users-bounces@xxxxxxxxxxxxxx [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Tim Zhang
Sent: Tuesday, November 12, 2013 2:20 AM
To: ceph-users@xxxxxxxxxxxxxx
Subject: ceph-deploy: OSD creation hangs with one SSD disk as shared journal

 

Hi guys,

I use ceph-deploy to manage my cluster, but OSD creation fails; the process seems to hang while creating the first OSD. By the way, SELinux is disabled, and my ceph-disk is patched according to this page: http://www.spinics.net/lists/ceph-users/msg03258.html

Can you guys give me some advice?

(1) the output of ceph-deploy is:

Invoked (1.3.1): /usr/bin/ceph-deploy osd create ceph0:sdb:sda ceph0:sdd:sda ceph0:sde:sda ceph0:sdf:sda ceph0:sdg:sda ceph0:sdh:sda ceph1:sdb:sda ceph1:sdd:sda ceph1:sde:sda ceph1:sdf:sda ceph1:sdg:sda ceph1:sdh:sda ceph2:sdb:sda ceph2:sdd:sda ceph2:sde:sda ceph2:sdf:sda ceph2:sdg:sda ceph2:sdh:sda

[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph0:/dev/sdb:/dev/sda ceph0:/dev/sdd:/dev/sda ceph0:/dev/sde:/dev/sda ceph0:/dev/sdf:/dev/sda ceph0:/dev/sdg:/dev/sda ceph0:/dev/sdh:/dev/sda ceph1:/dev/sdb:/dev/sda ceph1:/dev/sdd:/dev/sda ceph1:/dev/sde:/dev/sda ceph1:/dev/sdf:/dev/sda ceph1:/dev/sdg:/dev/sda ceph1:/dev/sdh:/dev/sda ceph2:/dev/sdb:/dev/sda ceph2:/dev/sdd:/dev/sda ceph2:/dev/sde:/dev/sda ceph2:/dev/sdf:/dev/sda ceph2:/dev/sdg:/dev/sda ceph2:/dev/sdh:/dev/sda

[ceph0][DEBUG ] connected to host: ceph0 

[ceph0][DEBUG ] detect platform information from remote host

[ceph0][DEBUG ] detect machine type

[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final

[ceph_deploy.osd][DEBUG ] Deploying osd to ceph0

[ceph0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[ceph0][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add

[ceph_deploy.osd][DEBUG ] Preparing host ceph0 disk /dev/sdb journal /dev/sda activate True

[ceph0][INFO  ] Running command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdb /dev/sda

[ceph0][ERROR ] WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data

[ceph0][ERROR ] Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda (Device or resource busy).  As a result, it may not reflect all of your changes until after reboot.

[ceph0][ERROR ] BLKPG: Device or resource busy

[ceph0][ERROR ] error adding partition 1

[ceph0][DEBUG ] The operation has completed successfully.

[ceph0][DEBUG ] The operation has completed successfully.

[ceph0][DEBUG ] meta-data=/dev/sdb1             isize=2048   agcount=4, agsize=61047597 blks

[ceph0][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=0

[ceph0][DEBUG ] data     =                       bsize=4096   blocks=244190385, imaxpct=25

[ceph0][DEBUG ]          =                       sunit=0      swidth=0 blks

[ceph0][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0

[ceph0][DEBUG ] log      =internal log           bsize=4096   blocks=119233, version=2

[ceph0][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1

[ceph0][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0

[ceph0][DEBUG ] The operation has completed successfully.

[ceph0][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add

[ceph_deploy.osd][DEBUG ] Host ceph0 is now ready for osd use.

[ceph0][DEBUG ] connected to host: ceph0 

[ceph0][DEBUG ] detect platform information from remote host

[ceph0][DEBUG ] detect machine type

[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final

[ceph_deploy.osd][DEBUG ] Preparing host ceph0 disk /dev/sdd journal /dev/sda activate True

[ceph0][INFO  ] Running command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdd /dev/sda

[ceph0][ERROR ] WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data
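
Regarding the "kernel failed to re-read the partition table ... Device or resource busy" and "error adding partition 1" lines above: if nothing else is actually holding /dev/sda open, one thing worth trying (a suggestion only, not a confirmed fix) is forcing a re-read of the partition table and checking that the new journal partitions show up:

partprobe /dev/sda            # ask the kernel to re-read sda's partition table
partx -a /dev/sda             # or register any partitions the kernel hasn't picked up
grep sda /proc/partitions     # confirm the sda partitions are visible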

 

(2) the mount output for that OSD shows:

[root@host ~]# mount -l

/dev/sdc1 on / type ext4 (rw)

proc on /proc type proc (rw)

sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

tmpfs on /dev/shm type tmpfs (rw)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

/dev/sdb1 on /var/lib/ceph/tmp/mnt.6D02EM type xfs (rw,noatime)
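
(If this temporary mount was left behind by an earlier stuck attempt, it may need to be unmounted before retrying -- but only after the hung ceph-disk/ceph-osd processes shown in (5) below have been killed:)

umount /var/lib/ceph/tmp/mnt.6D02EM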

 

(3) my testbed information is:

OS: CentOS 6.4 Final

Ceph: Dumpling 0.67.4

three hosts: ceph0, ceph1, ceph2

each host has 3 data disks sharing one SSD as the journal

 

(4) my ceph config is as follows:

osd journal size = 9500

;osd mkfs type = xfs

;auth supported = none

auth_cluster_required = none

auth_service_required = none

auth_client_required = none

public_network = 172.18.11.0/24

cluster_network = 10.10.11.0/24

osd pool default size = 3

ms nocrc = true

osd op threads = 4

filestore op threads = 0

mon sync fs threshold = 0

osd pool default pg num = 100

osd pool default pgp num = 100
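
A quick back-of-the-envelope check (my arithmetic, based only on the numbers above): partx reports sda as 117231408 sectors, i.e. roughly 60 GB, while the osd create command lists six data disks per host sharing that one SSD, so ceph-disk would try to carve out six 9500 MB journal partitions:

echo $(( 117231408 * 512 / 1000000 ))   # ~60022 MB total on sda
echo $(( 6 * 9500 ))                    # 57000 MB wanted for six journals

That is a fairly tight fit, especially with the two ~1 GB slices already on sda, so the journal size and partition layout may be worth double-checking.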

 

(5) the output of running ps -ef|grep ceph on ceph0:

[root@ceph0 ~]# ps -ef|grep ceph

root     13922     1  0 05:59 ?        00:00:00 /bin/sh /usr/sbin/ceph-disk-udev 1 sdb1 sdb

root     14059 13922  0 05:59 ?        00:00:00 python /usr/sbin/ceph-disk -v activate /dev/sdb1

root     14090     1  0 05:59 ?        00:00:00 /bin/sh /usr/sbin/ceph-disk-udev 1 sda1 sda

root     14107 14090  0 05:59 ?        00:00:00 python /usr/sbin/ceph-disk -v activate-journal /dev/sda1

root     14220 14059  0 05:59 ?        00:00:00 /usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 1 --monmap /var/lib/ceph/tmp/mnt.6D02EM/activate.monmap --osd-data /var/lib/ceph/tmp/mnt.6D02EM --osd-journal /var/lib/ceph/tmp/mnt.6D02EM/journal --osd-uuid 68381525-92ae-4e35-bf7f-aa34e31f2fae --keyring /var/lib/ceph/tmp/mnt.6D02EM/keyring

root     14280     1  0 06:02 ?        00:00:00 /bin/sh /usr/sbin/ceph-disk-udev 1 sdb1 sdb

root     14305 14280  0 06:02 ?        00:00:00 python /usr/sbin/ceph-disk -v activate /dev/sdb1

root     14337     1  0 06:02 ?        00:00:00 /bin/sh /usr/sbin/ceph-disk-udev 1 sda1 sda

root     14344     1  0 06:02 ?        00:00:00 /bin/sh /usr/sbin/ceph-disk-udev 2 sda2 sda

root     14366 14337  0 06:02 ?        00:00:00 python /usr/sbin/ceph-disk -v activate-journal /dev/sda1

root     14371 14344  0 06:02 ?        00:00:00 python /usr/sbin/ceph-disk -v activate-journal /dev/sda2

root     14409     1  0 06:05 ?        00:00:00 /bin/sh /usr/sbin/ceph-disk-udev 1 sdb1 sdb

root     14430 14409  0 06:05 ?        00:00:00 python /usr/sbin/ceph-disk -v activate /dev/sdb1

root     14444     1  0 06:05 ?        00:00:00 /bin/sh /usr/sbin/ceph-disk-udev 1 sda1 sda

root     14457     1  0 06:05 ?        00:00:00 /bin/sh /usr/sbin/ceph-disk-udev 2 sda2 sda

root     14468 14444  0 06:05 ?        00:00:00 python /usr/sbin/ceph-disk -v activate-journal /dev/sda1

root     14483 14457  0 06:05 ?        00:00:00 python /usr/sbin/ceph-disk -v activate-journal /dev/sda2

root     14601 13254  0 06:18 pts/2    00:00:00 grep ceph
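
To see where the hung processes above are actually stuck (for example the ceph-osd --mkfs process, pid 14220 in this listing), a few generic things to try:

strace -p 14220             # which syscall it is blocked in
cat /proc/14220/stack       # kernel-side stack, if the kernel exposes it
ls -l /proc/14220/fd        # which files/devices it holds open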



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
