How to create multiple OSDs per host?

2014-08-15 7:56 GMT+08:00 Bruce McFarland <Bruce.McFarland at taec.toshiba.com>:

>  This is an example of the output from "ceph-deploy osd create [data]
> [journal]".
>
> I've noticed that all of the "ceph-conf" commands use the same parameter
> of "--name=osd." every time ceph-deploy is called. I end up with 30 OSDs --
> 29 in the prepared state and 1 active according to the "ceph-disk list"
> output, and only 1 OSD that has an xfs mount point. I've tried both putting
> all data/journal devices on the same ceph-deploy command line and issuing 1
> ceph-deploy command for each OSD data/journal pair (easier to script).
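>
> The two variants look roughly like this (device names here are
> illustrative, not my exact layout):
>
>     # all data/journal pairs on one command line
>     ceph-deploy osd create ceph0:/dev/sdb:/dev/md0p1 ceph0:/dev/sdc:/dev/md0p2
>
>     # or one invocation per data/journal pair
>     ceph-deploy osd create ceph0:/dev/sdl:/dev/md0p17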
>
>
>
>
>
> + ceph-deploy osd create ceph0:/dev/sdl:/dev/md0p17
>
> [ceph_deploy.conf][DEBUG ] found configuration file at:
> /root/.cephdeploy.conf
>
> [ceph_deploy.cli][INFO  ] Invoked (1.5.10): /usr/bin/ceph-deploy osd
> create ceph0:/dev/sdl:/dev/md0p17
>
> [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks
> ceph0:/dev/sdl:/dev/md0p17
>
> [ceph0][DEBUG ] connected to host: ceph0
>
> [ceph0][DEBUG ] detect platform information from remote host
>
> [ceph0][DEBUG ] detect machine type
>
> [ceph_deploy.osd][INFO  ] Distro info: CentOS 6.5 Final
>
> [ceph_deploy.osd][DEBUG ] Deploying osd to ceph0
>
> [ceph0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
>
> [ceph0][INFO  ] Running command: udevadm trigger --subsystem-match=block
> --action=add
>
> [ceph_deploy.osd][DEBUG ] Preparing host ceph0 disk /dev/sdl journal
> /dev/md0p17 activate True
>
> [ceph0][INFO  ] Running command: ceph-disk -v prepare --fs-type xfs
> --cluster ceph -- /dev/sdl /dev/md0p17
>
> [ceph0][DEBUG ] Information: Moved requested sector from 34 to 2048 in
>
> [ceph0][DEBUG ] order to align on 2048-sector boundaries.
>
> [ceph0][DEBUG ] The operation has completed successfully.
>
> [ceph0][DEBUG ] meta-data=/dev/sdl1              isize=2048   agcount=4,
> agsize=244188597 blks
>
> [ceph0][DEBUG ]          =                       sectsz=512   attr=2,
> projid32bit=0
>
> [ceph0][DEBUG ] data     =                       bsize=4096
> blocks=976754385, imaxpct=5
>
> [ceph0][DEBUG ]          =                       sunit=0      swidth=0 blks
>
> [ceph0][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0
>
> [ceph0][DEBUG ] log      =internal log           bsize=4096
> blocks=476930, version=2
>
> [ceph0][DEBUG ]          =                       sectsz=512   sunit=0
> blks, lazy-count=1
>
> [ceph0][DEBUG ] realtime =none                   extsz=4096   blocks=0,
> rtextents=0
>
> [ceph0][DEBUG ] The operation has completed successfully.
>
> [ceph0][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd
> --cluster=ceph --show-config-value=fsid
>
> [ceph0][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
> --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
>
> [ceph0][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
> --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
>
> [ceph0][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
> --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
>
> [ceph0][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
> --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
>
> [ceph0][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd
> --cluster=ceph --show-config-value=osd_journal_size
>
> [ceph0][WARNIN] DEBUG:ceph-disk:Journal /dev/md0p17 is a partition
>
> [ceph0][WARNIN] WARNING:ceph-disk:OSD will not be hot-swappable if journal
> is not the same device as the osd data
>
> [ceph0][WARNIN] DEBUG:ceph-disk:Creating osd partition on /dev/sdl
>
> [ceph0][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/sgdisk
> --largest-new=1 --change-name=1:ceph data
> --partition-guid=1:a96b4af4-11f4-4257-9476-64a6e4c93c28
> --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/sdl
>
> [ceph0][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sdl
>
> [ceph0][WARNIN] INFO:ceph-disk:Running command: /sbin/udevadm settle
>
> [ceph0][WARNIN] DEBUG:ceph-disk:Creating xfs fs on /dev/sdl1
>
> [ceph0][WARNIN] INFO:ceph-disk:Running command: /sbin/mkfs -t xfs -f -i
> size=2048 -- /dev/sdl1
>
> [ceph0][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdl1 on
> /var/lib/ceph/tmp/mnt.8xAu31 with options noatime
>
> [ceph0][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o
> noatime -- /dev/sdl1 /var/lib/ceph/tmp/mnt.8xAu31
>
> [ceph0][WARNIN] DEBUG:ceph-disk:Preparing osd data dir
> /var/lib/ceph/tmp/mnt.8xAu31
>
> [ceph0][WARNIN] DEBUG:ceph-disk:Creating symlink
> /var/lib/ceph/tmp/mnt.8xAu31/journal -> /dev/md0p17
>
> [ceph0][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.8xAu31
>
> [ceph0][WARNIN] INFO:ceph-disk:Running command: /bin/umount --
> /var/lib/ceph/tmp/mnt.8xAu31
>
> [ceph0][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/sgdisk
> --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdl
>
> [ceph0][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/sdl
>
> [ceph0][WARNIN] INFO:ceph-disk:re-reading known partitions will display
> errors
>
> [ceph0][WARNIN] INFO:ceph-disk:Running command: /sbin/partx -a /dev/sdl
>
> [ceph0][WARNIN] BLKPG: Device or resource busy
>
> [ceph0][WARNIN] error adding partition 1
>
> [ceph0][INFO  ] Running command: udevadm trigger --subsystem-match=block
> --action=add
>
> [ceph0][INFO  ] checking OSD status...
>
> [ceph0][INFO  ] Running command: ceph --cluster=ceph osd stat --format=json
>
> [ceph_deploy.osd][DEBUG ] Host ceph0 is now ready for osd use.
>
>
>
> *From:* Bruce McFarland
> *Sent:* Thursday, August 14, 2014 11:45 AM
> *To:* 'ceph-users at ceph.com'
> *Subject:* How to create multiple OSDs per host?
>
>
>
> I've tried using ceph-deploy, but it wants to assign the same id to each
> OSD and I end up with a bunch of "prepared" ceph-disks and only 1
> "active". If I use the manual "short form" method, the activate step fails
> and there are no xfs mount points on the ceph-disks. If I use the manual
> "long form", it seems like I'm the closest to getting active
> ceph-disks/OSDs, but the monitor always shows the OSDs as "down/in" and
> the ceph-disks don't persist over a boot cycle.
>
>
>
> Is there a document anywhere that anyone knows of that explains a
> step-by-step process for bringing up multiple OSDs per host -- 1 HDD with
> an SSD journal partition per OSD?
>
> Thanks,
>
> Bruce
>
>
Hi Bruce,

I deployed 3 OSDs per host a few days ago and it worked pretty well.
I examined your log, and my guess is you should try:

        ceph-deploy osd prepare ceph0:/dev/sdl:/dev/md0p17
        ceph-deploy osd activate ceph0:/dev/sdl1:/dev/md0p17
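
Since you have many data/journal pairs, a small loop over prepare and
activate should give each disk its own osd id. A rough sketch, with
illustrative device names (adjust the pairs to your actual layout):

        # one prepare + activate per data disk / journal partition
        for spec in sdb:md0p1 sdc:md0p2 sdl:md0p17; do
            data=/dev/${spec%%:*}; journal=/dev/${spec##*:}
            ceph-deploy osd prepare  ceph0:${data}:${journal}
            ceph-deploy osd activate ceph0:${data}1:${journal}
        done

Afterwards, "ceph-disk list" on ceph0 and "ceph osd tree" on a monitor node
should show a distinct osd id per data disk instead of 29 prepared and 1
active.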

Or you could create a partition on /dev/sdl by yourself and

        ceph-deploy osd create ceph0:/dev/sdl1:/dev/md0p17
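
For that manual partitioning step, a minimal sketch (sgdisk is the same
tool ceph-disk invokes in your log; this assumes /dev/sdl may be wiped):

        sgdisk --zap-all -- /dev/sdl          # destroys any existing data on the disk
        sgdisk --largest-new=1 -- /dev/sdl    # one data partition spanning the disk
        partprobe /dev/sdl                    # make the kernel re-read the table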

Hope this works.

Jason