Re: ceph-volume does not support upstart

Hello,
I am sorry for the delay.
Thank you for your suggestion.

In fact, it is better to either upgrade the system or keep using ceph-disk.
Thank you, Alfredo Deza & Cary.


> On Jan 8, 2018, at 11:41 PM, Alfredo Deza <adeza@xxxxxxxxxx> wrote:
> 
> ceph-volume relies on systemd; it will not work with upstart. Going
> the fstab route might work, but most of the lvm implementation will want
> to make systemd-related calls like enabling units and placing files.
> 
> For upstart you might want to keep using ceph-disk, unless upgrading
> to a newer OS is an option, in which case ceph-volume would work (as
> long as systemd is available)
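> 
> As a rough sketch of the ceph-disk route on an upstart box (the device
> name below is only an example, adjust it to your disks):
> 
> ceph-disk prepare --bluestore /dev/sdb
> ceph-disk activate /dev/sdb1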
> 
> On Sat, Dec 30, 2017 at 9:11 PM, 赵赵贺东 <zhaohedong@xxxxxxxxx> wrote:
>> Hello Cary,
>> 
>> Thank you for your detailed description; it’s really helpful to me!
>> I will give it a try when I get back to my office!
>> 
>> Thank you for your attention to this matter.
>> 
>> 
>> On Dec 30, 2017, at 3:51 AM, Cary <dynamic.cary@xxxxxxxxx> wrote:
>> 
>> Hello,
>> 
>> I mount my Bluestore OSDs in /etc/fstab:
>> 
>> vi /etc/fstab
>> 
>> tmpfs   /var/lib/ceph/osd/ceph-12  tmpfs   rw,relatime 0 0
>> =====================================================
>> Then mount everything in fstab with:
>> mount -a
>> ======================================================
>> I activate my OSDs this way on startup. You can find the fsid with:
>> 
>> cat /var/lib/ceph/osd/ceph-12/fsid
>> 
>> Then add a file named ceph.start so ceph-volume will be run at startup.
>> 
>> vi /etc/local.d/ceph.start
>> ceph-volume lvm activate 12 827f4a2c-8c1b-427b-bd6c-66d31a0468ac
>> ======================================================
>> Make it executable:
>> chmod 700 /etc/local.d/ceph.start
>> ======================================================
>> cd /etc/local.d/
>> ./ceph.start
>> ======================================================
>> I am a Gentoo user and use OpenRC, so this may not apply to you.
>> ======================================================
>> cd /etc/init.d/
>> ln -s ceph ceph-osd.12
>> /etc/init.d/ceph-osd.12 start
>> rc-update add ceph-osd.12 default
>> 
>> Cary
>> 
>> On Fri, Dec 29, 2017 at 8:47 AM, 赵赵贺东 <zhaohedong@xxxxxxxxx> wrote:
>> 
>> Hello Cary!
>> It’s a really big surprise for me to receive your reply!
>> Sincere thanks to you!
>> I know it’s a fake executable, but it works!
>> 
>> ====================>
>> $ cat /usr/sbin/systemctl
>> #!/bin/bash
>> exit 0
>> <====================
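>> 
>> For the stub to take effect it also has to be executable, e.g.:
>> chmod +x /usr/sbin/systemctl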
>> 
>> I can start my OSD with the following command:
>> /usr/bin/ceph-osd --cluster=ceph -i 12 -f --setuser ceph --setgroup ceph
>> 
>> But there are still problems.
>> 1. Though ceph-osd can start successfully, the prepare and activate logs look
>> like errors occurred.
>> 
>> Prepare log:
>> =======================================>
>> # ceph-volume lvm prepare --bluestore --data vggroup/lv
>> Running command: sudo mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-12
>> Running command: chown -R ceph:ceph /dev/dm-0
>> Running command: sudo ln -s /dev/vggroup/lv /var/lib/ceph/osd/ceph-12/block
>> Running command: sudo ceph --cluster ceph --name client.bootstrap-osd
>> --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o
>> /var/lib/ceph/osd/ceph-12/activate.monmap
>> stderr: got monmap epoch 1
>> Running command: ceph-authtool /var/lib/ceph/osd/ceph-12/keyring
>> --create-keyring --name osd.12 --add-key
>> AQAQ+UVa4z2ANRAAmmuAExQauFinuJuL6A56ww==
>> stdout: creating /var/lib/ceph/osd/ceph-12/keyring
>> stdout: added entity osd.12 auth auth(auid = 18446744073709551615
>> key=AQAQ+UVa4z2ANRAAmmuAExQauFinuJuL6A56ww== with 0 caps)
>> Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-12/keyring
>> Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-12/
>> Running command: sudo ceph-osd --cluster ceph --osd-objectstore bluestore
>> --mkfs -i 12 --monmap /var/lib/ceph/osd/ceph-12/activate.monmap --key
>> **************************************** --osd-data
>> /var/lib/ceph/osd/ceph-12/ --osd-uuid 827f4a2c-8c1b-427b-bd6c-66d31a0468ac
>> --setuser ceph --setgroup ceph
>> stderr: warning: unable to create /var/run/ceph: (13) Permission denied
>> stderr: 2017-12-29 08:13:08.609127 b66f3000 -1 asok(0x850c62a0)
>> AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to
>> bind the UNIX domain socket to '/var/run/ceph/ceph-osd.12.asok': (2) No such
>> file or directory
>> stderr:
>> stderr: 2017-12-29 08:13:08.643410 b66f3000 -1
>> bluestore(/var/lib/ceph/osd/ceph-12//block) _read_bdev_label unable to
>> decode label at offset 66: buffer::malformed_input: void
>> bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past
>> end of struct encoding
>> stderr: 2017-12-29 08:13:08.644055 b66f3000 -1
>> bluestore(/var/lib/ceph/osd/ceph-12//block) _read_bdev_label unable to
>> decode label at offset 66: buffer::malformed_input: void
>> bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past
>> end of struct encoding
>> stderr: 2017-12-29 08:13:08.644722 b66f3000 -1
>> bluestore(/var/lib/ceph/osd/ceph-12//block) _read_bdev_label unable to
>> decode label at offset 66: buffer::malformed_input: void
>> bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&) decode past
>> end of struct encoding
>> stderr: 2017-12-29 08:13:08.646722 b66f3000 -1
>> bluestore(/var/lib/ceph/osd/ceph-12/) _read_fsid unparsable uuid
>> stderr: 2017-12-29 08:14:00.697028 b66f3000 -1 key
>> AQAQ+UVa4z2ANRAAmmuAExQauFinuJuL6A56ww==
>> stderr: 2017-12-29 08:14:01.261659 b66f3000 -1 created object store
>> /var/lib/ceph/osd/ceph-12/ for osd.12 fsid
>> 4e5adad0-784c-41b4-ab72-5f4fae499b3a
>> <=======================================
>> 
>> Activate log:
>> =======================================>
>> # ceph-volume lvm activate --bluestore 12
>> 827f4a2c-8c1b-427b-bd6c-66d31a0468ac
>> Running command: sudo ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev
>> /dev/vggroup/lv --path /var/lib/ceph/osd/ceph-12
>> Running command: sudo ln -snf /dev/vggroup/lv
>> /var/lib/ceph/osd/ceph-12/block
>> Running command: chown -R ceph:ceph /dev/dm-0
>> Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-12
>> Running command: sudo systemctl enable
>> ceph-volume@lvm-12-827f4a2c-8c1b-427b-bd6c-66d31a0468ac
>> Running command: sudo systemctl start ceph-osd@12
>> <=======================================
>> 
>> After the activate operation, the OSD did not start, but I can start it with
>> the following command (before restarting the host):
>> /usr/bin/ceph-osd --cluster=ceph -i 12 -f --setuser ceph --setgroup ceph
>> 
>> 
>> 
>> 2. After a host reboot, everything about ceph-osd is lost.
>>    Because /var/lib/ceph/osd/ceph-12 was mounted on tmpfs before the reboot,
>> everything in it is gone after the reboot (more on this below the df output).
>> #df -h
>> /dev/root        15G  2.4G   12G  18% /
>> devtmpfs       1009M  4.0K 1009M   1% /dev
>> none            4.0K     0  4.0K   0% /sys/fs/cgroup
>> none            202M  156K  202M   1% /run
>> none            5.0M     0  5.0M   0% /run/lock
>> none           1009M     0 1009M   0% /run/shm
>> none            100M     0  100M   0% /run/user
>> tmpfs          1009M   48K 1009M   1% /var/lib/ceph/osd/ceph-12
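>> 
>> As far as I understand, the tmpfs contents are meant to be regenerated on
>> every boot: re-running the activate step rebuilds /var/lib/ceph/osd/ceph-12
>> from the LV (the prime-osd-dir call in the activate log above), which is
>> what Cary's /etc/local.d/ceph.start does. A rough sketch with my OSD id and
>> fsid:
>> 
>> mount -a
>> ceph-volume lvm activate --bluestore 12 827f4a2c-8c1b-427b-bd6c-66d31a0468ac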
>> 
>> 
>> 3. ceph-osd cannot start automatically.
>> I think there is something wrong with the OSD upstart setup; I probably need
>> to add some upstart configuration for the OSD (a rough sketch follows below).
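>> 
>> A very rough, untested sketch of what a custom upstart job for this could
>> look like (the file name and contents are only an illustration, not
>> something shipped by Ceph), assuming the tmpfs line is already in /etc/fstab
>> and reusing the activate and manual start commands from above:
>> 
>> # /etc/init/ceph-osd-12.conf
>> description "prime and run ceph osd.12"
>> start on local-filesystems
>> pre-start exec ceph-volume lvm activate --bluestore 12 827f4a2c-8c1b-427b-bd6c-66d31a0468ac
>> exec /usr/bin/ceph-osd --cluster=ceph -i 12 -f --setuser ceph --setgroup ceph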
>> 
>> 
>> It seems that getting ceph-volume to work on Ubuntu 14.04 is not an easy
>> problem for me, so any suggestions or hints about these problems will be
>> appreciated!
>> 
>> 
>> 
>> 
>> On Dec 29, 2017, at 2:06 PM, Cary <dynamic.cary@xxxxxxxxx> wrote:
>> 
>> 
>> You could add a file named /usr/sbin/systemctl and add:
>> exit 0
>> to it.
>> 
>> Cary
>> 
>> On Dec 28, 2017, at 18:45, 赵赵贺东 <zhaohedong@xxxxxxxxx> wrote:
>> 
>> 
>> Hello ceph-users!
>> 
>> I am a Ceph user from China.
>> Our company deploys Ceph on ARM with Ubuntu 14.04.
>> The Ceph version is Luminous 12.2.2.
>> When I try to activate an OSD with ceph-volume, I get the following error
>> (the OSD prepare stage seems to work normally).
>> It seems that ceph-volume only works under systemd, but Ubuntu 14.04 does
>> not support systemd.
>> How can I deploy an OSD on Ubuntu 14.04 with ceph-volume?
>> Will ceph-volume support upstart in the future?
>> 
>> ===============================================================>
>> # ceph-volume lvm activate --bluestore 12
>> 03fa2757-412d-4892-af8a-f2260294a2dc
>> Running command: sudo ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev
>> /dev/vggroup/lvdata --path /var/lib/ceph/osd/ceph-12
>> Running command: sudo ln -snf /dev/vggroup/lvdata
>> /var/lib/ceph/osd/ceph-12/block
>> Running command: chown -R ceph:ceph /dev/dm-2
>> Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-12
>> Running command: sudo systemctl enable
>> ceph-volume@lvm-12-03fa2757-412d-4892-af8a-f2260294a2dc
>> stderr: sudo: systemctl: command not found
>> -->  RuntimeError: command returned non-zero exit status: 1
>> <================================================================
>> 
>> 
>> Your reply will be appreciated!
>> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



