ceph-deploy 1.5.39 can be installed from the “luminous” repo:
http://docs.ceph.com/docs/master/install/get-packages/
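For Ubuntu 16.04 (xenial), the steps on that page come down to roughly the following (a sketch, assuming the default download.ceph.com mirror):

    wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
    echo "deb https://download.ceph.com/debian-luminous/ xenial main" | sudo tee /etc/apt/sources.list.d/ceph.list
    sudo apt update
    sudo apt install ceph-deploy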
From: Hüseyin Atatür YILDIRIM <hyildirim@xxxxxxxxxxxxxxx>
Sent: Friday, January 5, 2018 9:16:48 AM
To: Sergey Malinin; ceph-users@xxxxxxxxxxxxxx
Subject: RE: "ceph -s" shows no osds

Thanks a lot, Sergey. I looked into upgrading ceph-deploy and found that “pip install” is the most reasonable route; a plain repo install (i.e. sudo apt install ceph-deploy) always installs version 1.5.32. Do you agree with this?
Regards,
Atatür

From: Sergey Malinin [mailto:hell@xxxxxxxxxxx]
Mgr installation was introduced in 1.5.38; you need to upgrade ceph-deploy.
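A minimal sketch of the pip-based upgrade discussed earlier in the thread (assuming pip is available on the admin node):

    sudo pip install --upgrade ceph-deploy
    ceph-deploy --version    # should now report 1.5.38 or newer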
From: Hüseyin Atatür YILDIRIM <hyildirim@xxxxxxxxxxxxxxx>
Hi,
ceph-deploy --version
1.5.32
Thank you,
Atatür

From: Sergey Malinin [mailto:hell@xxxxxxxxxxx]
What is your “ceph-deploy --version”?
From: Hüseyin Atatür YILDIRIM <hyildirim@xxxxxxxxxxxxxxx>
Hello Sergey,
I issued the mgr create command and it fails with
ceph-deploy mgr create mon01
usage: ceph-deploy [-h] [-v | -q] [--version] [--username USERNAME]
                   [--overwrite-conf] [--cluster NAME] [--ceph-conf CEPH_CONF]
                   COMMAND ...
ceph-deploy: error: argument COMMAND: invalid choice: 'mgr' (choose from 'new', 'install', 'rgw', 'mon', 'mds', 'gatherkeys', 'disk', 'osd', 'admin', 'repo', 'config', 'uninstall', 'purge', 'purgedata', 'calamari', 'forgetkeys', 'pkg')
Any ideas?
Thank you..
From: Sergey Malinin [mailto:hell@xxxxxxxxxxx]
What version are you using? Luminous needs mgr daemons running.
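With a recent enough ceph-deploy, deploying a mgr and checking that it came up looks roughly like this (a sketch, using the mon01 host from this thread):

    ceph-deploy mgr create mon01
    ssh mon01 sudo ceph -s    # on luminous the status output should now include a mgr line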
From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Hüseyin Atatür YILDIRIM <hyildirim@xxxxxxxxxxxxxxx>
Hello,
I’m trying to set up a Ceph cluster on Ubuntu 16.04. I’ve set up one monitor/OSD host (hostname mon01) and two OSD hosts (osd01 and osd02). At one stage, I issued
ceph-deploy osd create mon01:sdb1 osd01:sdb1 osd02:sdb1
and it ran successfully. But when I issued the following from the admin host:
ssh mon01 sudo ceph -s
    cluster 9c7303db-56ab-4ddf-9fb8-1882754a4411
     health HEALTH_ERR
            64 pgs are stuck inactive for more than 300 seconds
            64 pgs stuck inactive
            64 pgs stuck unclean
            no osds
     monmap e1: 1 mons at {mon01=192.168.122.158:6789/0}
            election epoch 4, quorum 0 mon01
     osdmap e1: 0 osds: 0 up, 0 in
            flags sortbitwise,require_jewel_osds
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating
There’s no osds in the cluster. Can you please help?
Regards,
Atatur
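A few commands that can help confirm whether the OSDs were actually prepared and activated (a sketch, reusing the hosts and sdb1 partitions from the message above):

    ceph-deploy disk list mon01 osd01 osd02                      # is sdb1 shown as a prepared Ceph data partition on each host?
    ceph-deploy osd activate mon01:sdb1 osd01:sdb1 osd02:sdb1    # re-run activation in case prepare succeeded but activate did not
    ssh mon01 sudo ceph osd tree                                 # should list the OSDs and their up/in state once they register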