Re: Ceph-Deploy error on 15/71 stage

Hi Jones,

> Just to make things clear: are you telling me that it is completely
> impossible to have a ceph "volume" on non-dedicated devices, sharing space
> with, for instance, the nodes' swap, boot or main partition?
>
> And that the only possible way to have a functioning ceph distributed
> filesystem would be to have in each node at least one disk dedicated to
> the operating system and another, independent disk dedicated to the ceph
> filesystem?

I don't think it's completely impossible, but it would require code changes in SES and DeepSea and that seems quite challenging.

But if you don't have to stick with SES/DeepSea and instead build your cluster manually, you could create a logical volume on your spare partition and deploy OSDs with ceph-volume lvm.
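
If the spare partition is not already part of an LVM volume group, you would first have to set one up. A minimal sketch, assuming /dev/sda4 is the unused partition (adjust to your actual device) and "vg0" is the volume group name used below:

---cut here---
# initialize the spare partition as an LVM physical volume
ceph-2:~ # pvcreate /dev/sda4

# create volume group "vg0" on that physical volume
ceph-2:~ # vgcreate vg0 /dev/sda4
---cut here---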

The OSD deployment itself could look like this:

---cut here---

# create logical volume "osd4" on volume group "vg0"
ceph-2:~ # lvcreate -n osd4 -L 1G vg0
  Logical volume "osd4" created.


# prepare lvm for bluestore
ceph-2:~ # ceph-volume lvm prepare --bluestore --data vg0/osd4
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 3b9eaa0e-9a4a-49ec-9042-34ad19a59592
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-4
--> Absolute path not found for executable: restorecon
--> Ensure $PATH environment variable contains common executable locations
Running command: /bin/chown -h ceph:ceph /dev/vg0/osd4
Running command: /bin/chown -R ceph:ceph /dev/dm-4
Running command: /bin/ln -s /dev/vg0/osd4 /var/lib/ceph/osd/ceph-4/block
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-4/activate.monmap
 stderr: got monmap epoch 2
Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-4/keyring --create-keyring --name osd.4 --add-key AQD3j49bDzsFIBAAsXQjhbwqFQwt/Vqq9VOnsw==
 stdout: creating /var/lib/ceph/osd/ceph-4/keyring
added entity osd.4 auth auth(auid = 18446744073709551615 key=AQD3j49bDzsFIBAAsXQjhbwqFQwt/Vqq9VOnsw== with 0 caps)
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/keyring
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/
Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 4 --monmap /var/lib/ceph/osd/ceph-4/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-4/ --osd-uuid 3b9eaa0e-9a4a-49ec-9042-34ad19a59592 --setuser ceph --setgroup ceph
--> ceph-volume lvm prepare successful for: vg0/osd4


# activate lvm OSD
ceph-2:~ # ceph-volume lvm activate 4 3b9eaa0e-9a4a-49ec-9042-34ad19a59592
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/vg0/osd4 --path /var/lib/ceph/osd/ceph-4 --no-mon-config
Running command: /bin/ln -snf /dev/vg0/osd4 /var/lib/ceph/osd/ceph-4/block
Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-4/block
Running command: /bin/chown -R ceph:ceph /dev/dm-4
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Running command: /bin/systemctl enable ceph-volume@lvm-4-3b9eaa0e-9a4a-49ec-9042-34ad19a59592
 stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-4-3b9eaa0e-9a4a-49ec-9042-34ad19a59592.service → /usr/lib/systemd/system/ceph-volume@.service.
Running command: /bin/systemctl enable --runtime ceph-osd@4
stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@4.service → /usr/lib/systemd/system/ceph-osd@.service.
Running command: /bin/systemctl start ceph-osd@4
--> ceph-volume lvm activate successful for osd ID: 4
---cut here---
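
Afterwards you can verify that the new OSD has joined the cluster, for example with:

ceph-2:~ # ceph osd tree
ceph-2:~ # ceph -s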

Instead of running "prepare" and "activate" separately you can run "ceph-volume lvm create ...", which executes both steps and launches the OSD.
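
For example (again assuming vg0/osd4 as above):

ceph-2:~ # ceph-volume lvm create --bluestore --data vg0/osd4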

This way you don't need further partitions, but you won't be able to use DeepSea for automated deployment since SES doesn't support LVM-based OSDs (yet).

So you should not give up; there is a way :-)

Note: because of a compatibility issue between python3 and ceph-volume you should use at least this version:

ceph-2:~ # ceph --version
ceph version 13.2.1-106-g9a1fcb1b6a (9a1fcb1b6a6682c3323a38c52898a94e121f6c15) mimic (stable)

Hope this helps!

Regards,
Eugen


Zitat von Jones de Andrade <johannesrs@xxxxxxxxx>:

Hi Eugen.

I just tried everything again here by removing the sda4 partitions and
letting either salt-run proposal-populate or salt-run state.orch
ceph.stage.configure try to find the free space on those partitions to
work with: unsuccessful again. :(

Just to make things clear: are you telling me that it is completely
impossible to have a ceph "volume" on non-dedicated devices, sharing space
with, for instance, the nodes' swap, boot or main partition?

And that the only possible way to have a functioning ceph distributed
filesystem would be to have in each node at least one disk dedicated to
the operating system and another, independent disk dedicated to the ceph
filesystem?

That would be an awful drawback for our plans if true, but if there is no
other way, we will just have to give up. Could you please answer these two
questions clearly, before we capitulate?  :(

Anyway, thanks a lot, once again,

Jones

On Mon, Sep 3, 2018 at 5:39 AM Eugen Block <eblock@xxxxxx> wrote:

Hi Jones,

I still don't think creating an OSD on a partition will work. The
reason is that SES creates an additional partition per OSD resulting
in something like this:

vdb               253:16   0    5G  0 disk
├─vdb1            253:17   0  100M  0 part /var/lib/ceph/osd/ceph-1
└─vdb2            253:18   0  4,9G  0 part

Even with an external block.db and block.wal on additional devices you
would still need two partitions for the OSD. I'm afraid this can't work
with your setup.

Regards,
Eugen




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



