Re: Add filestore based osd to a luminous cluster

Dear Paul and Alfredo,

Downgrading to ceph-deploy 1.5.38 did not work either. I then labeled the journal partition (i.e. /dev/nvme0n1p12) with parted, which added a GPT signature (PTUUID/PTTYPE) to this specific partition:

/dev/nvme0n1p12: PTUUID="8a775205-1364-43d9-820e-c4d3a0d9f9e3" PTTYPE="gpt" PARTLABEL="ceph journal" PARTUUID="0fdff4d6-2833-4e6e-a832-9fb2452bc396"
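
For reference, the parted step was roughly the following (reconstructed from the blkid output above rather than from my shell history, so treat it as a sketch):

parted /dev/nvme0n1p12 mklabel gpt    # writes a GPT signature (PTUUID/PTTYPE) inside the partition itself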

Then I retried with ceph-deploy 2.0 using the following command, and it succeeded; before labeling the partition as GPT with parted, the same command had failed.

ceph-deploy osd create --filestore --journal /dev/nvme0n1p12 --data /dev/sdl ank-ceph10

FYI, in case anyone else runs into a problem like this: this workaround did the trick for me.

My best,

Huseyin
On 11 Jul 2018 20:31 +0300, Alfredo Deza <adeza@xxxxxxxxxx> wrote:
On Wed, Jul 11, 2018 at 12:57 PM, Huseyin Cotuk <hcotuk@xxxxxxxxx> wrote:
Hi Paul,

Thanks for your reply. I did not specify any special parameter while
upgrading to Luminous, so this ceph-deploy version is the one coming from
the official Debian Luminous repo.

That is because ceph-volume came out in Luminous and supports almost
everything ceph-disk provided.

Another alternative is to allow ceph-volume to take over the
management of your ceph-disk OSDs, although that is not provided via
ceph-deploy.
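
Roughly, that takeover would look something like the following, run directly on the OSD host (the OSD path and id below are just examples):

ceph-volume simple scan /var/lib/ceph/osd/ceph-0          # capture the ceph-disk OSD metadata as JSON under /etc/ceph/osd/
ceph-volume simple activate 0 <fsid from the scan JSON>   # re-activate that OSD through ceph-volume instead of ceph-disk/udev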

In the meantime, you can still use ceph-disk in Luminous, but, as Paul
mentioned, it will require you to downgrade ceph-deploy to 1.5.39.
I will try downgrading ceph-deploy and adding the OSD again. To prevent any
inconsistency, maybe you can check the version of the ceph-deploy package in
the repository.

My best,
Huseyin
On 11 Jul 2018 19:49 +0300, Paul Emmerich <paul.emmerich@xxxxxxxx> wrote:

BlueStore is really stable and mature nowadays.

You seem to be using ceph-deploy 2.0.0, which I would not call mature and
stable at the moment ;)
Anyway, it uses ceph-volume instead of ceph-disk, and I think you have to
specify the actual partition here.
But I'd just downgrade to ceph-deploy 1.5.39 when running Luminous (not a
long-term solution, as ceph-disk will be removed in Nautilus).
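
With ceph-deploy 2.0.0 the invocation would then look something like this, pointing --journal at the free journal partition rather than at the whole NVMe device:

ceph-deploy osd create --filestore --journal /dev/nvme0n1p12 --data /dev/sdl ank-ceph10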

Paul

2018-07-11 18:37 GMT+02:00 Huseyin Cotuk <hcotuk@xxxxxxxxx>:

Hello everybody,

I have just upgraded my Ceph cluster from Kraken to Luminous. I want to stick
with the FileStore-based object store for my OSDs until Red Hat announces
BlueStore as stable; it is still in technical preview.

So my question is: "What is the right procedure for adding a FileStore-based
OSD to the existing cluster with an NVMe journal?"

My NVMe journal device contains Ceph journal partitions for the existing OSDs
created with Kraken.

root@ank-ceph10:~# lsblk /dev/nvme0n1
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0 0 2.9T 0 disk
|-nvme0n1p1 259:1 0 40G 0 part
|-nvme0n1p2 259:2 0 40G 0 part
|-nvme0n1p3 259:3 0 40G 0 part
|-nvme0n1p4 259:4 0 40G 0 part
|-nvme0n1p5 259:5 0 40G 0 part
|-nvme0n1p6 259:6 0 40G 0 part
|-nvme0n1p7 259:7 0 40G 0 part
|-nvme0n1p8 259:8 0 40G 0 part
|-nvme0n1p9 259:9 0 40G 0 part
|-nvme0n1p10 259:10 0 40G 0 part
|-nvme0n1p11 259:11 0 40G 0 part
`-nvme0n1p12 259:12 0 40G 0 part

When I try to add a new OSD with the following command,

ceph-deploy osd create --filestore --journal /dev/nvme0n1 --data /dev/sdl ank-ceph10

I get the following error:

[ank-ceph10][INFO ] Running command: sudo /usr/sbin/ceph-volume --cluster
ceph lvm create --filestore --data /dev/sdl --journal /dev/nvme0n1
[ank-ceph10][WARNIN] --> RuntimeError: unable to use device
[ank-ceph10][DEBUG ] Running command: /usr/bin/ceph-authtool
--gen-print-key
[ank-ceph10][DEBUG ] Running command: /usr/bin/ceph --cluster ceph --name
client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i -
osd new 2d203f03-e547-4a8a-9140-53f48ed52e06
[ank-ceph10][DEBUG ] Running command: vgcreate --force --yes
ceph-35465726-457d-439d-9f59-a8a050f5a486 /dev/sdl
[ank-ceph10][DEBUG ] stderr: /run/lvm/lvmetad.socket: connect failed: No
such file or directory
[ank-ceph10][DEBUG ] WARNING: Failed to connect to lvmetad. Falling back
to internal scanning.
[ank-ceph10][DEBUG ] stdout: Physical volume "/dev/sdl" successfully
created
[ank-ceph10][DEBUG ] stdout: Volume group
"ceph-35465726-457d-439d-9f59-a8a050f5a486" successfully created
[ank-ceph10][DEBUG ] Running command: lvcreate --yes -l 100%FREE -n
osd-data-2d203f03-e547-4a8a-9140-53f48ed52e06
ceph-35465726-457d-439d-9f59-a8a050f5a486
[ank-ceph10][DEBUG ] stderr: /run/lvm/lvmetad.socket: connect failed: No
such file or directory
[ank-ceph10][DEBUG ] WARNING: Failed to connect to lvmetad. Falling back
to internal scanning.
[ank-ceph10][DEBUG ] stdout: Logical volume
"osd-data-2d203f03-e547-4a8a-9140-53f48ed52e06" created.
[ank-ceph10][DEBUG ] --> blkid could not detect a PARTUUID for device:
/dev/nvme0n1
[ank-ceph10][DEBUG ] --> Was unable to complete a new OSD, will rollback
changes
[ank-ceph10][DEBUG ] --> OSD will be fully purged from the cluster,
because the ID was generated
[ank-ceph10][DEBUG ] Running command: ceph osd purge osd.119
--yes-i-really-mean-it
[ank-ceph10][DEBUG ] stderr: purged osd.119
[ank-ceph10][ERROR ] RuntimeError: command returned non-zero exit status:
1
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-volume
--cluster ceph lvm create --filestore --data /dev/sdl --journal /dev/nvme0n1
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs

AFAIU, blkid looks for a PARTUUID on the journal device, but the whole device
does not have one; only the partitions do. I do not want to reformat my journal
device. Any recommendations on this?

root@ank-ceph10:~# blkid /dev/nvme0n1*
/dev/nvme0n1: PTUUID="a6431404-5693-4076-98c9-ffbe84224e1b" PTTYPE="gpt"
/dev/nvme0n1p1: PARTLABEL="ceph journal"
PARTUUID="43440fab-a30f-4e42-9c15-35f375dde033"
/dev/nvme0n1p10: PARTLABEL="ceph journal"
PARTUUID="c9c9f459-98a1-4a6a-9350-9942a6fc02f6"
/dev/nvme0n1p11: PARTLABEL="ceph journal"
PARTUUID="3f64ddc1-ac5d-4b7b-ace3-ad35d44e4fd3"
/dev/nvme0n1p12: PARTLABEL="ceph journal"
PARTUUID="0fdff4d6-2833-4e6e-a832-9fb2452bc396"
/dev/nvme0n1p2: PARTLABEL="ceph journal"
PARTUUID="5ce0b4e8-3571-4297-974a-9ef648fac1a8"
/dev/nvme0n1p3: PARTLABEL="ceph journal"
PARTUUID="228cee11-06e3-4691-963a-77e74e099716"
/dev/nvme0n1p4: PARTLABEL="ceph journal"
PARTUUID="b7c09c3e-e4ae-42be-8686-5daf9e40c407"
/dev/nvme0n1p5: PARTLABEL="ceph journal"
PARTUUID="60d9115c-ebb1-4eaf-85ae-31379a5e9450"
/dev/nvme0n1p6: PARTLABEL="ceph journal"
PARTUUID="5a057b30-b697-4598-84c0-1794c608d70c"
/dev/nvme0n1p7: PARTLABEL="ceph journal"
PARTUUID="c22c272d-5b75-40ca-970e-87b1b303944c"
/dev/nvme0n1p8: PARTLABEL="ceph journal"
PARTUUID="ed9fd194-1490-42b1-a2b4-ae36b2a4f8ce"
/dev/nvme0n1p9: PARTLABEL="ceph journal"
PARTUUID="d5589315-4e47-49c4-91f5-48e1a55011d2"

While using Kraken, I used to add OSDs with journals via the following
command:

ceph-deploy osd prepare ank-ceph10:sdl:/dev/nvme0n1


Thanks for any recommendation.

Best regards,

Dr. Huseyin COTUK





--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
