Re: Multipath and cephadm

Thanks, Peter, this works.

Until now, I had the impression that cephadm would only accept 'bare' disks as OSD devices, but indeed it will swallow any kind of block device or LV that you prepare for it on the OSD host.

Regards,
Thomas

On 1/25/22 20:21, Peter Childs wrote:
This came from a previous thread that I started last year, so you may want
to look in the archive.

https://www.mail-archive.com/ceph-users@xxxxxxx/msg11572.html

Although the doc page it refers to looks to have disappeared :(

You can use "ceph orch daemon add osd <host>:<path to multipath device>"

I've been using

# $1 is the multipath alias, e.g. mpatha
dev=$1
pvcreate /dev/mapper/${dev}
vgcreate ${dev}-vg /dev/mapper/${dev}
lvcreate -l 100%FREE -n ${dev}-lv ${dev}-vg
ceph orch daemon add osd dampwood48:${dev}-vg/${dev}-lv

to create OSDs on the multipath devices. It's a terrible dev-ops script, but it
works.
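If it's saved as, say, make-osd.sh (a file name I'm using just for illustration), it gets run once per multipath alias:

# Hypothetical invocation, one OSD per multipath device
./make-osd.sh mpatha
./make-osd.sh mpathb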

I've not currently got a method that uses the YAML device-description approach
(which would be much more ideal), hence there is no obvious way to use
separate db_devices, but this does appear to work for me as far as it goes. The spec form I mean is sketched below.
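Roughly like this, mirroring the specs quoted further down in this thread (purely illustrative: the device paths and host pattern are placeholders, and I don't have a variant of this that actually works with multipath):

service_type: osd
service_id: osd-multipath
placement:
    host_pattern: 'dampwood*'
spec:
    data_devices:
      paths:
        - /dev/mapper/mpatha
        - /dev/mapper/mpathb
    db_devices:
      paths:
        - /dev/nvme0n1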

Hope that helps

Peter Childs




On Tue, 25 Jan 2022, 17:53 Thomas Roth, <t.roth@xxxxxx> wrote:

Would like to know that as well.

I have the same setup - cephadm, Pacific, CentOS 8, and a host with a
number of HDDs which are all connected via two paths.
There is no way to use these without multipath:

  > ceph orch daemon add osd serverX:/dev/sdax

  > Cannot update volume group ceph-51f8b9b0-2917-431d-8a6d-8ff90440641b
with duplicate PV devices

(because sdax == sdce, etc.)
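One way to confirm that two sd nodes really are paths to the same LUN (standard device-mapper tooling, assumed to be installed on the host):

# Show each multipath map together with its component sd paths
multipath -ll
# Cross-check the serials of the suspect devices
lsblk -o NAME,KNAME,TYPE,SIZE,SERIAL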

and with multipath, it fails with

  > ceph orch daemon add osd serverX:/dev/mapper/mpathbq

  > podman: stderr -->  IndexError: list index out of range
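For what it's worth, these commands should show which devices cephadm/ceph-volume actually enumerate on the host:

# From the admin node: devices the orchestrator considers usable
ceph orch device ls serverX
# Directly on the host, via the cephadm wrapper
cephadm ceph-volume inventory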


Quite strange that the 'future of storage' does not know how to handle
multipath devices?

Regards,
Thomas


On 12/23/21 18:40, Michal Strnad wrote:
Hi all.

We have a problem using disks accessible via multipath. We are using
cephadm for deployment, the Pacific release in the containers, CentOS 8 Stream on the
servers, and the following LVM configuration:

devices {
          multipath_component_detection = 1
}
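(That option is set in /etc/lvm/lvm.conf on each host; the effective value can be checked with lvmconfig:)

# Print the active value of the multipath detection option
lvmconfig devices/multipath_component_detection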



We tried several methods.

1.) The direct approach:

cephadm shell
ceph orch daemon add osd serverX:/dev/mapper/mpatha

Errors are attached in 1.output file.



2.) Using an OSD specification that references the mpathX devices:

service_type: osd
service_id: osd-spec-serverX
placement:
    host_pattern: 'serverX'
spec:
    data_devices:
      paths:
        - /dev/mapper/mpathaj
        - /dev/mapper/mpathan
        - /dev/mapper/mpatham
    db_devices:
      paths:
        - /dev/sdc
    encrypted: true
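(For reference, a spec like this is applied through the orchestrator; the file name is illustrative:)

# Preview what cephadm would create, without touching any disks
ceph orch apply -i osd-spec-serverX.yaml --dry-run
ceph orch apply -i osd-spec-serverX.yaml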

Errors are attached in 2.output file.


3.) Using an OSD specification that references the dm-X devices:

service_type: osd
service_id: osd-spec-serverX
placement:
    host_pattern: 'serverX'
spec:
    data_devices:
      paths:
        - /dev/dm-1
        - /dev/dm-2
        - /dev/dm-3
        - /dev/dm-X
    db_devices:
      size: ':2TB'
    encrypted: true

Errors are attached in 3.output file.

What is the right method for multipath deployments? I didn't find much
on this topic.

Thank you

Michal



--
--------------------------------------------------------------------
Thomas Roth
HPC Department

GSI Helmholtzzentrum für Schwerionenforschung GmbH
Planckstr. 1, 64291 Darmstadt, http://www.gsi.de/



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



