Hi David.
I can't find any special characters, and the file command reports ASCII text:
file osd-spec-serverX.yml
osd-spec-serverX.yml: ASCII text
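For anyone who wants to double-check the file, something like this (assuming GNU grep with PCRE support) should print every line that still contains a non-ASCII byte:

# flag any byte outside the 7-bit ASCII range, with line numbers
grep -nP '[^\x00-\x7F]' osd-spec-serverX.yml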
The problem is probably elsewhere. Does anyone use multipath with cephadm?
Thank you
--
Michal Strnad
On 12/24/21 12:47 PM, David Caro wrote:
I did not look very deeply, but from the last log it seems there are some
UTF characters somewhere (a Greek phi?) and the code is not handling them
well when logging, trying to use ASCII.
On Thu, 23 Dec 2021, 19:02 Michal Strnad, <michal.strnad@xxxxxxxxx> wrote:
Hi all.
We have a problem using disks that are accessible via multipath. We are
using cephadm for deployment, the Pacific release for the containers,
CentOS 8 Stream on the servers, and the following LVM configuration:
devices {
    multipath_component_detection = 1
}
We tried several methods.
1.) Direct approach.
cephadm shell ceph orch daemon add osd serverX:/dev/mapper/mpatha
The errors are attached in the file 1.output.
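In case it helps, this is roughly how the device can be checked from the orchestrator's side (a sketch; serverX and mpatha are just the names from above):

# how the orchestrator inventory reports the host's devices
cephadm shell -- ceph orch device ls serverX --wide

# how ceph-volume itself classifies the multipath device
cephadm ceph-volume -- inventory /dev/mapper/mpatha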
2.) Using an OSD specification with the mpathX devices.
service_type: osd
service_id: osd-spec-serverX
placement:
  host_pattern: 'serverX'
spec:
  data_devices:
    paths:
      - /dev/mapper/mpathaj
      - /dev/mapper/mpathan
      - /dev/mapper/mpatham
  db_devices:
    paths:
      - /dev/sdc
  encrypted: true
The errors are attached in the file 2.output.
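As a sanity check of the spec itself, a dry run can be done before applying it (a sketch, run from inside "cephadm shell" with the spec file available there):

# preview which OSDs the spec would create, without deploying anything
ceph orch apply -i osd-spec-serverX.yml --dry-run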
3.) Using an OSD specification with the dm-X devices.
service_type: osd
service_id: osd-spec-serverX
placement:
  host_pattern: 'serverX'
spec:
  data_devices:
    paths:
      - /dev/dm-1
      - /dev/dm-2
      - /dev/dm-3
      - /dev/dm-X
  db_devices:
    size: ':2TB'
  encrypted: true
The errors are attached in the file 3.output.
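For completeness, the multipath topology on the host can be cross-checked with the usual tools (device names as above; note that the dm-X numbering is not stable across reboots):

# show the multipath maps and their underlying paths
multipath -ll

# resolve which dm-X device currently backs a given map
readlink -f /dev/mapper/mpathaj
lsblk /dev/mapper/mpathaj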
What is the right method for multipath deployments? I didn't find much
on this topic.
Thank you
Michal
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx