Re: Quincy Ceph-orchestrator and multipath SAS

Ceph can be forced to use multipath, but in my experience it is painful and
manual at best. The orchestrator was designed around low-cost/commodity
hardware, and multipath is a sophistication it does not yet address. The
orchestrator sees all of the available device paths with no association
between them, so it is not a good idea to use it for device management in
that environment. I've tried to construct a device filter that looks like
/dev/mpath*, but that doesn't work. You could raise a feature request.

The good news is that once you have manually created a multipath OSD, the
mainline OSD code recognizes it and treats it appropriately - it knows the
relationship between "dm" and "mpath" devices. Just make sure that you use
the multipath device name when you create the device (LVM or otherwise)
that is passed to ceph-volume.
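As a rough sketch, using one of the multipath devices from your lsblk
output (mpathh here; the VG/LV names are just examples), the manual path
looks something like this - note that every step references the
/dev/mapper name, never the underlying /dev/sdX path:

```shell
# Put LVM directly on the multipath device name, not on /dev/sdc
pvcreate /dev/mapper/mpathh
vgcreate ceph-mpathh /dev/mapper/mpathh
lvcreate -l 100%FREE -n osd-data ceph-mpathh

# Hand the resulting LV to ceph-volume to build the OSD
ceph-volume lvm create --bluestore --data ceph-mpathh/osd-data
```

These commands obviously need a live cluster and root on the OSD host, so
treat them as an outline rather than a copy-paste recipe.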

On Fri, May 12, 2023 at 11:17 AM Deep Dish <deeepdish@xxxxxxxxx> wrote:

> Hello,
>
> I have a few hosts about to add into a cluster that have a multipath
> storage config for SAS devices.    Is this supported on Quincy, and how
> would ceph-orchestrator and / or ceph-volume handle multipath storage?
>
> Here's a snip of lsblk output of a host in question:
>
> # lsblk
>
> NAME                  MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
> ...
> sdc                     8:32   0   9.1T  0 disk
> └─mpathh              253:4    0   9.1T  0 mpath
> sdd                     8:48   0   9.1T  0 disk
> └─mpathi              253:5    0   9.1T  0 mpath
> sde                     8:64   0   7.3T  0 disk
> └─mpathj              253:6    0   7.3T  0 mpath
> sdf                     8:80   0   7.3T  0 disk
> └─mpathl              253:7    0   7.3T  0 mpath
> sdg                     8:96   0   7.2T  0 disk
> └─mpathk              253:8    0   7.2T  0 mpath
> sdh                     8:112  0   7.3T  0 disk
> └─mpathe              253:9    0   7.3T  0 mpath
> sdi                     8:128  0   7.3T  0 disk
> └─mpathg              253:10   0   7.3T  0 mpath
> sdj                     8:144  0   7.3T  0 disk
> └─mpathf              253:11   0   7.3T  0 mpath
> sdk                     8:160  0   7.3T  0 disk
> └─mpathc              253:12   0   7.3T  0 mpath
> sdl                     8:176  0   7.3T  0 disk
> └─mpathd              253:13   0   7.3T  0 mpath
> ...
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
