3-node Ceph with DAS storage and multipath

Hello ceph-users,

 Recently I started preparing a 3-node Ceph cluster (on bare-metal hardware).
 The HW configuration is ready: 3 HPE Synergy 480 Gen10 Compute Module
 servers, each with 2x Intel Xeon-Gold 6252 CPUs (2.1GHz/24-core), 192GB RAM,
 2x300GB HDD for the OS (RHEL 8.6, already installed), and a DAS
 (direct-attached storage) enclosure with 18 x 1.6TB SSD drives inside. I
 attached 6 x 1.6TB SSDs from the DAS to each of the 3 servers (as JBOD).
 Now I see these 6 SSDs as 12 devices, because the DAS storage has two
 redundant paths to each of the disks (sda, sdb, sdc, sdd, sde, sdf,
 sdg, sdh, sdi, sdj, sdk, sdl).
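 Before deciding anything, I first wanted to confirm which sdX pairs are
 really two paths to the same physical disk by matching WWIDs (e.g. from
 `lsblk -dn -o NAME,WWN` or /lib/udev/scsi_id). A small sketch of the
 grouping; the WWID values below are made up for illustration:

```shell
# Group block devices by WWID to see which sdX names are two paths
# to the same physical disk. On the real host the input would come from:
#   lsblk -dn -o NAME,WWN
# (the WWIDs here are invented examples)
sample="sda 0x5000c500a1b2c301
sdg 0x5000c500a1b2c301
sdb 0x5000c500a1b2c302
sdh 0x5000c500a1b2c302"

echo "$sample" \
  | awk '{paths[$2] = paths[$2] " " $1} END {for (w in paths) print w ":" paths[w]}' \
  | sort
# prints:
# 0x5000c500a1b2c301: sda sdg
# 0x5000c500a1b2c302: sdb sdh
```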
 I'm not sure how to handle multipath for the DAS storage properly and
 according to best practices.
 For installation I will use cephadm with the latest Ceph release, Pacific 16.2.7.
 My question is: should I configure multipath in the RHEL 8.6 OS in advance
 (for example, combining sda+sdb into a single dm-multipath device), or
 should I leave cephadm to handle the multipath by itself?
 This is a grey area to me now, and I will be thankful if somebody shares
 their experience.
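 For reference, this is the OS-level setup I was considering before handing
 the disks to cephadm: enabling dm-multipath first so that each disk shows
 up as a single /dev/mapper device. A minimal sketch based on RHEL 8.6
 defaults; the blacklist entries and device names are my assumptions, not
 tested advice:

```conf
# /etc/multipath.conf - minimal sketch, assuming RHEL 8.6 defaults
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
blacklist {
    # keep the internal OS/local disks out of multipath
    # (device names assumed from my lsblk output below)
    devnode "^sdn"
    devnode "^sdo"
}
```

 I would then enable it with `mpathconf --enable --with_multipathd y` and
 verify the resulting maps with `multipath -ll`.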

 Here is output from lsblk:

 [root@compute1 ~]# lsblk
 NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
 sda 8:0 0 1.5T 0 disk
 sdb 8:16 0 1.5T 0 disk
 sdc 8:32 0 1.5T 0 disk
 sdd 8:48 0 1.5T 0 disk
 sde 8:64 0 1.5T 0 disk
 sdf 8:80 0 1.5T 0 disk
 sdg 8:96 0 1.5T 0 disk
 sdh 8:112 0 1.5T 0 disk
 sdi 8:128 0 1.5T 0 disk
 sdj 8:144 0 1.5T 0 disk
 sdk 8:160 0 1.5T 0 disk
 sdl 8:176 0 1.5T 0 disk
 sdn 8:208 0 279.4G 0 disk
 |-sdn1 8:209 0 953M 0 part /boot/efi
 |-sdn2 8:210 0 953M 0 part /boot
 |-sdn3 8:211 0 264.5G 0 part
 | |-rhel_compute1-root 253:0 0 18.6G 0 lvm /
 | |-rhel_compute1-var_log 253:2 0 9.3G 0 lvm /var/log
 | |-rhel_compute1-var_tmp 253:3 0 4.7G 0 lvm /var/tmp
 | |-rhel_compute1-tmp 253:4 0 4.7G 0 lvm /tmp
 | |-rhel_compute1-var 253:5 0 37.3G 0 lvm /var
 | |-rhel_compute1-opt 253:6 0 37.3G 0 lvm /opt
 | |-rhel_compute1-aux1 253:7 0 107.1G 0 lvm /aux1
 | |-rhel_compute1-home 253:8 0 20.5G 0 lvm /home
 | `-rhel_compute1-aux0 253:9 0 25.2G 0 lvm /aux0
 |-sdn4 8:212 0 7.5G 0 part [SWAP]
 `-sdn5 8:213 0 4.7G 0 part /var/log/audit
 sdo 8:224 0 1.5T 0 disk
 `-sdo1 8:225 0 1.5T 0 part
 `-rhel_local_VG-localstor 253:1 0 1.5T 0 lvm /localstor
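 If the OS-level multipath route is the right one, my understanding is that
 the cephadm OSD service spec would then point at the mapper devices instead
 of the raw sdX paths; something like the sketch below (the mpatha..mpathf
 names and host names are assumptions on my side):

```yaml
service_type: osd
service_id: das_ssd_osds
placement:
  hosts:
    - compute1
    - compute2
    - compute3
spec:
  data_devices:
    paths:
      - /dev/mapper/mpatha
      - /dev/mapper/mpathb
      - /dev/mapper/mpathc
      - /dev/mapper/mpathd
      - /dev/mapper/mpathe
      - /dev/mapper/mpathf
```

 which I assume would be applied with `ceph orch apply -i osd_spec.yml`.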

 Regards,
 Kosta
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


