Re: 3-node Ceph with DAS storage and multipath


It is not really a problem to use Synergy in this case, but you must remember that it goes against the principles of Ceph.  

Ceph is built on the idea that you use (relatively) inexpensive hardware and that redundancy comes from data being replicated or erasure-coded over independent nodes.  When you use Synergy or any other blade infrastructure, you had better keep in mind that while the hardware should be redundant, there are quite a lot of components that all your Ceph nodes will share.  And that is not only hardware! In my experience the weak point of blade infra is mostly the very complex firmware running on all these components.  Especially firmware upgrades of Virtual Connects are something I won't touch with a ten-foot pole anymore, let alone when a full Ceph cluster is running on a single pair of VCs...

For both your POC and your production you certainly need to use multipathd, as you said. If you don't, your entire cluster will fail at once if the backplane behind those drives fails.  Firmware updates will also impact your entire cluster without multipath, since they are applied one backplane at a time.  Multipathd works pretty flawlessly in my experience.  You first create the multipath devices and then let Ceph use those multipath devices instead of the two underlying /dev/sd* devices, roughly as sketched below.
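To make that concrete, a minimal sketch of pointing Ceph at the multipath device rather than the individual paths (the name diskmpath1 is taken from your example below; this assumes a ceph-volume based deployment):

    # verify both paths are grouped under one device-mapper device
    multipath -ll diskmpath1

    # create the OSD on the multipath device, not on /dev/sda or /dev/sdb
    ceph-volume lvm create --data /dev/mapper/diskmpath1

With cephadm the same idea applies, you just hand /dev/mapper/diskmpath1 to the orchestrator instead of calling ceph-volume yourself.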

That being said, since you can build a good Ceph cluster with cheap hardware, you can certainly do it with overly expensive hardware.  If you really want to use these blades for a production Ceph cluster, I would at least start with three enclosures and one compute node per enclosure. Define these as different racks in Ceph and use failure domain = rack (see the sketch below).  That way a single enclosure problem cannot take down your entire cluster and you can do firmware upgrades without the panic attacks 😊
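A rough sketch of how that could look in CRUSH (rack, host, pool and rule names are just placeholders for illustration):

    # one rack bucket per enclosure, hosts moved into their rack
    ceph osd crush add-bucket rack1 rack
    ceph osd crush add-bucket rack2 rack
    ceph osd crush add-bucket rack3 rack
    ceph osd crush move rack1 root=default
    ceph osd crush move rack2 root=default
    ceph osd crush move rack3 root=default
    ceph osd crush move node1 rack=rack1
    ceph osd crush move node2 rack=rack2
    ceph osd crush move node3 rack=rack3

    # replicated rule that spreads copies across racks, applied to a pool
    ceph osd crush rule create-replicated rack_rule default rack
    ceph osd pool set mypool crush_rule rack_rule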

Mvg,

Dieter Roels

-----Original Message-----
From: Kostadin Bukov <kostadin.bukov@xxxxxxxxxxxx> 
Sent: maandag 23 mei 2022 12:08
To: ceph-users@xxxxxxx
Subject:  Re: 3-node Ceph with DAS storage and multipath

For shared storage we use multipathd (the multipath daemon) and combine two raw devices, for example /dev/sda (block device 1 visible via path A) and /dev/sdb (block device 1 visible via path B), into a multipath device named, for example, diskmpath1. Once the multipath configuration is in place, the device '/dev/mapper/diskmpath1' appears, and on top of '/dev/mapper/diskmpath1' we create a physical volume, for example:

[root@comp1 ~]# pvcreate /dev/mapper/diskmpath1
  Physical volume "/dev/mapper/diskmpath1" successfully created
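For reference, the multipath configuration that makes '/dev/mapper/diskmpath1' appear would look roughly like this (the WWID below is made up purely for illustration; the real one comes from the output of 'multipath -ll'):

    # /etc/multipath.conf
    multipaths {
        multipath {
            wwid   36000d310049b2d000000000000000001
            alias  diskmpath1
        }
    }

    [root@comp1 ~]# systemctl reload multipathd
    [root@comp1 ~]# multipath -ll diskmpath1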

So I'm trying to find out whether this multipath setup is also correct to use for a Ceph deployment.
Once I figure out how to proceed with the multipath devices, my plan is to install Ceph using the official install method:
- Cephadm installs and manages a Ceph cluster using containers and systemd, with tight integration with the CLI and dashboard GUI.
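
If that is the route, a minimal sketch of the bootstrap and of handing the multipath device to the orchestrator (the IP address is a placeholder; comp1 matches the hostname from the example above):

    # bootstrap the first node
    cephadm bootstrap --mon-ip 192.168.1.10

    # add an OSD backed by the multipath device instead of /dev/sdX
    ceph orch daemon add osd comp1:/dev/mapper/diskmpath1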

Regards,
Kosta 




