Re: Multipath configuration for Ceph storage nodes

We use a puppet module to deploy the OSDs.  We give it the devices to
configure from hiera data specific to our different types of storage
nodes.  The module is a fork of
https://github.com/openstack/puppet-ceph.

Ultimately the module ends up running 'ceph-disk prepare [arguments]
/dev/mapper/mpathXX /dev/nvmeXX' (data device, journal device).
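
For illustration, the hiera for one storage node type looks roughly
like this (the key is in the style of upstream puppet-ceph's
ceph::profile::params::osds; the exact names in our fork may differ):

ceph::profile::params::osds:
  '/dev/mapper/mpatha':
    journal: '/dev/nvme0n1p1'
  '/dev/mapper/mpathb':
    journal: '/dev/nvme0n1p2'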

thanks,
Ben



On Wed, Jul 12, 2017 at 12:13 PM,  <bruno.canning@xxxxxxxxxx> wrote:
> Hi Ben,
>
> Thanks for this, much appreciated.
>
> Can I just check: Do you use ceph-deploy to create your OSDs? E.g.:
>
> ceph-deploy disk zap ceph-sn1.example.com:/dev/mapper/disk1
> ceph-deploy osd prepare ceph-sn1.example.com:/dev/mapper/disk1
>
> Best wishes,
> Bruno
>
>
> -----Original Message-----
> From: Benjeman Meekhof [mailto:bmeekhof@xxxxxxxxx]
> Sent: 11 July 2017 18:46
> To: Canning, Bruno (STFC,RAL,SC)
> Cc: ceph-users
> Subject: Re:  Multipath configuration for Ceph storage nodes
>
> Hi Bruno,
>
> We have similar types of nodes and minimal configuration is required (RHEL7-derived OS).  Install the device-mapper-multipath package (or your distro's equivalent), configure /etc/multipath.conf and enable 'multipathd'.  If everything is working, the command 'multipath -ll' should list each multipath device together with its component devices on all paths.
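>
> Concretely, the steps on our nodes amount to roughly the following (a
> sketch for a RHEL7-derived OS; package and service names may differ
> on other distros):
>
>     yum install device-mapper-multipath
>     # edit /etc/multipath.conf (see below), then:
>     systemctl enable multipathd
>     systemctl start multipathd
>     multipath -ll   # should list each mpath device and its paths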
>
> For reference, our /etc/multipath.conf is just these few lines:
>
> defaults {
>         user_friendly_names yes
>         find_multipaths yes
> }
>
> thanks,
> Ben
>
> On Tue, Jul 11, 2017 at 10:48 AM,  <bruno.canning@xxxxxxxxxx> wrote:
>> Hi All,
>>
>>
>>
>> I’d like to know if anyone has experience of configuring multipath on
>> Ceph storage nodes, please, and how best to go about it.
>>
>>
>>
>> We have a number of Dell PowerEdge R630 servers, each of which is
>> fitted with two SAS 12G HBA cards and has two associated Dell MD1400
>> storage units connected to it via HD-Mini to HD-Mini cables; see the
>> attached graphic (ignore the colours: there are two direct connections
>> from the server to each storage unit, and two connections running
>> between the two storage units).
>>
>>
>>
>> Best wishes,
>>
>> Bruno
>>
>>
>>
>>
>>
>> Bruno Canning
>>
>> LHC Data Store System Administrator
>>
>> Scientific Computing Department
>>
>> STFC Rutherford Appleton Laboratory
>>
>> Harwell Oxford
>>
>> Didcot
>>
>> OX11 0QX
>>
>> Tel. +44 (0)1235 446621
>>
>>
>>
>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



