Re: network change

On Tue, May 8, 2018 at 3:50 PM, James Mauro <jmauro@xxxxxxxxxx> wrote:
> (newbie warning - my first go-round with ceph, doing a lot of reading)
>
> I have a small Ceph cluster, four storage nodes total, three dedicated to
> data (OSDs) and one for metadata. One client machine.
>
> I made a network change. When I installed and configured the cluster, it
> was done using the system’s 10Gb interface information. I now have
> everything on a 100Gb network (IB in Ethernet mode).
>
> My question is, what is the most expedient way for me to change the ceph
> config such that all nodes are using the 100Gb network? Can I shut down
> the cluster, edit one or more .conf files and restart, or do I need to
> re-configure from scratch?

The key part is changing the monitor addresses:
http://docs.ceph.com/docs/luminous/rados/operations/add-or-rm-mons/#changing-a-monitor-s-ip-address
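
For reference, the "messy way" in that doc boils down to roughly the
following (monitor names taken from your ceph -s output; the 192.168.100.x
addresses are just placeholders for whatever your 100Gb interfaces use):

    # grab and inspect the current monmap
    ceph mon getmap -o /tmp/monmap
    monmaptool --print /tmp/monmap

    # stop all three monitors, then rewrite their entries with the new IPs
    monmaptool --rm dgx-srv-04 --rm dgx-srv-05 --rm dgx-srv-06 /tmp/monmap
    monmaptool --add dgx-srv-04 192.168.100.46:6789 \
               --add dgx-srv-05 192.168.100.48:6789 \
               --add dgx-srv-06 192.168.100.50:6789 /tmp/monmap

    # inject the edited map on each monitor host while its mon is stopped
    ceph-mon -i dgx-srv-04 --inject-monmap /tmp/monmap

    # repeat the inject on dgx-srv-05 and dgx-srv-06, fix up ceph.conf,
    # then start the monitors again

Do read the doc first, though; injecting a bad monmap can leave you
without a quorum.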

Once your mons are happy with their new addresses, you can just update
ceph.conf for the rest of the services (including OSDs).
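
In ceph.conf that mostly means pointing everything at the new subnet,
something like this (again, 192.168.100.0/24 is a stand-in for your actual
100Gb network):

    [global]
    mon_host = 192.168.100.46,192.168.100.48,192.168.100.50
    public network = 192.168.100.0/24
    # only if you also want a separate replication/backfill network:
    # cluster network = 192.168.100.0/24

Push that out to all nodes (and the client), then restart the OSD, MDS and
mgr daemons; they pick up the new monitor addresses and bind within the new
public network when they start.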

By the way, I notice you're on an 11.x version, which is EOL.  It would
be wise to update your cluster to the 12.x ("luminous") stable series
before doing the address updates; that way if you run into any issues
you'll be using a version that's better tested and more familiar to
everyone.
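
The kraken -> luminous upgrade itself is the usual rolling package
upgrade; very roughly, and assuming Ubuntu with the ceph.com luminous repo
enabled (check the luminous release notes for the authoritative sequence):

    # on each node, after switching the apt sources to luminous
    sudo apt update && sudo apt install ceph

    # restart daemons in order: monitors first, then mgr, OSDs, MDS
    sudo systemctl restart ceph-mon.target
    sudo systemctl restart ceph-mgr.target
    sudo systemctl restart ceph-osd.target
    sudo systemctl restart ceph-mds.target

    # once every daemon reports 12.2.x
    sudo ceph versions
    sudo ceph osd require-osd-release luminous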

John

>
> Thanks
> Jim
>
>
> cepher@srv-01:~$ sudo ceph --version
> ceph version 11.2.1 (e0354f9d3b1eea1d75a7dd487ba8098311be38a7)
> cepher@srv-01:~$ sudo ceph -s
>     cluster f201e454-9c73-4b29-abe1-48dd609266a6
>      health HEALTH_OK
>      monmap e4: 3 mons at
> {dgx-srv-04=10.33.3.46:6789/0,dgx-srv-05=10.33.3.48:6789/0,dgx-srv-06=10.33.3.50:6789/0}
>             election epoch 12, quorum 0,1,2 dgx-srv-04,dgx-srv-05,dgx-srv-06
>       fsmap e5: 1/1/1 up {0=dgx-srv-03=up:active}
>         mgr active: dgx-srv-06 standbys: dgx-srv-04, dgx-srv-05
>      osdmap e114: 18 osds: 18 up, 18 in
>             flags sortbitwise,require_jewel_osds,require_kraken_osds
>       pgmap v7946: 3072 pgs, 3 pools, 2148 bytes data, 20 objects
>             99684 MB used, 26717 GB / 26814 GB avail
>                 3072 active+clean
> cepher@srv-01:~$ uname -a
> Linux srv-01 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018
> x86_64 x86_64 x86_64 GNU/Linux
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



