Re: Migrating from block to lvm

Losing a node is not a big deal for us (dual bonded 10G connection to each node).

 

I’m thinking:

  1. Drain node (sketched below)
  2. Redeploy with Ceph Ansible
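
A minimal sketch of what the drain-and-redeploy could look like, assuming a hypothetical host name "node05" and a standard ceph-ansible site.yml (both are assumptions, not from this thread):

    # Mark every OSD on the host out so its data migrates to other nodes
    for id in $(ceph osd ls-tree node05); do ceph osd out "$id"; done
    # Wait until all PGs are active+clean again, then remove the OSDs
    for id in $(ceph osd ls-tree node05); do
        ceph osd purge "$id" --yes-i-really-mean-it
    done
    # Redeploy the emptied node with ceph-ansible, limited to that host
    ansible-playbook site.yml --limit node05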

 

It would require much less hands-on time for our group.

 

I know the churn on the cluster would be high, which was my only concern.

 

Mike

Senior Systems Administrator

Research Computing Services Team

University of Victoria

 

From: Martin Verges <martin.verges@xxxxxxxx>
Date: Friday, November 15, 2019 at 11:52 AM
To: Janne Johansson <icepic.dz@xxxxxxxxx>
Cc: Cave Mike <mcave@xxxxxxx>, ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: [ceph-users] Migrating from block to lvm

 

I would consider doing it host by host, since you should always be able to handle the complete loss of a node. This would be much faster in the end, as you save a lot of time by not migrating data back and forth. However, it can lead to problems if your cluster is not configured to match the performance of the underlying hardware.
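
If you take that route, one way to keep the resulting recovery traffic from starving client I/O is to throttle backfill. A minimal sketch, assuming a Mimic-or-newer cluster with the centralized config store; the values of 1 are placeholders to tune for your hardware:

    # Limit concurrent backfills and active recovery ops per OSD
    ceph config set osd osd_max_backfills 1
    ceph config set osd osd_recovery_max_active 1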

 

--
Martin Verges
Managing director

Mobile: +49 174 9335695
E-Mail: martin.verges@xxxxxxxx
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263

Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx

On Fri, Nov 15, 2019 at 20:46 Janne Johansson <icepic.dz@xxxxxxxxx> wrote:

On Fri, Nov 15, 2019 at 19:40 Mike Cave <mcave@xxxxxxx> wrote:

So would you recommend doing an entire node at the same time, or per-OSD?

 

You should be able to do it per-OSD (or per-disk, in case you run more than one OSD per disk) to minimize data movement over the network, letting the other OSDs on the same host take a bit of the load while you remake the disks one by one. You can use "ceph osd reweight <number> 0.0" to make that particular OSD release its data while still claiming its CRUSH weight on the host, meaning the other disks on the host will, more or less, have to take over its data.

Moving data between disks in the same host usually goes a lot faster than moving it over the network to other hosts.
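
Put together, the per-OSD loop might look like this sketch (osd.12 on /dev/sdc is a hypothetical example, and reusing the OSD id via --osd-id is an assumption about your deployment):

    # Drain this one OSD; the host keeps claiming its full CRUSH weight
    ceph osd reweight 12 0.0
    # Wait until the cluster confirms no data would be lost by removing it
    while ! ceph osd safe-to-destroy osd.12; do sleep 60; done
    ceph osd destroy 12 --yes-i-really-mean-it
    # Wipe the old block-style layout and recreate the OSD on LVM
    ceph-volume lvm zap /dev/sdc --destroy
    ceph-volume lvm create --osd-id 12 --data /dev/sdc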

 

--

May the most significant bit of your life be positive.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

