Re: Migrating from block to lvm


 



Good points, thank you for the insight.

 

Given that I’m hosting the journals (WAL/block.db) on SSDs, would I need to redo all the OSDs hosted on each journal SSD at the same time? I’m fairly sure that would be the case.
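
For reference, a quick way to see which OSDs share a given db SSD is to look at the block.db symlinks on the host; a rough sketch, assuming the default OSD data paths:

  # Each bluestore OSD keeps a block.db symlink pointing at its db device/partition
  ls -l /var/lib/ceph/osd/ceph-*/block.db

Every OSD whose symlink points at a partition of the same SSD would be in the same boat.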

 

 

Senior Systems Administrator

Research Computing Services Team

University of Victoria

O: 250.472.4997

 

From: Janne Johansson <icepic.dz@xxxxxxxxx>
Date: Friday, November 15, 2019 at 11:46 AM
To: Cave Mike <mcave@xxxxxxx>
Cc: Paul Emmerich <paul.emmerich@xxxxxxxx>, ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: [ceph-users] Migrating from block to lvm

 

On Fri, 15 Nov 2019 at 19:40, Mike Cave <mcave@xxxxxxx> wrote:

So would you recommend doing an entire node at the same time or per-OSD?

 

You should be able to do it per-OSD (or per-disk, if you run more than one OSD per disk) to minimize data movement over the network, letting the other OSDs on the same host absorb some of the load while you recreate the disks one by one. You can use "ceph osd reweight <number> 0.0" to make that particular OSD release its data while still counting its full CRUSH weight towards the host, which means the other disks on that host will take over most of its data.
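
A rough sketch of that per-OSD drain (the id 12 is just an example, not from this thread):

  # Drain one OSD while the host keeps its full CRUSH weight,
  # so its data mostly lands on the other OSDs in the same host
  ceph osd reweight 12 0.0
  # Watch backfill until the OSD holds no PGs
  ceph -s
  # Optionally confirm it can be removed without risking data
  ceph osd safe-to-destroy osd.12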

Moving data between disks in the same host is usually much faster than moving it over the network to other hosts.
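
Once an OSD is empty, the rebuild itself might look roughly like this, assuming OSD 12 sits on /dev/sdc with its block.db on an SSD partition (device names and ids are placeholders; the old db partition may also need wiping):

  systemctl stop ceph-osd@12
  ceph osd destroy 12 --yes-i-really-mean-it
  # Wipe the old data disk, including any LVM metadata
  ceph-volume lvm zap /dev/sdc --destroy
  # Recreate it as an LVM-backed bluestore OSD, reusing the same id
  ceph-volume lvm create --osd-id 12 --data /dev/sdc --block.db /dev/nvme0n1p3
  # Restore the reweight once the new OSD is up and in
  ceph osd reweight 12 1.0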

 

--

May the most significant bit of your life be positive.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
