On Tue, 8 May 2012, Tommi Virtanen wrote:
> On Tue, May 8, 2012 at 8:39 AM, Nick Bartos <nick@xxxxxxxxxxxxxxx> wrote:
> > I am considering converting some OSDs to xfs (currently running btrfs)
> > for stability reasons. I have a couple of ideas for doing this, and
> > was hoping to get some comments:
> >
> > Method #1:
> > 1. Check cluster health and make sure data on a specific OSD is
> >    replicated elsewhere.
> > 2. Bring down the OSD.
> > 3. Reformat it to xfs.
> > 4. Restart the OSD.
> > 5. Repeat 1-4 until all btrfs OSDs have been converted.
> ...
> > Obviously #1 seems much more appetizing, but unfortunately I can't
> > seem to find out how to verify that data on a specific OSD is
> > replicated elsewhere. I could go off general cluster health, but that
> > seems more error-prone.
>
> You can set the osd weight in crush to 0 and wait for the files inside
> the osd data dir to disappear. If you want to control how much
> bandwidth is consumed for this transfer, you can also drop the weight
> in e.g. 0.1 decrements. That should give you enough feedback from "du"
> or "df" to be comfortable with the fact that your data is actually
> moving elsewhere.

You can also just mark the osd 'out' without touching the CRUSH map;
that'll be easier and a bit more efficient wrt data movement:

	ceph osd out 123

When the osd comes back, you'll need to

	ceph osd in 123

sage

> I'd recommend completely removing the osd, and creating a new one; you
> can even reuse the osd id. Just don't try to copy the files over
> from one filesystem to another; the details of the btrfs interaction are
> more low-level than what a tar or cp can capture.
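
For anyone following the 'out' route end to end, here is a rough sketch
for a single osd. It assumes osd id 123, the default data dir
/var/lib/ceph/osd/ceph-123, and a data disk at /dev/sdb1; the init-script
invocation, mkfs flags, and re-initialization steps are illustrative and
will vary with your version, ceph.conf, and auth setup (with cephx you
would also need --mkkey plus a 'ceph auth add'):

	# Stop placing data on the osd and let recovery drain it.
	ceph osd out 123

	# Wait for recovery to finish; "du" on the data dir should also
	# shrink toward zero as PGs migrate off.
	while ! ceph health | grep -q HEALTH_OK; do sleep 60; done

	# Stop the daemon and reformat the data disk as xfs.
	/etc/init.d/ceph stop osd.123
	umount /var/lib/ceph/osd/ceph-123
	mkfs.xfs -f /dev/sdb1
	mount /dev/sdb1 /var/lib/ceph/osd/ceph-123

	# Re-initialize the (now empty) osd data dir, reusing the same id,
	# then restart it and put it back into the data placement.
	ceph mon getmap -o /tmp/monmap
	ceph-osd -i 123 --mkfs --monmap /tmp/monmap
	/etc/init.d/ceph start osd.123
	ceph osd in 123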
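
Similarly, a sketch of the gradual crush-weight drain TV describes,
assuming osd id 123 starts at crush weight 1.0; the polling interval is
arbitrary and the exact reweight syntax may vary between ceph versions:

	# Drop the crush weight in 0.1 decrements so recovery traffic
	# stays bounded, letting the cluster settle between steps.
	for w in 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0.0; do
		ceph osd crush reweight osd.123 $w
		while ! ceph health | grep -q HEALTH_OK; do sleep 60; done
		du -sh /var/lib/ceph/osd/ceph-123   # should shrink each pass
	done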