Re: "converting" btrfs osds to xfs?

After I run the 'ceph osd out 123' command, is there a specific ceph
command I can poll so I know when it's OK to kill the OSD daemon and
begin the reformat process?
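The thread doesn't name a specific command for this, but one common approach is to poll the PG summary until no placement groups are still moving data. This is a sketch, assuming the `ceph pg stat` summary line mentions states like "degraded" or "recovering" while re-replication is in progress; the state list and poll interval are guesses, not something confirmed in this thread:

```shell
# Sketch: after 'ceph osd out 123', poll until no PGs report a
# transitional state. The grep pattern is an assumption about what
# 'ceph pg stat' prints while data is being re-replicated.
wait_for_clean() {
    while ceph pg stat | grep -Eq 'degraded|recovering|backfill|peering'; do
        sleep 10
    done
    echo "all PGs clean"
}
```

Once it returns, stopping the OSD daemon should be safe, since every PG that had a copy on the out'd OSD has been fully replicated elsewhere.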

On Tue, May 8, 2012 at 12:38 PM, Sage Weil <sage@xxxxxxxxxxxx> wrote:
> On Tue, 8 May 2012, Tommi Virtanen wrote:
>> On Tue, May 8, 2012 at 8:39 AM, Nick Bartos <nick@xxxxxxxxxxxxxxx> wrote:
>> > I am considering converting some OSDs to xfs (currently running btrfs)
>> > for stability reasons.  I have a couple of ideas for doing this, and
>> > was hoping to get some comments:
>> >
>> > Method #1:
>> > 1.  Check cluster health and make sure data on a specific OSD is
>> > replicated elsewhere.
>> > 2.  Bring down the OSD
>> > 3.  Reformat it to xfs
>> > 4.  Restart OSD
>> > 5.  Repeat 1-4 until all btrfs OSDs have been converted.
>> ...
>> > Obviously #1 seems much more appetizing, but unfortunately I can't
>> > seem to find out how to verify that data on a specific OSD is
>> > replicated elsewhere.  I could go off general cluster health, but that
>> > seems more error prone.
>>
>> You can set the osd weight in crush to 0 and wait for the files inside
>> the osd data dir to disappear. If you want to control how much
>
> You can also just mark the osd 'out' without touching the CRUSH map;
> that'll be easier and a bit more efficient wrt data movement:
>
>        ceph osd out 123
>
> When the osd comes back, you'll need to
>
>        ceph osd in 123
>
> sage
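Putting Sage's out/in commands together with Nick's Method #1, the per-OSD conversion might look like the sketch below. The device path, mount point, service invocation, and `ceph-osd --mkfs` step are assumptions for illustration, not taken from this thread; the DRY_RUN switch is only there so the sequence can be previewed without touching a cluster:

```shell
# Hypothetical per-OSD btrfs -> xfs conversion sketch.
# DRY_RUN=1 prints each command instead of executing it.
convert_osd() {
    id=$1
    dev=$2     # e.g. /dev/sdb1 -- placeholder, not from the thread
    run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }

    run ceph osd out "$id"            # stop mapping data to this OSD
    # ...wait here until re-replication finishes (see thread)...
    run service ceph stop "osd.$id"   # init integration is an assumption
    run mkfs.xfs -f "$dev"            # reformat btrfs -> xfs
    run mount "$dev" "/var/lib/ceph/osd/ceph-$id"
    run ceph-osd -i "$id" --mkfs      # re-create the OSD data dir
    run service ceph start "osd.$id"
    run ceph osd in "$id"             # let data migrate back
}
```

Run with `DRY_RUN=1 convert_osd 123 /dev/sdb1` to preview the steps, then repeat per OSD, waiting for the cluster to settle between each one as Method #1 describes.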

