Re: "converting" btrfs osds to xfs?

On Thu, May 10, 2012 at 3:44 PM, Nick Bartos <nick@xxxxxxxxxxxxxxx> wrote:
> After I run the 'ceph osd out 123' command, is there a specific ceph
> command I can poll so I know when it's OK to kill the OSD daemon and
> begin the reformat process?

Good question! "ceph -s" will show you that. Below is output from a run
where I ran "ceph osd out 1" on a cluster of 3 OSDs. Watch the
active+clean counts go up, the active+recovering counts go down, and
the "degraded" percentage drop. The last line is an example of an "all
done" state.

2012-05-10 17:19:47.376864    pg v144: 24 pgs: 14 active+clean, 10
active+recovering; 180 MB data, 100217 MB used, 173 GB / 285 GB avail;
88/132 degraded (66.667%)

2012-05-10 17:19:59.220607    pg v146: 24 pgs: 19 active+clean, 5
active+recovering; 180 MB data, 100227 MB used, 173 GB / 285 GB avail;
24/132 degraded (18.182%)

2012-05-10 17:20:16.522978    pg v148: 24 pgs: 24 active+clean; 180 MB
data, 100146 MB used, 173 GB / 285 GB avail
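
If you'd rather script the wait than watch it by hand, a minimal sketch
is the loop below. It just greps the human-readable "ceph -s" output for
the "recovering"/"degraded" strings shown above, which is fragile but
fine for a one-off; the 10-second interval is an arbitrary choice.

  # Keep polling until no PGs are recovering or degraded any more.
  # Parsing "ceph -s" text output is a convenience here, not a stable
  # interface.
  while ceph -s | grep -Eq 'recovering|degraded'; do
      sleep 10
  done
  echo "all PGs active+clean; safe to stop the osd and reformat"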

If you want a lower-level double-check, you can peek inside the osd
data directory and see that the "current" subdirectory has no *_head
entries left, its du is low, etc.
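
For that check, something along these lines should do. The
/var/lib/ceph/osd/ceph-1 path is just the common default for osd.1 and
an assumption here; use whatever "osd data" points at in your ceph.conf.

  # Count leftover *_head collection dirs and show how much space is used.
  OSD_DATA=/var/lib/ceph/osd/ceph-1   # assumption: adjust to your "osd data" setting
  find "$OSD_DATA/current" -maxdepth 1 -name '*_head' | wc -l
  du -sh "$OSD_DATA/current"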

