slow backfilling / remapping

Hello list,

Today I was testing what happens when I remove an OSD.

I've executed:
ceph osd out 21
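
For reference, a minimal sketch of how the remapping can be followed from the CLI (standard status commands, not the exact session from this test):

ceph osd out 21   # mark the OSD out; CRUSH then remaps its PGs to other OSDs
ceph -w           # stream pgmap updates like the log lines quoted below
ceph -s           # one-shot cluster summary
ceph pg stat      # one-shot PG state counts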

Cluster Details:
- XFS filesystem for the OSDs
- Software version: latest "next" branch from today (2012-12-01)
- 6 nodes with 4 OSDs each, all SSDs

I have just 8 GB of data, so I thought removing an OSD would be VERY fast.

Here are parts of the logs; the whole process took 15 minutes.

2012-12-01 20:06:02.607191 mon.0 [INF] pgmap v59693: 7632 pgs: 7348 active+clean, 266 active+remapped+wait_backfill, 12 active+remapped+backfilling, 6 active+recovering; 7880 MB data, 18812 MB used, 4428 GB / 4446 GB avail; 180/4303 degraded (4.183%)

2012-12-01 20:16:11.284705 mon.0 [INF] pgmap v60000: 7632 pgs: 7562 active+clean, 65 active+remapped+wait_backfill, 5 active+remapped+backfilling; 7880 MB data, 18981 MB used, 4428 GB / 4446 GB avail; 44/4178 degraded (1.053%)

2012-12-01 20:22:35.829481 mon.0 [INF] pgmap v60182: 7632 pgs: 7628 active+clean, 4 active+remapped+backfilling; 7880 MB data, 19029 MB used, 4428 GB / 4446 GB avail; 3/4137 degraded (0.073%)

2012-12-01 20:22:37.052136 mon.0 [INF] pgmap v60183: 7632 pgs: 7632 active+clean; 7880 MB data, 19033 MB used, 4428 GB / 4446 GB avail
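
One likely factor is that backfill and recovery are throttled per OSD, so only a few PGs backfill in parallel no matter how little data there is (the logs above show 4-12 PGs backfilling at a time). A sketch of the relevant knobs, assuming the usual option names; defaults and injectargs syntax vary by release, so treat the values as examples only:

[osd]
osd max backfills = 10         # concurrent backfills per OSD
osd recovery max active = 10   # concurrent recovery ops per OSD

# or inject into the running OSDs, e.g.:
#   ceph tell osd.* injectargs '--osd-max-backfills 10 --osd-recovery-max-active 10'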

Greets,
Stefan

