Hi all,

I have a question about OSD degraded behavior. My environment is 3 mon, 2 mds, and 25 osd daemons, all running on physical machines.
Ceph version: 0.30
File system format: ext4

Q1. While testing OSD stability, I killed one osd daemon (osd.2) to observe how data migration behaves. As far as I know, the PG degraded ratio should drop back to zero and all PGs should become active+clean. However, Ceph did not migrate the data from the killed OSD to the other OSDs; the degraded ratio stayed at 4.xxx%. After that, I ran "cosd -i 2 -c /etc/ceph/ceph.conf" and then "ceph -w", and all PGs suddenly became active+clean.

Could anyone tell me why Ceph didn't migrate the data when I killed one osd daemon?

Thanks a lot~ ^__^

Best Regards,
Anny
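P.S. For clarity, here is roughly the sequence I ran. This is only a sketch: the kill command is my guess at how the daemon gets stopped, based on the pid file path set in my ceph.conf below.

    # stop osd.2 by killing its daemon on the OSD3 host
    # (pid file path assumed from "pid file = /var/run/ceph/$name.pid")
    kill $(cat /var/run/ceph/osd.2.pid)

    # watch the cluster; the degraded ratio stays around 4.xx%
    # instead of dropping back to zero
    ceph -w

    # restart the daemon; all PGs quickly return to active+clean
    cosd -i 2 -c /etc/ceph/ceph.conf
    ceph -w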
[global]
        pid file = /var/run/ceph/$name.pid
        ; debug ms = 1
        ; enable secure authentication
        ; auth supported = cephx

[mon]
        mon data = /mon/mon$id
        debug mon = 0
[mon.a]
        host = MON1
        mon addr = 192.168.10.1:6789
[mon.b]
        host = MON2
        mon addr = 192.168.10.2:6789
[mon.c]
        host = MON3
        mon addr = 192.168.10.3:6789

[mds]
        debug mds = 0
[mds.a]
        host = MDS1
[mds.b]
        host = MDS2

[osd]
        osd data = /mnt/ext4/osd$id
        osd journal = /data/osd$id/journal
        osd journal size = 512   ; journal size, in megabytes
        filestore btrfs snap = false
        filestore fsync flushes journal data = true
        debug osd = 0
[osd.0]
        host = OSD1
[osd.1]
        host = OSD2
[osd.2]
        host = OSD3
[osd.3]
        host = OSD4
[osd.4]
        host = OSD5
[osd.5]
        host = OSD6
[osd.6]
        host = OSD7
[osd.7]
        host = OSD8
[osd.8]
        host = OSD9
[osd.9]
        host = OSD10
[osd.10]
        host = OSD11
[osd.11]
        host = OSD12
[osd.12]
        host = OSD13
[osd.13]
        host = OSD14
[osd.14]
        host = OSD15
[osd.15]
        host = OSD16
[osd.16]
        host = OSD17
[osd.17]
        host = OSD18
[osd.18]
        host = OSD19
[osd.19]
        host = OSD20
[osd.20]
        host = OSD21
[osd.21]
        host = OSD22
[osd.22]
        host = OSD23
[osd.23]
        host = OSD24
[osd.24]
        host = OSD25