Hi all,
thank you for your support; the file system is no longer degraded. Instead I now have a negative degraded count :-)
2014-10-21 10:15:22.303139 mon.0 [INF] pgmap v43376478: 3328 pgs: 3281 active+clean, 47 active+remapped; 1609 GB data, 5022 GB used, 1155 GB / 6178 GB avail; 8034B/s rd, 3548KB/s wr, 161op/s; -1638/1329293 degraded (-0.123%)
but ceph still reports health HEALTH_WARN 47 pgs stuck unclean; recovery -1638/1329293 degraded (-0.123%)
I think this warning is reported because of the 47 active+remapped PGs. Any ideas how to fix that now?
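The PGs in question can be listed and inspected with the usual commands (<pgid> below is just a placeholder for one of the PG ids from the stuck list):

    ceph pg dump_stuck unclean    # list the stuck PGs with their state and up/acting OSD sets
    ceph pg <pgid> query          # show the detailed state of one remapped PG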
Kind Regards
Harald Roessler
I've been in a state where reweight-by-utilization was deadlocked (not the daemons, but the remap scheduling). After successive osd reweight commands, two OSDs each wanted to swap PGs with the other, but both were toofull. I ended up temporarily increasing mon_osd_nearfull_ratio to 0.87; that removed the impediment and the remapping finished cleanly. I changed the setting back once everything was done.
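Roughly, in command form (exact syntax depends on your Ceph release; on versions where the ratio lives in the pgmap, ceph pg set_nearfull_ratio 0.87 achieves the same thing as the injectargs call):

    ceph osd reweight-by-utilization                              # the reweighting that led to the toofull swap
    ceph tell mon.\* injectargs '--mon-osd-nearfull-ratio 0.87'   # temporarily raise the nearfull threshold
    # ... wait for the remapping/backfill to complete ...
    ceph tell mon.\* injectargs '--mon-osd-nearfull-ratio 0.85'   # restore the default afterwards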
Just be careful if you need to get close to mon_osd_full_ratio. Ceph compares these percentages with greater-than, not greater-than-or-equal. You really don't want any disk to go over mon_osd_full_ratio, because all external I/O will stop until you resolve that.
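If you do end up running that close to the limits, it is worth watching utilization while the backfill runs, e.g.:

    ceph df             # cluster-wide and per-pool usage
    ceph health detail  # lists any nearfull/full OSDs by id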
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com