On Tue, Apr 23, 2013 at 12:49 AM, Marco Aroldi <marco.aroldi@xxxxxxxxx> wrote:
> Hi,
> this morning I have this situation:
>
>    health HEALTH_WARN 1540 pgs backfill; 30 pgs backfill_toofull; 113 pgs
> backfilling; 43 pgs degraded; 38 pgs peering; 5 pgs recovering; 484 pgs
> recovery_wait; 38 pgs stuck inactive; 2180 pgs stuck unclean; recovery
> 2153828/21551430 degraded (9.994%); noup,nodown flag(s) set
>    monmap e1: 3 mons at
> {m1=192.168.21.11:6789/0,m2=192.168.21.12:6789/0,m3=192.168.21.13:6789/0},
> election epoch 50, quorum 0,1,2 m1,m2,m3
>    osdmap e34624: 62 osds: 62 up, 62 in
>    pgmap v1496556: 17280 pgs: 15098 active+clean, 1471
> active+remapped+wait_backfill, 9 active+degraded+wait_backfill, 30
> active+remapped+wait_backfill+backfill_toofull, 462 active+recovery_wait,
> 18 peering, 109 active+remapped+backfilling, 1 active+clean+scrubbing,
> 30 active+degraded+remapped+wait_backfill, 22 active+recovery_wait+remapped,
> 20 remapped+peering, 4 active+degraded+remapped+backfilling, 1
> active+clean+scrubbing+deep, 5 active+recovering; 50432 GB data, 76489 GB
> used, 36942 GB / 110 TB avail; 2153828/21551430 degraded (9.994%)
>    mdsmap e52: 1/1/1 up {0=m1=up:active}, 2 up:standby
>
> There is no data movement.
> The cephfs mount works, but many, many directories are inaccessible:
> the clients hang on a simple "ls".
>
> ceph -w keeps logging these lines: http://pastebin.com/AN01wgfV
>
> What can I do to get better?

As before, you need to get your RADOS cluster healthy. That's a fairly
unpleasant task once a cluster manages to get full; you basically need to
carefully order what data moves where, and when. Sometimes deleting extra
copies of known-healthy data can help. But it's not the sort of thing we
can do over the mailing list; I suggest you read the OSD operations docs
carefully and then make small, deliberate changes. If you can bring in
temporary extra capacity, that would help too.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
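
P.S. Purely for reference, the kinds of commands involved look something
like the below. The OSD id, weight, and ratio are made-up placeholders,
not recommendations for your cluster -- check the docs for your release
before running anything:

    # see exactly which PGs are stuck and why
    ceph health detail
    ceph pg dump_stuck unclean

    # check the CRUSH layout and per-OSD utilization
    ceph osd tree
    ceph pg dump | less        # per-OSD usage is in the osdstat section

    # nudge data away from an overfull OSD by lowering its weight slightly
    ceph osd reweight 12 0.9

    # temporarily raise the backfill-full threshold a little so the
    # backfill_toofull PGs can make progress (I believe the default is
    # 0.85; stay well below the 0.95 full ratio)
    ceph tell osd.* injectargs '--osd-backfill-full-ratio 0.90'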