Ceph is Full

Emergency Help!

One of my Ceph clusters is full, and ceph -s returns:
[root@controller ~]# ceph -s
    cluster 059f27e8-a23f-4587-9033-3e3679d03b31
     health HEALTH_ERR 20 pgs backfill_toofull; 20 pgs degraded; 20 pgs stuck unclean; recovery 7482/129081 objects degraded (5.796%); 2 full osd(s); 1 near full osd(s)
     osdmap e2743: 3 osds: 3 up, 3 in
            flags full
      pgmap v6564199: 320 pgs, 4 pools, 262 GB data, 43027 objects
            786 GB used, 47785 MB / 833 GB avail
            7482/129081 objects degraded (5.796%)
                 300 active+clean
                  20 active+degraded+remapped+backfill_toofull
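
For reference, I believe the per-OSD and per-pool usage can be checked with the commands below (ceph osd df needs Hammer or later; these are just the checks I would run, not output from my cluster):

[root@controller ~]# ceph health detail   # lists which OSDs are full / near full
[root@controller ~]# ceph osd df          # per-OSD utilization and weight
[root@controller ~]# ceph df detail       # per-pool usage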

Then I tried to remove a volume to free up some space, and got:
[root@controller ~]# rbd -p volumes rm volume-c55fd052-212d-4107-a2ac-cf53bfc049be
2015-04-29 05:31:31.719478 7f5fb82f7760  0 client.4781741.objecter  FULL, paused modify 0xe9a9e0 tid 6
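
From what I understand, the "FULL, paused modify" message means the OSDs have set the full flag and are pausing client writes, and since rbd rm is itself a modify operation the delete just hangs. One workaround I have seen suggested is to raise the full ratio temporarily with ceph pg set_full_ratio, delete some data, and then set it back (I have not tried this yet, and the 0.97 value is only an example):

[root@controller ~]# ceph pg set_full_ratio 0.97      # temporarily raise the 0.95 default
[root@controller ~]# rbd -p volumes rm volume-c55fd052-212d-4107-a2ac-cf53bfc049be
[root@controller ~]# ceph pg set_full_ratio 0.95      # restore the default once space is freed

Is that safe to do with only 3 OSDs this close to full, or is there a better way (for example, adding another OSD first)?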

Please give me some tips on how to recover from this. Thanks a lot.


Best Regards
-- Ray
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
