Re: ceph df shows 100% used

Your hosts are also not balanced within your default root.  Your failure domain is host, but one of your hosts has only 8.5TB of storage compared to 26.6TB and 29.6TB on the other two.  You only have size=2 (along with min_size=1, which is bad for a lot of reasons), so CRUSH should still be able to place data mostly between ds01 and ds04 and largely ignore fe02, since it doesn't have much space at all.  Anyway, `ceph osd df` will be good output to see what the distribution between OSDs looks like.

 -1 64.69997 root default                                                    
 -2 26.59998     host mia1-master-ds01                                       
  0  3.20000         osd.0                      up  1.00000          1.00000 
  1  3.20000         osd.1                      up  1.00000          1.00000 
  2  3.20000         osd.2                      up  1.00000          1.00000 
  3  3.20000         osd.3                      up  1.00000          1.00000 
  4  3.20000         osd.4                      up  1.00000          1.00000 
  5  3.20000         osd.5                      up  1.00000          1.00000 
  6  3.70000         osd.6                      up  1.00000          1.00000 
  7  3.70000         osd.7                      up  1.00000          1.00000 
 -4  8.50000     host mia1-master-fe02                                       
  8  5.50000         osd.8                      up  1.00000          1.00000 
 23  3.00000         osd.23                     up  1.00000          1.00000 
 -7 29.59999     host mia1-master-ds04                                       
  9  3.70000         osd.9                      up  1.00000          1.00000 
 10  3.70000         osd.10                     up  1.00000          1.00000 
 11  3.70000         osd.11                     up  1.00000          1.00000 
 12  3.70000         osd.12                     up  1.00000          1.00000 
 13  3.70000         osd.13                     up  1.00000          1.00000 
 14  3.70000         osd.14                     up  1.00000          1.00000 
 15  3.70000         osd.15                     up  1.00000          1.00000 
 16  3.70000         osd.16                     up  1.00000          1.00000 
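
Given the weights in the tree above, here is a minimal sketch of commands to confirm the imbalance and the replication settings (<poolname> is a placeholder):

  ceph osd df tree                        # per-OSD utilization grouped under the CRUSH hierarchy
  ceph osd pool get <poolname> size       # replica count (currently 2)
  ceph osd pool get <poolname> min_size   # currently 1, i.e. writes are acknowledged with a single copy on disk
  # the usual recommendation, capacity permitting, is size=3 with min_size=2:
  ceph osd pool set <poolname> size 3
  ceph osd pool set <poolname> min_size 2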


On Thu, Jan 18, 2018 at 5:05 PM David Turner <drakonstein@xxxxxxxxx> wrote:
`ceph osd df` is a good command for you to see what's going on.  Compare the osd numbers with `ceph osd tree`.
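
For completeness, a minimal sketch of that comparison; a single (near-)full OSD can be enough to make `ceph df` show a pool as effectively full, since a pool's MAX AVAIL is driven by the fullest OSD its rule can map to:

  ceph osd df         # per-OSD fill level: watch the %USE and VAR columns
  ceph osd tree       # CRUSH hierarchy: which host and root each OSD id sits under
  ceph health detail  # flags any OSDs that are nearfull or full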

On Thu, Jan 18, 2018 at 5:03 PM David Turner <drakonstein@xxxxxxxxx> wrote:
You can have space available in the cluster overall and still see full pools, because not all of your disks are in the same crush root.  You have multiple roots corresponding to multiple crush rulesets.  All pools using crush ruleset 0 are full because all of the osds in that crush rule are full.
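
A sketch of how to map pools to rulesets and rulesets to roots on Jewel (where the pool field is still called crush_ruleset; <poolname> is a placeholder):

  ceph osd crush rule ls                       # names of the CRUSH rules
  ceph osd crush rule dump                     # shows which root and failure domain each rule draws from
  ceph osd pool ls detail                      # lists crush_ruleset (and size/min_size) per pool
  ceph osd pool get <poolname> crush_ruleset   # per-pool query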

On Thu, Jan 18, 2018 at 3:34 PM Webert de Souza Lima <webert.boss@xxxxxxxxx> wrote:
Sorry, I forgot to mention: this is Ceph Jewel 10.2.10.


Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
Belo Horizonte - Brasil
IRC NICK - WebertRLZ
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
