About available space in Ceph BlueStore


 



Hi Team,


            We have 9 OSDs. When we run ceph osd df, it shows TOTAL SIZE 31 TiB, USE 13 TiB, AVAIL 18 TiB, %USE 42.49. On the client machine the mount shows Size 14T, Used 6.5T, Avail 6.6T, so around 3 TB appears to be missing. We are using a replication size of 2. Can anyone please help me identify the cause of this issue?


ceph osd tree
ID  CLASS WEIGHT   TYPE NAME              STATUS REWEIGHT PRI-AFF
-1       31.43600 root default                                  
-13        3.49309     host download-osd1                        
  0   ssd  3.49309         osd.0              up  1.00000 1.00000
-3        3.49309     host download-osd2                        
  1   ssd  3.49309         osd.1              up  1.00000 1.00000
-5        3.49309     host download-osd3                        
  2   ssd  3.49309         osd.2              up  1.00000 1.00000
-7        3.49309     host download-osd4                        
  3   ssd  3.49309         osd.3              up  1.00000 1.00000
-9        3.49309     host download-osd5                        
  4   ssd  3.49309         osd.4              up  1.00000 1.00000
-11        3.49309     host download-osd6                        
  5   ssd  3.49309         osd.5              up  1.00000 1.00000
-22        3.49249     host download-osd7                        
  6   hdd  3.49249         osd.6              up  1.00000 1.00000
-25        3.49249     host download-osd8                        
  7   hdd  3.49249         osd.7              up  1.00000 1.00000
-28        3.49249     host download-osd9                        
  8   hdd  3.49249         osd.8              up  1.00000 1.00000


# ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE    USE     AVAIL   %USE  VAR  PGS
0   ssd 3.49309  1.00000 3.5 TiB 1.3 TiB 2.2 TiB 37.58 0.88  68
1   ssd 3.49309  1.00000 3.5 TiB 1.4 TiB 2.1 TiB 38.93 0.92  81
2   ssd 3.49309  1.00000 3.5 TiB 1.5 TiB 2.0 TiB 42.10 0.99  78
3   ssd 3.49309  1.00000 3.5 TiB 1.9 TiB 1.6 TiB 53.06 1.25  90
4   ssd 3.49309  1.00000 3.5 TiB 1.4 TiB 2.1 TiB 39.81 0.94  78
5   ssd 3.49309  1.00000 3.5 TiB 1.5 TiB 2.0 TiB 42.06 0.99  81
6   hdd 3.49249  1.00000 3.5 TiB 1.5 TiB 2.0 TiB 43.80 1.03  81
7   hdd 3.49249  1.00000 3.5 TiB 1.4 TiB 2.0 TiB 41.39 0.97  77
8   hdd 3.49249  1.00000 3.5 TiB 1.5 TiB 2.0 TiB 43.70 1.03  78
                    TOTAL  31 TiB  13 TiB  18 TiB 42.49     



Client Mount 

192.168.x.x,192.168.x.x,192.168.x.x:6789:/ ceph       14T  6.5T  6.6T  50% /data/build/downloads
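For what it's worth, the replica-2 arithmetic on the figures quoted above can be sketched as follows. This is a rough sanity check, not Ceph's exact accounting; Ceph derives the client-visible MAX AVAIL from per-OSD usage, so the fullest OSD (here osd.3 at 53%, about 1.6 TiB free) caps what it reports:

```python
# Sanity check of the numbers quoted above (all values in TiB).
# Assumption: with replication size 2, every byte of client data
# consumes two bytes of raw space.

raw_total = 31.0   # ceph osd df TOTAL SIZE
raw_used = 13.0    # ceph osd df TOTAL USE

# Client-visible data implied by raw usage:
data_used = raw_used / 2                  # 6.5 -- matches the 6.5T on the client

# Naive usable headroom, ignoring imbalance between OSDs:
naive_avail = (raw_total - raw_used) / 2  # 9.0

# Ceph is more pessimistic: available space is scaled from the
# fullest OSD, so imbalance lowers it. With the fullest OSD (osd.3)
# having ~1.6 TiB free and 9 OSDs in the pool:
worst_osd_free = 1.6
pessimistic_avail = worst_osd_free * 9 / 2   # ~7.2, closer to the 6.6T shown

print(data_used, naive_avail, pessimistic_avail)
```

So the "missing" ~3 TB is plausibly the gap between the naive (raw/2) figure and the imbalance-aware estimate; rebalancing the fuller OSDs would narrow it.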



Regards
Prabu GJ

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
