meta values on nvme class OSDs

Hi,

Ceph version 12.2.13 (Luminous).

Ceph status shows HEALTH_WARN with nearfull osd(s) and pool(s). There is no data in the nvme-class pools, and the nvme OSDs should be completely empty, yet the META values on those OSDs are very large. Why do the META values on these OSDs show such huge disk usage?

$ ceph health detail

HEALTH_WARN 1 nearfull osd(s); 6 pool(s) nearfull
OSD_NEARFULL 1 nearfull osd(s)
   osd.49 is near full
POOL_NEARFULL 6 pool(s) nearfull
   pool 'VmImages_NVMe_FdChassis_Rp3' is nearfull
   pool '.rgw.root' is nearfull
   pool 'default.rgw.control' is nearfull
   pool 'default.rgw.meta' is nearfull
   pool 'default.rgw.log' is nearfull
   pool 'default.rgw.buckets.non-ec' is nearfull
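
As far as I understand, the nearfull warning is raised once an OSD crosses the cluster-wide nearfull_ratio (0.85 by default), and osd.49 is at 85.28 %USE in the osd df output further down, which matches. The configured ratios can be read from the OSD map with something like:

$ ceph osd dump | grep ratio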

--

$ ceph df --cluster ceph

GLOBAL:
   SIZE       AVAIL      RAW USED     %RAW USED
   501TiB     354TiB       147TiB         29.42 
POOLS:
    NAME                            ID   USED        %USED   MAX AVAIL   OBJECTS
    VmImages_SSD_FdHost_Rp3          6   25.2TiB     33.26     50.6TiB   6990169
    VmImages_NVMe_FdChassis_Rp3      8       19B         0      242GiB         2
    VmImages_HDD_FdHost_Rp3         15   21.3TiB     35.33     39.0TiB   5755928
    .rgw.root                       16   1.77KiB         0     16.3TiB        11
    default.rgw.control             17        0B         0     16.3TiB         8
    default.rgw.meta                18   17.6KiB         0     16.3TiB        80
    default.rgw.log                 19      162B         0     16.3TiB       210
    default.rgw.buckets.index       20        0B         0     50.6TiB       252
    default.rgw.buckets.data        21   95.8GiB      0.24     39.0TiB   1561373
    default.rgw.buckets.non-ec      22        0B         0     16.3TiB         4
    default.rgw.buckets.data.ssd    25        0B         0     50.6TiB         0
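
In case it helps narrow this down: the CRUSH rule that each of the flagged pools maps to can be checked with something like the following (pool name taken from the list above; the second command takes the rule name printed by the first):

$ ceph osd pool get VmImages_NVMe_FdChassis_Rp3 crush_rule
$ ceph osd crush rule dump <rule-name>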


$ ceph osd df tree | grep nvme
 ID  CLASS  WEIGHT   REWEIGHT  SIZE    USE     DATA     OMAP     META    AVAIL    %USE   VAR   PGS  TYPE NAME
  0  nvme   0.36378  1.00000   373GiB  313GiB  57.3MiB  25.8MiB  313GiB  59.0GiB  84.15  2.86   79  osd.0
  7  nvme   0.36378  1.00000   373GiB  307GiB  57.4MiB  22.9MiB  307GiB  65.8GiB  82.32  2.80   70  osd.7
 14  nvme   0.36378  1.00000   373GiB  313GiB  57.3MiB  25.9MiB  313GiB  59.3GiB  84.08  2.86   79  osd.14
 21  nvme   0.36378  1.00000   373GiB  312GiB  57.4MiB  24.2MiB  312GiB  60.8GiB  83.68  2.85   78  osd.21
 28  nvme   0.36378  1.00000   373GiB  316GiB  57.3MiB  29.8MiB  316GiB  56.4GiB  84.87  2.89   85  osd.28
 35  nvme   0.36378  1.00000   373GiB  312GiB  57.4MiB  26.5MiB  312GiB  60.5GiB  83.75  2.85   80  osd.35
 42  nvme   0.36378  1.00000   373GiB  311GiB  57.5MiB  25.5MiB  311GiB  61.9GiB  83.39  2.84   79  osd.42
 49  nvme   0.36378  1.00000   373GiB  318GiB  57.3MiB  27.7MiB  318GiB  54.8GiB  85.28  2.90   90  osd.49
 56  nvme   0.36378  1.00000   373GiB  303GiB  57.3MiB  20.2MiB  303GiB  69.5GiB  81.33  2.77   71  osd.56
 63  nvme   0.36378  1.00000   373GiB  309GiB  57.4MiB  28.5MiB  309GiB  63.2GiB  83.02  2.82   82  osd.63
 70  nvme   0.36378  1.00000   373GiB  312GiB  57.3MiB  26.2MiB  312GiB  60.5GiB  83.76  2.85   88  osd.70
 77  nvme   0.36378  1.00000   373GiB  299GiB  57.4MiB  22.7MiB  299GiB  73.6GiB  80.24  2.73   69  osd.77
 84  nvme   0.36378  1.00000   373GiB  312GiB  57.4MiB  31.7MiB  312GiB  60.2GiB  83.84  2.85   76  osd.84
 91  nvme   0.36378  1.00000   373GiB  301GiB  57.3MiB  21.1MiB  301GiB  71.5GiB  80.80  2.75   76  osd.91
 98  nvme   0.36378  1.00000   373GiB  285GiB  57.3MiB  15.7MiB  285GiB  87.6GiB  76.47  2.60   54  osd.98
105  nvme   0.36378  1.00000   373GiB  308GiB  57.3MiB  30.8MiB  308GiB  64.6GiB  82.67  2.81   86  osd.105
112  nvme   0.36378  1.00000   373GiB  264GiB  57.3MiB  26.0MiB  264GiB   108GiB  70.99  2.42   84  osd.112
119  nvme   0.36378  1.00000   373GiB  297GiB  57.3MiB  23.6MiB  297GiB  75.1GiB  79.85  2.72   74  osd.119
126  nvme   0.36378  1.00000   373GiB  292GiB  57.3MiB  22.8MiB  292GiB  80.2GiB  78.48  2.67   68  osd.126
133  nvme   0.36378  1.00000   373GiB  294GiB  57.3MiB  24.1MiB  294GiB  78.7GiB  78.87  2.68   70  osd.133
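
For what it's worth, my understanding is that with BlueStore the META column reflects BlueFS/RocksDB (internal metadata) usage rather than object data, so the same space should be visible in the bluefs perf counters and OSD metadata of the affected OSDs, e.g. (osd.49 just as an example; the daemon command has to run on the host carrying that OSD):

$ ceph daemon osd.49 perf dump | grep -E 'db_used_bytes|wal_used_bytes|slow_used_bytes'
$ ceph osd metadata 49 | grep -E 'bluefs|bluestore_bdev'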


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



