Re: Disk consume for CephFS

Yes, I know this option isn't safe, but in my current situation I can't increase it.

I probably have some files under 4K; however, when I cleaned up the zero-byte files I didn't see any change in the statistics. My current `ceph df detail` output is below:

# ceph df detail
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
ssd    8.0 TiB  1.6 TiB  6.3 TiB   6.4 TiB      80.32
TOTAL  8.0 TiB  1.6 TiB  6.3 TiB   6.4 TiB      80.32

--- POOLS ---
POOL                   ID  STORED   (DATA)   (OMAP)   OBJECTS  USED     (DATA)   (OMAP)   %USED  MAX AVAIL  QUOTA OBJECTS  QUOTA BYTES  DIRTY   USED COMPR  UNDER COMPR
device_health_metrics   1      0 B      0 B      0 B        0      0 B      0 B      0 B      0    126 GiB  N/A            N/A               0         0 B          0 B

station_data            9  1.6 TiB  1.6 TiB      0 B   27.90M  6.3 TiB  6.3 TiB      0 B  94.44    190 GiB  N/A            2.5 TiB      27.90M         0 B          0 B
station_data_metadata  10   15 GiB  178 MiB   15 GiB   82.11k   30 GiB  356 MiB   29 GiB   7.24    190 GiB  N/A            8 GiB        82.11k         0 B          


As you can see, the STORED field is 1.6 TiB, but the USED column shows 6.3 TiB.
Is it possible to determine why the files consume so much space? Am I wrong in assuming that size=2 means USED = STORED * 2, so USED should be 3.2 TiB?
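A quick back-of-the-envelope from the `ceph df detail` numbers above (the replica count of 2 comes from your size=2 assumption, so I reuse it here) shows that each object occupies roughly twice as much space per replica as the data it stores. That kind of per-object amplification usually points at allocation-unit overhead (BlueStore's min_alloc_size rounding up every small object) rather than replication itself:

```python
# Back-of-the-envelope from the `ceph df detail` output above.
# Assumption: pool size=2, i.e. USED counts both replicas.
TiB = 1024 ** 4
stored = 1.6 * TiB          # STORED column of station_data
used = 6.3 * TiB            # USED column of station_data
objects = 27_900_000        # OBJECTS column
replicas = 2

avg_stored = stored / objects           # data actually stored per object
avg_alloc = used / replicas / objects   # space consumed per object, per replica
amplification = avg_alloc / avg_stored

print(f"avg stored per object:    {avg_stored / 1024:.0f} KiB")
print(f"avg allocated per object: {avg_alloc / 1024:.0f} KiB (per replica)")
print(f"space amplification:      {amplification:.1f}x")
```

With these numbers the average object stores about 62 KiB but consumes about 121 KiB per replica, i.e. roughly a 2x amplification on top of replication. You can check the allocation granularity your OSDs were created with via `ceph daemon osd.<id> config get bluestore_min_alloc_size_ssd` (older releases defaulted to larger values than current ones, and the value is baked in at OSD creation time).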
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
