Dear All,
We have just built a new cluster running Quincy 17.2.1.
After copying ~25 TB to it from a Mimic cluster, we see 152 TB used,
which is a ~6x disparity.
Is this just a Ceph accounting error, or is space actually being wasted?
[root@wilma-s1 ~]# du -sh /cephfs2/users
24T /cephfs2/users
[root@wilma-s1 ~]# ls -lhd /cephfs2/users
drwxr-xr-x 240 root root 24T Jul 19 12:09 /cephfs2/users
[root@wilma-s1 ~]# df -h /cephfs2/users
Filesystem Size Used Avail Use% Mounted on
(SNIP):/ 7.1P 152T 6.9P 3% /cephfs2
[root@wilma-s1 ~]# ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 7.0 PiB 6.9 PiB 151 TiB 151 TiB 2.10
ssd 2.7 TiB 2.7 TiB 11 GiB 11 GiB 0.38
TOTAL 7.0 PiB 6.9 PiB 151 TiB 151 TiB 2.10
--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
.mgr 21 32 90 MiB 24 270 MiB 0 2.2 PiB
mds_ssd 22 32 1.0 GiB 73.69k 3.0 GiB 0.11 881 GiB
ec82pool 23 4096 20 TiB 6.28M 25 TiB 0.38 5.2 PiB
primary_fs_data 24 32 0 B 1.45M 0 B 0 881 GiB
CephFS uses an 8+2 erasure-coded data pool (ec82pool: HDD OSDs with
NVMe DB/WAL) and a 3x replicated default data pool (primary_fs_data,
on NVMe).
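In case my expectation is off, here is the arithmetic I am working
from: 8+2 EC writes 10 chunks for every 8 data chunks, i.e. a 1.25x
overhead, so I would expect roughly

  20 TiB STORED x 10/8 = 25 TiB raw

which matches the USED column for ec82pool above; yet RAW USED and
df both report ~151 TiB, about 6x that.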
bluestore_min_alloc_size_hdd is 4096
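As I understand it, min_alloc_size is baked into each OSD at mkfs
time, so the value the OSDs were created with is what matters, not
the current config. These are the checks I know of (using osd.0 as
an example; I am not certain the metadata field exists in 17.2.1):

# current configured value:
ceph config get osd bluestore_min_alloc_size_hdd
# mkfs-time value, if this release exposes it in the OSD metadata:
ceph osd metadata 0 | grep min_alloc_size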
Compression is enabled on the EC pool with:
ceph osd pool set ec82pool compression_algorithm lz4
ceph osd pool set ec82pool compression_mode aggressive
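If it helps, I believe the pool settings and the compression
counters can be confirmed with the following (the COMPR columns
should appear in 'ceph df detail'):

ceph osd pool get ec82pool compression_algorithm
ceph osd pool get ec82pool compression_mode
ceph df detail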
Many thanks for any help,
Jake
--
Dr Jake Grimmett
Head Of Scientific Computing
MRC Laboratory of Molecular Biology
Francis Crick Avenue,
Cambridge CB2 0QH, UK.