CephFS ghost usage/inodes

Hi,

When we tried putting some load on our test cephfs setup by restoring a
backup in Artifactory, we eventually ran out of space (around 95% used
in `df`, i.e. roughly 3.5TB), which caused Artifactory to abort the
restore and clean up after itself. However, while a simple `find` no
longer shows the files, `df` still claims that we have around 2.1TB of
data on the cephfs, and `df -i` shows 2.4M used inodes. Running `du -sh`
on a top-level mountpoint reports 31G, which matches the data that is
actually still there and is expected to be there.

Consequently, we also get the following warning:

> MANY_OBJECTS_PER_PG 1 pools have many more objects per pg than average
>     pool cephfs_data objects per pg (38711) is more than 231.802 times cluster average (167)
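
For reference, that ratio would imply that cephfs_data only has around
64 PGs:

> 2,477,540 objects / 38,711 objects per PG ≈ 64 PGs

which, assuming I read the warning correctly,
`ceph osd pool get cephfs_data pg_num` should confirm.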

We are running ceph 14.2.5.

We have snapshots enabled on cephfs, but there are currently no active
snapshots listed by `ceph daemon mds.$hostname dump snaps --server` (see
below). I can't say for sure if we created snapshots during the backup
restore.

> {
>     "last_snap": 39,
>     "last_created": 38,
>     "last_destroyed": 39,
>     "pending_noop": [],
>     "snaps": [],
>     "need_to_purge": {},
>     "pending_update": [],
>     "pending_destroy": []
> }

We only have a single CephFS.

We use the `ceph.dir.layout.pool_namespace` xattr to put our various
directory trees on the cephfs into separate RADOS namespaces.
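
For reference, we set and check the namespace roughly like this (the
namespace name and path below are just placeholders):

> setfattr -n ceph.dir.layout.pool_namespace -v some_namespace /mnt/cephfs/some_tree
> getfattr -n ceph.dir.layout.pool_namespace /mnt/cephfs/some_tree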

`ceph df` shows:

> POOL         ID STORED   OBJECTS   USED    %USED     MAX AVAIL
> cephfs_data  6  2.1 TiB  2.48M     2.1 TiB 24.97       3.1 TiB

`ceph daemon mds.$hostname perf dump | grep stray` shows:

> "num_strays": 0,
> "num_strays_delayed": 0,
> "num_strays_enqueuing": 0,
> "strays_created": 5097138,
> "strays_enqueued": 5097138,
> "strays_reintegrated": 0,
> "strays_migrated": 0,

`rados -p cephfs_data df` shows:

> POOL_NAME      USED OBJECTS CLONES  COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED   RD_OPS      RD   WR_OPS     WR USED COMPR UNDER COMPR
> cephfs_data 2.1 TiB 2477540      0 4955080                  0       0        0 10699626 6.9 TiB 86911076 35 TiB        0 B         0 B
> 
> total_objects    29718
> total_used       329 GiB
> total_avail      7.5 TiB
> total_space      7.8 TiB

If I add up the used and the free space shown by `df`, we would exceed
our cluster size. Our test cluster currently has 7.8TB of total space
with a replication size of 2 for all pools. With 2.1TB "used" on the
cephfs according to `df` plus 3.1TB shown as "free", I get 5.2TB of
total size. Accounting for replication, that would mean more than 10TB
of raw data. Clearly that can't fit on a cluster with only 7.8TB of
capacity.
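
In numbers, assuming I'm adding this up correctly:

> 2.1 TiB (df "used") + 3.1 TiB (MAX AVAIL) = 5.2 TiB
> 5.2 TiB * 2 (replication) = 10.4 TiB raw
> 10.4 TiB raw > 7.8 TiB total_space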

Do you have any ideas why we see so many objects and so much reported
usage? Is there any way to fix this without recreating the cephfs?
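
If it helps with debugging, I could also list a few of the leftover
objects (and their namespaces) directly, e.g. with something like:

> rados -p cephfs_data ls --all | head

though I'm not sure what I should be looking for in that output.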

Florian

-- 
Florian Pritz

Research Industrial Systems Engineering (RISE) Forschungs-,
Entwicklungs- und Großprojektberatung GmbH
Concorde Business Park F
2320 Schwechat
Austria

E-Mail: florian.pritz@xxxxxxxxxxxxxx
Web: www.rise-world.com

Firmenbuch: FN 280353i
Landesgericht Korneuburg
UID: ATU62886416

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
