Upgrading the cluster to Ceph version 0.94.5 seems to have resolved the problem. TEMP data is now only a small fraction of the overall usage.

On 09.11.2015 14:18, Jan Siersch wrote:
> Hi,
>
> I am currently operating a multi-node Ceph cluster with the "Hammer"
> release under CentOS 7, with writeback cache tiering on SSDs as described
> here:
>
> http://docs.ceph.com/docs/master/rados/operations/cache-tiering/
> http://docs.ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds
>
> Over the last month the global capacity utilization as reported by "ceph
> df" has increased to over 14%, while the utilization of all pools only
> sums up to <1%, and some of the OSDs are already at over 50% capacity.
> While looking at one of these OSDs, I noticed that it is being filled up
> by data in "_TEMP" folders:
>
> # pwd
> /var/lib/ceph/osd/ceph-23/current
> # du -sh * | sort -rh | head
> 195G    2.71_TEMP
> 90G     2.3c_TEMP
> 79G     2.2b_TEMP
> 57G     2.9_TEMP
> 46G     2.49_TEMP
> 19G     2.2d_TEMP
> 12G     2.75_TEMP
> 1,8G    2.78_TEMP
> 1,4G    3.58_head
> 1,4G    3.43_head
>
> Does anyone know what causes this problem and how it can be fixed? Is
> this maybe related to cache tiering? I am reluctant to just delete the
> _TEMP data, because I don't know if it is still needed. Judging from the
> directory structure of one of these _TEMP folders, it certainly looks
> like something is broken:
>
> # tree 2.71_TEMP/ | head
> 2.71_TEMP/
> └── DIR_0
>     └── DIR_0
>         └── DIR_0
>             └── DIR_0
>                 └── DIR_0
>                     └── DIR_0
>                         └── DIR_0
>                             └── DIR_0
>                                 ├── temp\\u2.71\\u0\\u2583878\\u100003__head_00000000__none
>
> Best Regards
> Jan
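
For anyone hitting the same symptom: before (or after) upgrading, it can help to quantify exactly how much space the _TEMP directories consume on an OSD, rather than eyeballing `du` output. Below is a minimal sketch that walks an OSD's FileStore "current" directory and sums the bytes held under all `*_TEMP` PG directories. The path follows the layout shown above; the function name and script structure are my own, not part of Ceph.

```python
#!/usr/bin/env python3
# Sketch: sum the on-disk size of files under every *_TEMP PG directory
# inside a FileStore OSD's "current" directory, to quantify how much
# space temporary data consumes. Adapt the path for your deployment.
import os

def temp_usage(current_dir):
    """Return total bytes used by files under any *_TEMP directory."""
    total = 0
    for entry in os.listdir(current_dir):
        if not entry.endswith("_TEMP"):
            continue
        for root, _dirs, files in os.walk(os.path.join(current_dir, entry)):
            for name in files:
                try:
                    total += os.path.getsize(os.path.join(root, name))
                except OSError:
                    pass  # a file may vanish while we scan a live OSD
    return total

if __name__ == "__main__":
    # Example path taken from the output quoted above.
    used = temp_usage("/var/lib/ceph/osd/ceph-23/current")
    print("bytes in *_TEMP: %d" % used)
```

Running this per OSD (e.g. over `/var/lib/ceph/osd/ceph-*/current`) gives a cluster-wide picture of how much of the "missing" capacity is temporary data, which is useful for confirming that an upgrade actually reclaimed the space.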