Hi all,
just a quick writeup. Over the last two days I was able to evict a lot
of those 0-byte files by setting "target_max_objects" to 2 million.
After we hit that limit I set it to 10 million for now. So a
cache_target_dirty_ratio of 0.6 means flushing of dirty objects should
start at around 6 million objects. cache_target_full_ratio is set to
0.9, so overall no more than about 9 million objects should remain in
the cache. Remember we started at 109 million objects total and 24
million dirty.
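For reference, the relevant cache-pool settings look roughly like this
("cache-pool" is just a placeholder for the actual cache pool name):

    # cap the cache tier at 10 million objects (initially I used 2 million)
    ceph osd pool set cache-pool target_max_objects 10000000
    # flushing of dirty objects starts at 60% of target_max_objects (~6 million)
    ceph osd pool set cache-pool cache_target_dirty_ratio 0.6
    # eviction starts at 90% of target_max_objects (~9 million)
    ceph osd pool set cache-pool cache_target_full_ratio 0.9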
Now I still have quite a few 0-byte files left in our cache pool (see
the listing at the end), but we'll see how they develop over the next
few days. Having set the limit so low, we evicted nearly the whole
cache: of 9 TB of total storage space, only 800 GB remained. Luckily
the discrepancy from my original question is now down to around 50 GB
(quite a saving from the 860 GB we started with ;) ).
ceph df detail now lists 2.3 million objects and 1.7 million dirty.
Thanks a lot, Christian and Burkhard, for all the help and
clarifications; your information has been preserved in a blog post (see
my other post to this mailing list).
Greetings
-Sascha-
File count per OSD (total and 0-byte files):
OSD-20: total 315998, 0-bytes 301835
OSD-21: total 224645, 0-bytes 212026
OSD-22: total 208189, 0-bytes 196139
OSD-23: total 357256, 0-bytes 342350
OSD-24: total 232800, 0-bytes 220466
OSD-25: total 235298, 0-bytes 222985
OSD-26: total 236957, 0-bytes 224345
OSD-27: total 265974, 0-bytes 252538
OSD-28: total 253577, 0-bytes 241265
OSD-29: total 255774, 0-bytes 242891
OSD-30: total 209818, 0-bytes 198581
OSD-31: total 276357, 0-bytes 262294
OSD-32: total 239600, 0-bytes 226639
OSD-33: total 245248, 0-bytes 232712
OSD-34: total 267156, 0-bytes 253815
OSD-35: total 250241, 0-bytes 237709
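In case anyone wants to produce a similar listing for their own cache
OSDs, a minimal sketch, assuming FileStore OSDs with the usual data
path /var/lib/ceph/osd/ceph-<id>/current (path and OSD ID range are
assumptions, adjust for your setup):

    # count total and 0-byte files per cache-tier OSD (FileStore layout assumed)
    for id in $(seq 20 35); do
        dir=/var/lib/ceph/osd/ceph-$id/current
        echo "OSD-$id total:   $(find "$dir" -type f | wc -l)"
        echo "OSD-$id 0-bytes: $(find "$dir" -type f -size 0 | wc -l)"
    done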