http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-October/013646.html

On Tue, Dec 26, 2017 at 6:07 AM, Cary <dynamic.cary@xxxxxxxxx> wrote:
> Are you using hardlinks in cephfs?
>
>
> On Tue, Dec 26, 2017 at 3:42 AM, 周 威 <choury@xxxxxx> wrote:
>> The output of ceph osd df:
>>
>> ID WEIGHT  REWEIGHT SIZE  USE   AVAIL %USE  VAR  PGS
>>  0 1.62650 1.00000  1665G 1279G  386G 76.82 1.05 343
>>  1 1.62650 1.00000  1665G 1148G  516G 68.97 0.94 336
>>  2 1.62650 1.00000  1665G 1253G  411G 75.27 1.03 325
>>  3 1.62650 1.00000  1665G 1192G  472G 71.60 0.98 325
>>  4 1.62650 1.00000  1665G 1205G  460G 72.35 0.99 341
>>  5 1.62650 1.00000  1665G 1381G  283G 82.95 1.13 364
>>  6 1.62650 1.00000  1665G 1069G  595G 64.22 0.88 322
>>  7 1.62650 1.00000  1665G 1222G  443G 73.38 1.00 337
>>  8 1.62650 1.00000  1665G 1120G  544G 67.29 0.92 312
>>  9 1.62650 1.00000  1665G 1166G  498G 70.04 0.96 336
>> 10 1.62650 1.00000  1665G 1254G  411G 75.31 1.03 348
>> 11 1.62650 1.00000  1665G 1352G  313G 81.19 1.11 341
>> 12 1.62650 1.00000  1665G 1174G  490G 70.52 0.96 328
>> 13 1.62650 1.00000  1665G 1281G  383G 76.95 1.05 345
>> 14 1.62650 1.00000  1665G 1147G  518G 68.88 0.94 339
>> 15 1.62650 1.00000  1665G 1236G  429G 74.24 1.01 334
>> 20 1.62650 1.00000  1665G 1166G  499G 70.03 0.96 325
>> 21 1.62650 1.00000  1665G 1371G  293G 82.35 1.13 377
>> 22 1.62650 1.00000  1665G 1110G  555G 66.67 0.91 341
>> 23 1.62650 1.00000  1665G 1221G  443G 73.36 1.00 327
>> 16 1.62650 1.00000  1665G 1354G  310G 81.34 1.11 352
>> 17 1.62650 1.00000  1665G 1250G  415G 75.06 1.03 341
>> 18 1.62650 1.00000  1665G 1179G  486G 70.80 0.97 316
>> 19 1.62650 1.00000  1665G 1236G  428G 74.26 1.01 333
>> 24 1.62650 1.00000  1665G 1146G  518G 68.86 0.94 325
>> 25 1.62650 1.00000  1665G 1033G  632G 62.02 0.85 309
>> 26 1.62650 1.00000  1665G 1234G  431G 74.11 1.01 334
>> 27 1.62650 1.00000  1665G 1342G  322G 80.62 1.10 352
>>               TOTAL 46635G 34135G 12500G 73.20
>> MIN/MAX VAR: 0.85/1.13  STDDEV: 5.28
>>
>> From: Cary [mailto:dynamic.cary@xxxxxxxxx]
>> Sent: December 26, 2017 11:40
>> To: 周 威 <choury@xxxxxx>
>> Cc: ceph-users@xxxxxxxxxxxxxx
>> Subject: Re: Can't delete file in cephfs with "No space left on device"
>>
>> Could you post the output of "ceph osd df"?
>>
>> On Dec 25, 2017, at 19:46, 周 威 <choury@xxxxxx> wrote:
>>
>> Hi all:
>>
>> Ceph version:
>> ceph version 10.2.9 (2ee413f77150c0f375ff6f10edd6c8f9c7d060d0)
>>
>> Ceph df:
>>
>> GLOBAL:
>>     SIZE    AVAIL   RAW USED   %RAW USED
>>     46635G  12500G  34135G     73.19
>>
>> rm ddddd
>> rm: cannot remove `ddddd': No space left on device
>>
>> and mds_cache:
>>
>> {
>>     "mds_cache": {
>>         "num_strays": 999713,
>>         "num_strays_purging": 0,
>>         "num_strays_delayed": 0,
>>         "num_purge_ops": 0,
>>         "strays_created": 999723,
>>         "strays_purged": 10,
>>         "strays_reintegrated": 0,
>>         "strays_migrated": 0,
>>         "num_recovering_processing": 0,
>>         "num_recovering_enqueued": 0,
>>         "num_recovering_prioritized": 0,
>>         "recovery_started": 107,
>>         "recovery_completed": 107
>>     }
>> }
>>
>> It seems the stray count is stuck. What should I do?
>>
>> Thanks all.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
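
For anyone hitting the same symptom, a minimal sketch of how the data quoted in this thread can be gathered, assuming the MDS daemon is named mds.a and the filesystem is mounted at /mnt/cephfs (both names are placeholders; adjust to your cluster):

    # Dump the MDS perf counters on the host running the MDS; the
    # "mds_cache" section contains the num_strays / strays_purged
    # values quoted above.
    ceph daemon mds.a perf dump | python -m json.tool | grep -A 14 '"mds_cache"'

    # Cary's hardlink question: an inode whose primary link was removed
    # stays in a stray directory while other hard links still reference
    # it, so listing files with more than one link can help answer it.
    find /mnt/cephfs -type f -links +1

    # In Jewel, stray directories are subject to the directory-fragment
    # size limit; inspect it (raising it is a separate decision).
    ceph daemon mds.a config get mds_bal_fragment_size_max

These commands are only a starting point for diagnosis; they do not by themselves free the stuck strays.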