The client they are using is mainly ceph-fuse (10.2.9 and 0.94.9).

-----Original Message-----
From: Yan, Zheng [mailto:ukernel@xxxxxxxxx]
Sent: December 27, 2017 10:32
To: 周 威 <choury@xxxxxx>
Cc: Cary <dynamic.cary@xxxxxxxxx>; ceph-users@xxxxxxxxxxxxxx
Subject: Re: Re: Re: Can't delete file in cephfs with "No space left on device"

On Tue, Dec 26, 2017 at 2:28 PM, 周 威 <choury@xxxxxx> wrote:
> We don't use hardlinks.
> I reduced mds_cache_size from 10000000 to 2000000.
> After that, num_strays dropped to about 100k and the cluster is back to
> normal. I think there is a bug in there somewhere.
> Anyway, thanks for your reply!
>

This seems like a client bug. Which client do you use (kclient or fuse), and which version?

> -----Original Message-----
> From: Cary [mailto:dynamic.cary@xxxxxxxxx]
> Sent: December 26, 2017 14:08
> To: 周 威 <choury@xxxxxx>
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re: Re: Can't delete file in cephfs with "No space left on device"
>
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-October/013646.html
>
> On Tue, Dec 26, 2017 at 6:07 AM, Cary <dynamic.cary@xxxxxxxxx> wrote:
>> Are you using hardlinks in cephfs?
>>
>> On Tue, Dec 26, 2017 at 3:42 AM, 周 威 <choury@xxxxxx> wrote:
>>> The output of ceph osd df:
>>>
>>> ID WEIGHT  REWEIGHT SIZE  USE   AVAIL %USE  VAR  PGS
>>>  0 1.62650  1.00000 1665G 1279G  386G 76.82 1.05 343
>>>  1 1.62650  1.00000 1665G 1148G  516G 68.97 0.94 336
>>>  2 1.62650  1.00000 1665G 1253G  411G 75.27 1.03 325
>>>  3 1.62650  1.00000 1665G 1192G  472G 71.60 0.98 325
>>>  4 1.62650  1.00000 1665G 1205G  460G 72.35 0.99 341
>>>  5 1.62650  1.00000 1665G 1381G  283G 82.95 1.13 364
>>>  6 1.62650  1.00000 1665G 1069G  595G 64.22 0.88 322
>>>  7 1.62650  1.00000 1665G 1222G  443G 73.38 1.00 337
>>>  8 1.62650  1.00000 1665G 1120G  544G 67.29 0.92 312
>>>  9 1.62650  1.00000 1665G 1166G  498G 70.04 0.96 336
>>> 10 1.62650  1.00000 1665G 1254G  411G 75.31 1.03 348
>>> 11 1.62650  1.00000 1665G 1352G  313G 81.19 1.11 341
>>> 12 1.62650  1.00000 1665G 1174G  490G 70.52 0.96 328
>>> 13 1.62650  1.00000 1665G 1281G  383G 76.95 1.05 345
>>> 14 1.62650  1.00000 1665G 1147G  518G 68.88 0.94 339
>>> 15 1.62650  1.00000 1665G 1236G  429G 74.24 1.01 334
>>> 20 1.62650  1.00000 1665G 1166G  499G 70.03 0.96 325
>>> 21 1.62650  1.00000 1665G 1371G  293G 82.35 1.13 377
>>> 22 1.62650  1.00000 1665G 1110G  555G 66.67 0.91 341
>>> 23 1.62650  1.00000 1665G 1221G  443G 73.36 1.00 327
>>> 16 1.62650  1.00000 1665G 1354G  310G 81.34 1.11 352
>>> 17 1.62650  1.00000 1665G 1250G  415G 75.06 1.03 341
>>> 18 1.62650  1.00000 1665G 1179G  486G 70.80 0.97 316
>>> 19 1.62650  1.00000 1665G 1236G  428G 74.26 1.01 333
>>> 24 1.62650  1.00000 1665G 1146G  518G 68.86 0.94 325
>>> 25 1.62650  1.00000 1665G 1033G  632G 62.02 0.85 309
>>> 26 1.62650  1.00000 1665G 1234G  431G 74.11 1.01 334
>>> 27 1.62650  1.00000 1665G 1342G  322G 80.62 1.10 352
>>>               TOTAL 46635G 34135G 12500G 73.20
>>> MIN/MAX VAR: 0.85/1.13  STDDEV: 5.28
>>>
>>> From: Cary [mailto:dynamic.cary@xxxxxxxxx]
>>> Sent: December 26, 2017 11:40
>>> To: 周 威 <choury@xxxxxx>
>>> Cc: ceph-users@xxxxxxxxxxxxxx
>>> Subject: Re: Can't delete file in cephfs with "No space left on device"
>>>
>>> Could you post the output of "ceph osd df"?
>>>
>>> On Dec 25, 2017, at 19:46, 周 威 <choury@xxxxxx> wrote:
>>>
>>> Hi all:
>>>
>>> Ceph version:
>>> ceph version 10.2.9 (2ee413f77150c0f375ff6f10edd6c8f9c7d060d0)
>>>
>>> Ceph df:
>>> GLOBAL:
>>>     SIZE       AVAIL      RAW USED     %RAW USED
>>>     46635G     12500G     34135G       73.19
>>>
>>> rm ddddd
>>> rm: cannot remove `ddddd': No space left on device
>>>
>>> And the mds_cache counters:
>>> {
>>>     "mds_cache": {
>>>         "num_strays": 999713,
>>>         "num_strays_purging": 0,
>>>         "num_strays_delayed": 0,
>>>         "num_purge_ops": 0,
>>>         "strays_created": 999723,
>>>         "strays_purged": 10,
>>>         "strays_reintegrated": 0,
>>>         "strays_migrated": 0,
>>>         "num_recovering_processing": 0,
>>>         "num_recovering_enqueued": 0,
>>>         "num_recovering_prioritized": 0,
>>>         "recovery_started": 107,
>>>         "recovery_completed": 107
>>>     }
>>> }
>>>
>>> It seems the stray count is stuck. What should I do?
>>> Thanks all.
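
For anyone chasing the same symptom: the counters quoted above come from the MDS perf counters and can be read from the MDS admin socket. A minimal sketch, assuming you run it on the MDS host and the daemon is named mds.a (a placeholder; substitute your own MDS name):

    # dump the MDS perf counters and look at the "mds_cache" section
    # (num_strays, strays_created, strays_purged, ...)
    ceph daemon mds.a perf dump

    # some releases also accept a section filter, which may work here:
    ceph daemon mds.a perf dump mds_cache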
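
And the mds_cache_size change described at the top of the thread can presumably be applied to a running Jewel MDS along these lines; a sketch only, again assuming a single active MDS named mds.a:

    # inject the new inode-count limit into the running MDS
    ceph tell mds.a injectargs '--mds_cache_size 2000000'

    # or set it via the admin socket on the MDS host
    ceph daemon mds.a config set mds_cache_size 2000000

    # to make it persistent, add it to the [mds] section of ceph.conf:
    #   mds_cache_size = 2000000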