Re: ceph directory not accessible

Hi Yan,

Sorry for the late reply; it is the kernel client, with Ceph version 10.2.3. It is not reproducible on other mounts.

Regards
Prabu GJ


---- On Thu, 14 Dec 2017 12:18:52 +0530 Yan, Zheng <ukernel@xxxxxxxxx> wrote ----

On Thu, Dec 14, 2017 at 2:14 PM, gjprabu <gjprabu@xxxxxxxxxxxx> wrote:
>
>
> Hi Team,
>
> Today we found that one client's data was not accessible; it is shown as
> "d????????? ? ? ? ? ? backups".
> Has anybody faced the same issue, and is there any solution?
>
>
> [root@ /]# cd /data/build/repository/rep/lab
> [root@integ-hm11 gitlab]# ls -althr
> ls: cannot access backups: Device or resource busy

Looks like ls got an -EBUSY error.
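
One way to double-check the exact errno, assuming the same backups entry
shown in the listing (adjust the path to the affected mount), is to stat
it directly or trace what ls does:

    # stat the problem entry; the error message carries the failing errno
    stat backups
    # or trace ls and look for the EBUSY return value
    strace ls -l backups 2>&1 | grep EBUSY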

Kernel client or ceph-fuse, and which version? Is this reproducible on
other mounts?
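
For reference, a rough way to confirm the client type and versions on a
node (kernel cephfs mounts show fstype "ceph" in /proc/mounts, while
ceph-fuse mounts show "fuse.ceph-fuse"):

    # how the filesystem is mounted (kernel client vs ceph-fuse)
    grep ceph /proc/mounts
    # userspace ceph version installed on the client
    ceph --version
    # kernel version, which is what matters for the kernel client
    uname -r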


> total 185G
> d????????? ? ? ? ? ? backups
> drwx------ 1 build build 1.8G Nov 22 2016 uploads.1480129558
> drwx------ 1 build build 0 Nov 24 2016 uploads.1480012217
> -rw-r--r-- 1 build build 128 Nov 26 2016 .secret
>
>
>
> ceph -w
> cluster 225f1d6f-ed13-41ea-8b7a-f048c652f7bb
> health HEALTH_WARN
> mds0: Client integ-cm1 failing to respond to cache pressure
> mds0: Client cmsuite-bkp failing to respond to cache pressure
> mds0: Client integ-git failing to respond to cache pressure
> mds0: Client integ-cm-new failing to respond to cache pressure
> mds0: Client integ-git1 failing to respond to cache pressure
> monmap e1: 3 mons at
> {integ-hm10=192.168.112.231:6789/0,integ-hm6=192.168.112.193:6789/0,integ-hm7=192.168.112.194:6789/0}
> election epoch 6, quorum 0,1,2 integ-hm6,integ-hm7,integ-hm10
> fsmap e167257: 1/1/1 up {0=integ-hm7=up:active}, 1 up:standby
> osdmap e23: 3 osds: 3 up, 3 in
> flags sortbitwise
> pgmap v15340893: 364 pgs, 3 pools, 1314 GB data, 11611 kobjects
> 2787 GB used, 2575 GB / 5362 GB avail
> 363 active+clean
> 1 active+clean+scrubbing+deep
> client io 13404 kB/s rd, 284 kB/s wr, 280 op/s rd, 35 op/s wr
>
>
> ceph osd df
> ID WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE   VAR   PGS
>  0 1.74570 1.00000  1787G   828G   959G  46.32  0.89  224
>  1 1.74570 1.00000  1787G   952G   834G  53.30  1.03  243
>  2 1.74570 1.00000  1787G  1006G   781G  56.30  1.08  261
>               TOTAL 5362G  2787G  2575G  51.98
> MIN/MAX VAR: 0.89/1.08  STDDEV: 4.18
>
>
>
> Regards
> Prabu GJ
>
>
>


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
