Re: cephfs cannot write

Hi,

I'm not sure the ceph-volume error is related to the "operation not permitted" error. Have you checked the auth settings for your cephfs client? Or did you mount it as the admin user?
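For reference, if the mount uses a non-admin key, "Operation not permitted" on writes while the file itself still gets created usually points at OSD caps that don't grant rw on the data pool (metadata goes through the MDS, the data write goes to the OSDs). A quick way to check, where client.testfs is just a placeholder for whatever name= you passed to the mount:

ceph auth get client.testfs

The osd cap should allow rw on the data pool; a key generated with "ceph fs authorize" should look roughly like 'allow rw tag cephfs data=cephfs'. If the osd cap is missing or read-only, you can create a properly scoped client key and remount with it, e.g.:

ceph fs authorize cephfs client.testfs / rw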


Quoting Patrick <quith@xxxxxx>:

Hi all,


My ceph cluster is HEALTH_OK, but I cannot write on cephfs.
OS: Ubuntu 20.04, ceph version 15.2.5, deploy with cephadm.


root@RK01-OSD-A001:~# ceph -s
  cluster:
    id:     9091b472-1bdb-11eb-b217-abff3468259e
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum RK01-OSD-A001,RK02-OSD-A002,RK03-OSD-A003 (age 18s)
    mgr: RK01-OSD-A001.jwrjgj(active, since 51m), standbys: RK03-OSD-A003.tulrii
    mds: cephfs:1 {0=cephfs.RK02-OSD-A002.lwpgaw=up:active} 1 up:standby
    osd: 6 osds: 6 up (since 44m), 6 in (since 44m)

  task status:
    scrub status:
        mds.cephfs.RK02-OSD-A002.lwpgaw: idle

  data:
    pools:   3 pools, 65 pgs
    objects: 24 objects, 67 KiB
    usage:   6.0 GiB used, 44 TiB / 44 TiB avail
    pgs:     65 active+clean


root@RK01-OSD-A001:~# ceph fs status
cephfs - 1 clients
======
RANK  STATE             MDS                ACTIVITY     DNS    INOS
 0    active  cephfs.RK02-OSD-A002.lwpgaw  Reqs:    0 /s    13     15
       POOL          TYPE     USED  AVAIL
cephfs.cephfs.meta  metadata  1152k  20.7T
cephfs.cephfs.data    data       0   20.7T
       STANDBY MDS
cephfs.RK03-OSD-A003.xchwqj
MDS version: ceph version 15.2.5 (2c93eff00150f0cc5f106a559557a58d3d7b6f1f) octopus (stable)



root@RK05-FRP-A001:~# df -h|grep "ceph-test"
172.16.65.1,172.16.65.2,172.16.65.3:6789:/   21T     0   21T   0% /ceph-test
root@RK05-FRP-A001:~# echo 123 > /ceph-test/1.txt
-bash: echo: write error: Operation not permitted
root@RK05-FRP-A001:~# ls -l /ceph-test/1.txt
-rw-r--r-- 1 root root 0 Nov  1 09:40 /ceph-test/1.txt
root@RK05-FRP-A001:~# ls -ld /ceph-test/
drwxr-xr-x 2 root root 1 Nov  1 09:40 /ceph-test/


root@RK01-OSD-A001:~# cd /var/log/ceph/`ceph fsid`
root@RK01-OSD-A001:/var/log/ceph/9091b472-1bdb-11eb-b217-abff3468259e# cat ceph-volume.log | grep err | grep sdx
[2020-11-01 08:53:51,384][ceph_volume.process][INFO  ] stderr Failed to find physical volume "/dev/sdx".
[2020-11-01 08:53:51,417][ceph_volume.process][INFO  ] stderr unable to read label for /dev/sdx: (2) No such file or directory
[2020-11-01 08:53:51,445][ceph_volume.process][INFO  ] stderr unable to read label for /dev/sdx: (2) No such file or directory


root@RK01-OSD-A001:~# pvs|grep sdx
  /dev/sdx   ceph-41b09a52-e44b-43c5-ad86-0eada11b48b6 lvm2 a--  <7.28t    0
root@RK01-OSD-A001:~# lsblk|grep sdx
sdx                                                   65:112  0   7.3T  0 disk
root@RK01-OSD-A001:~# parted -s /dev/sdx print
Error: /dev/sdx: unrecognised disk label
Model: LSI MR9261-8i (scsi)
Disk /dev/sdx: 8001GB
Sector size (logical/physical): 512B/4096B
Partition Table: unknown
Disk Flags:
root@RK01-OSD-A001:~#


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


