Re: cephfs : write error: Operation not permitted

Hi Frank,

For some reason, the command "ceph fs authorize" no longer adds the required permissions for an FS with data pools; older versions did. You now need to add these caps by hand. They need to look something like this:

caps osd = "allow rw tag cephfs pool=cephfs_data, allow rw pool=cephfs-data"

An easy way is:

- ceph auth export
- add the caps with an editor
- ceph auth import
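
A minimal sketch of those three steps, assuming the client entity is called client.cephfsadmin (use your own entity name and keyring path):

$ ceph auth export client.cephfsadmin -o client.cephfsadmin.keyring
# edit client.cephfsadmin.keyring and adjust the "caps osd" line as above
$ ceph auth import -i client.cephfsadmin.keyring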

I consider this a bug and thought it was fixed in newer versions already.
Sorry, I had a typo. If you have separate meta- and data pools, the data pool is not added properly. The caps should look like:

caps osd = "allow rw tag cephfs pool=cephfs-meta-pool, allow rw pool=cephfs-data-pool"

If you don't have a separate data pool, it should work out of the box.
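
For reference, the same caps can also be set in one step with "ceph auth caps"; a sketch following the format above, assuming the default pool names cephfs_metadata and cephfs_data and a client called client.cephfsadmin (adjust to your setup and verify afterwards with "ceph auth get"):

$ ceph auth caps client.cephfsadmin \
      mds "allow rw" \
      mon "allow r" \
      osd "allow rw tag cephfs pool=cephfs_metadata, allow rw pool=cephfs_data"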

Thanks for the feedback. I also got that information from dirtwash on the IRC channel, and it fixed my issue. And yes, I also consider this a bug.

But I have another cluster, also on Ubuntu 18.04 (4.15.0-74-generic) with Nautilus 14.2.6, installed one week earlier, and it works. I ran the same commands on both and I don't get the same behaviour.

There are a few differences between the 2 clusters: the hardware config is not the same (no SSD on the dslab2020 cluster), and cephfs_data is on an 8+3 EC pool on Artemis (see the end of artemis.txt). In the attachment, I put the results of the commands I ran on both clusters, with the differing behaviour at the end.

Best,

Yoann

________________________________________
From: Yoann Moulin <yoann.moulin@xxxxxxx>
Sent: 22 January 2020 08:58:29
To: ceph-users
Subject:  cephfs : write error: Operation not permitted

Hello,

On a fresh install (Nautilus 14.2.6) deployed with the ceph-ansible playbook stable-4.0, I have an issue with cephfs. I can create a folder and I can
create empty files, but I cannot write data, as if I'm not allowed to write to the cephfs_data pool.

$ ceph -s
   cluster:
     id:     fded5bb5-62c5-4a88-b62c-0986d7c7ac09
     health: HEALTH_OK

   services:
     mon: 3 daemons, quorum iccluster039,iccluster041,iccluster042 (age 23h)
     mgr: iccluster039(active, since 21h), standbys: iccluster041, iccluster042
     mds: cephfs:3 {0=iccluster043=up:active,1=iccluster041=up:active,2=iccluster042=up:active}
     osd: 24 osds: 24 up (since 22h), 24 in (since 22h)
     rgw: 1 daemon active (iccluster043.rgw0)

   data:
     pools:   9 pools, 568 pgs
     objects: 800 objects, 225 KiB
     usage:   24 GiB used, 87 TiB / 87 TiB avail
     pgs:     568 active+clean

The 2 cephfs pools:

$ ceph osd pool ls detail | grep cephfs
pool 1 'cephfs_data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 256 pgp_num 256 autoscale_mode warn last_change 83 lfor 0/0/81 flags hashpspool stripe_width 0 expected_num_objects 1 application cephfs
pool 2 'cephfs_metadata' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 48 flags hashpspool stripe_width 0 expected_num_objects 1 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs

The status of the cephfs filesystem:

$ ceph fs status
cephfs - 1 clients
======
+------+--------+--------------+---------------+-------+-------+
| Rank | State  |     MDS      |    Activity   |  dns  |  inos |
+------+--------+--------------+---------------+-------+-------+
|  0   | active | iccluster043 | Reqs:    0 /s |   34  |   18  |
|  1   | active | iccluster041 | Reqs:    0 /s |   12  |   16  |
|  2   | active | iccluster042 | Reqs:    0 /s |   10  |   13  |
+------+--------+--------------+---------------+-------+-------+
+-----------------+----------+-------+-------+
|       Pool      |   type   |  used | avail |
+-----------------+----------+-------+-------+
| cephfs_metadata | metadata | 4608k | 27.6T |
|   cephfs_data   |   data   |    0  | 27.6T |
+-----------------+----------+-------+-------+
+-------------+
| Standby MDS |
+-------------+
+-------------+
MDS version: ceph version 14.2.6 (f0aa067ac7a02ee46ea48aa26c6e298b5ea272e9) nautilus (stable)


# mkdir folder
# echo "foo" > bar
-bash: echo: write error: Operation not permitted
# ls -al
total 4
drwxrwxrwx  1 root root    2 Jan 22 07:30 .
drwxr-xr-x 28 root root 4096 Jan 21 09:25 ..
-rw-r--r--  1 root root    0 Jan 22 07:30 bar
drwxrwxrwx  1 root root    1 Jan 21 16:49 folder

# df -hT .
Filesystem                                     Type  Size  Used Avail Use% Mounted on
10.90.38.15,10.90.38.17,10.90.38.18:/dslab2020 ceph   28T     0   28T   0% /cephfs

I tried 2 client configs:

$ ceph --cluster dslab2020 fs authorize cephfs client.cephfsadmin / rw
[snip]
$ ceph auth  get client.fsadmin
exported keyring for client.fsadmin
[client.fsadmin]
       key = [snip]
       caps mds = "allow rw"
       caps mon = "allow r"
       caps osd = "allow rw tag cephfs data=cephfs"

$ ceph --cluster dslab2020 fs authorize cephfs client.cephfsadmin / rw
[snip]
$ ceph auth caps client.cephfsadmin mds "allow rw" mon "allow r" osd "allow rw tag cephfs pool=cephfs_data "
updated caps for client.cephfsadmin
$ ceph auth  get client.cephfsadmin
exported keyring for client.cephfsadmin
[client.cephfsadmin]
       key = [snip]
       caps mds = "allow rw"
       caps mon = "allow r"
       caps osd = "allow rw tag cephfs pool=cephfs_data "

I don't know where to look to get more information about this issue. Can anyone help me? Thanks.

Best regards,

--
Yoann Moulin
EPFL IC-IT



--
Yoann Moulin
EPFL IC-IT
dslab2020@icitsrv5:~$ ceph -s
  cluster:
    id:     fded5bb5-62c5-4a88-b62c-0986d7c7ac09
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum iccluster039,iccluster041,iccluster042 (age 46h)
    mgr: iccluster039(active, since 45h), standbys: iccluster041, iccluster042
    mds: cephfs:3 {0=iccluster043=up:active,1=iccluster041=up:active,2=iccluster042=up:active}
    osd: 24 osds: 24 up (since 46h), 24 in (since 46h)
    rgw: 1 daemon active (iccluster043.rgw0)
 
  data:
    pools:   9 pools, 568 pgs
    objects: 800 objects, 431 KiB
    usage:   24 GiB used, 87 TiB / 87 TiB avail
    pgs:     568 active+clean


dslab2020@icitsrv5:~$ ceph fs status
cephfs - 3 clients
======
+------+--------+--------------+---------------+-------+-------+
| Rank | State  |     MDS      |    Activity   |  dns  |  inos |
+------+--------+--------------+---------------+-------+-------+
|  0   | active | iccluster043 | Reqs:    0 /s |   44  |   22  |
|  1   | active | iccluster041 | Reqs:    0 /s |   12  |   16  |
|  2   | active | iccluster042 | Reqs:    0 /s |   11  |   14  |
+------+--------+--------------+---------------+-------+-------+
+-----------------+----------+-------+-------+
|       Pool      |   type   |  used | avail |
+-----------------+----------+-------+-------+
| cephfs_metadata | metadata | 5193k | 27.6T |
|   cephfs_data   |   data   |    0  | 27.6T |
+-----------------+----------+-------+-------+
+-------------+
| Standby MDS |
+-------------+
+-------------+
MDS version: ceph version 14.2.6 (f0aa067ac7a02ee46ea48aa26c6e298b5ea272e9) nautilus (stable)
dslab2020@icitsrv5:~$ ceph osd pool ls detail
pool 1 'cephfs_data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 256 pgp_num 256 autoscale_mode warn last_change 83 lfor 0/0/81 flags hashpspool stripe_width 0 expected_num_objects 1 application cephfs
pool 2 'cephfs_metadata' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 48 flags hashpspool stripe_width 0 expected_num_objects 1 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs
pool 3 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 51 flags hashpspool stripe_width 0 application rgw
pool 4 'defaults.rgw.buckets.data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 256 pgp_num 256 autoscale_mode warn last_change 78 lfor 0/0/76 flags hashpspool stripe_width 0 application rgw
pool 5 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 53 flags hashpspool stripe_width 0 application rgw
pool 6 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 55 flags hashpspool stripe_width 0 application rgw
pool 7 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 57 flags hashpspool stripe_width 0 application rgw
pool 8 'default.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 87 flags hashpspool stripe_width 0 application rgw
pool 9 'default.rgw.buckets.data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 90 flags hashpspool stripe_width 0 application rgw

dslab2020@icitsrv5:~$ ceph fs authorize cephfs client.test /test rw
[client.test]
	key = XXX
dslab2020@icitsrv5:~$ ceph auth get client.test
exported keyring for client.test
[client.test]
	key = XXX
	caps mds = "allow rw path=/test"
	caps mon = "allow r"
	caps osd = "allow rw tag cephfs data=cephfs"
root@icitsrv5:~# ceph --cluster dslab2020 auth get-key client.test > /etc/ceph/dslab2020.client.test.secret
root@icitsrv5:~# mkdir -p /mnt/dslab2020/test ; mount -t ceph -o rw,relatime,name=test,secretfile=/etc/ceph/dslab2020.client.test.secret  iccluster039.iccluster.epfl.ch,iccluster041.iccluster.epfl.ch,iccluster042.iccluster.epfl.ch:/test /mnt/dslab2020/test
root@icitsrv5:~# ls -al /mnt/dslab2020/test
total 4
drwxr-xr-x 1 root root    1 Jan 23 07:21 .
drwxr-xr-x 3 root root 4096 Jan 23 07:14 ..
root@icitsrv5:~# echo "test" > /mnt/dslab2020/test/foo
-bash: echo: write error: Operation not permitted
root@icitsrv5:~# ls -al /mnt/dslab2020/test
total 4
drwxr-xr-x 1 root root    1 Jan 23 07:21 .
drwxr-xr-x 3 root root 4096 Jan 23 07:14 ..
-rw-r--r-- 1 root root    0 Jan 23 07:21 foo
artemis@icitsrv5:~$ ceph -s
  cluster:
    id:     815ea021-7839-4a63-9dc1-14f8c5feecc6
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum iccluster003,iccluster005,iccluster007 (age 6d)
    mgr: iccluster021(active, since 5d), standbys: iccluster023
    mds: cephfs:1 {0=iccluster013=up:active} 2 up:standby
    osd: 80 osds: 80 up (since 6d), 80 in (since 6d); 68 remapped pgs
    rgw: 8 daemons active (iccluster003.rgw0, iccluster005.rgw0, iccluster007.rgw0, iccluster013.rgw0, iccluster015.rgw0, iccluster019.rgw0, iccluster021.rgw0, iccluster023.rgw0)
 
  data:
    pools:   9 pools, 1592 pgs
    objects: 41.82M objects, 103 TiB
    usage:   149 TiB used, 292 TiB / 442 TiB avail
    pgs:     22951249/457247397 objects misplaced (5.019%)
             1524 active+clean
             55   active+remapped+backfill_wait
             13   active+remapped+backfilling
 
  io:
    client:   0 B/s rd, 7.6 MiB/s wr, 0 op/s rd, 201 op/s wr
    recovery: 340 MiB/s, 121 objects/s
 
artemis@icitsrv5:~$ ceph fs status
cephfs - 4 clients
======
+------+--------+--------------+---------------+-------+-------+
| Rank | State  |     MDS      |    Activity   |  dns  |  inos |
+------+--------+--------------+---------------+-------+-------+
|  0   | active | iccluster013 | Reqs:   10 /s |  346k |  337k |
+------+--------+--------------+---------------+-------+-------+
+-----------------+----------+-------+-------+
|       Pool      |   type   |  used | avail |
+-----------------+----------+-------+-------+
| cephfs_metadata | metadata |  751M | 81.1T |
|   cephfs_data   |   data   | 14.2T |  176T |
+-----------------+----------+-------+-------+
+--------------+
| Standby MDS  |
+--------------+
| iccluster019 |
| iccluster015 |
+--------------+
MDS version: ceph version 14.2.6 (f0aa067ac7a02ee46ea48aa26c6e298b5ea272e9) nautilus (stable)
artemis@icitsrv5:~$ ceph osd pool ls detail
pool 3 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 125 flags hashpspool stripe_width 0 application rgw
pool 4 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 128 flags hashpspool stripe_width 0 application rgw
pool 5 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 130 flags hashpspool stripe_width 0 application rgw
pool 6 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 131 flags hashpspool stripe_width 0 application rgw
pool 7 'cephfs_data' erasure size 11 min_size 9 crush_rule 1 object_hash rjenkins pg_num 512 pgp_num 512 autoscale_mode warn last_change 204 lfor 0/0/199 flags hashpspool,ec_overwrites stripe_width 32768 application cephfs
pool 8 'cephfs_metadata' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 144 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs
pool 9 'default.rgw.buckets.data' erasure size 11 min_size 9 crush_rule 2 object_hash rjenkins pg_num 1024 pgp_num 808 pgp_num_target 1024 autoscale_mode warn last_change 2982 lfor 0/0/180 flags hashpspool stripe_width 32768 application rgw
pool 10 'default.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 171 flags hashpspool stripe_width 0 application rgw
pool 11 'default.rgw.buckets.non-ec' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 176 flags hashpspool stripe_width 0 application rgw

artemis@icitsrv5:~$ ceph fs authorize cephfs client.test /test rw
[client.test]
	key = XXX 
artemis@icitsrv5:~$ ceph auth get client.test
exported keyring for client.test
[client.test]
	key = XXX
	caps mds = "allow rw path=/test"
	caps mon = "allow r"
	caps osd = "allow rw tag cephfs data=cephfs"
root@icitsrv5:~# ceph --cluster artemis auth get-key client.test > /etc/ceph/artemis.client.test.secret
root@icitsrv5:~# mkdir -p /mnt/artemis/test/ ; mount -t ceph -o rw,relatime,name=test,secretfile=/etc/ceph/artemis.client.test.secret iccluster003.iccluster.epfl.ch,iccluster005.iccluster.epfl.ch,iccluster007.iccluster.epfl.ch:/test /mnt/artemis/test/
root@icitsrv5:~# ls -la /mnt/artemis/test/
total 5
drwxr-xr-x 1 root root    1 Jan 23 07:21 .
drwxr-xr-x 3 root root 4096 Jan 23 07:15 ..
root@icitsrv5:~# echo "test" > /mnt/artemis/test/foo 
root@icitsrv5:~# ls -la /mnt/artemis/test/
total 5
drwxr-xr-x 1 root root    1 Jan 23 07:21 .
drwxr-xr-x 3 root root 4096 Jan 23 07:15 ..
-rw-r--r-- 1 root root    5 Jan 23 07:21 foo


# What I did to get an EC pool as the cephfs_data pool:

# must stop mds servers
ansible -i ~/iccluster/ceph-config/cluster-artemis/inventory mdss -m shell -a " systemctl stop ceph-mds.target"

# must allow pool deletion
ceph --cluster artemis tell mon.\* injectargs '--mon-allow-pool-delete=true'

ceph --cluster artemis fs rm cephfs --yes-i-really-mean-it

# delete the cephfs pools
ceph --cluster artemis osd pool rm cephfs_data  cephfs_data --yes-i-really-really-mean-it --yes-i-really-really-mean-it
ceph --cluster artemis osd pool rm cephfs_metadata  cephfs_metadata --yes-i-really-really-mean-it --yes-i-really-really-mean-it

# disallow pool deletion
ceph --cluster artemis tell mon.\* injectargs '--mon-allow-pool-delete=false'

# create erasure coding profile
# ecpool-8-3 for apollo cluster
ceph --cluster artemis osd erasure-code-profile set ecpool-8-3 k=8 m=3 crush-failure-domain=host

# re-create the pools for cephfs
# cephfs_data in erasure coding
ceph --cluster artemis osd pool create cephfs_data 64 64 erasure ecpool-8-3
# cephfs_metadata must be a replicated pool!
ceph --cluster artemis osd pool create cephfs_metadata 8 8

# must set allow_ec_overwrites to be able to create a cephfs on an EC data pool
ceph --cluster artemis osd pool set cephfs_data allow_ec_overwrites true

# create the cephfs filesystem named "cephfs"
ceph --cluster artemis fs new cephfs cephfs_metadata cephfs_data
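
# A few optional sanity checks after recreating the filesystem (generic
# checks, not part of the procedure above; pool and cluster names as above):

# confirm the data pool kept the cephfs application tag and the EC profile
ceph --cluster artemis osd pool application get cephfs_data
ceph --cluster artemis osd pool get cephfs_data erasure_code_profile

# confirm the new filesystem references both pools
ceph --cluster artemis fs ls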

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
