Le 23.01.20 à 12:50, Frank Schilder a écrit :
You should probably enable the application "cephfs" on the fs-pools.
It's already enabled:
pool 1 'cephfs_data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 256 pgp_num 256 autoscale_mode warn last_change 83 lfor 0/0/81 flags hashpspool stripe_width 0 expected_num_objects 1 application cephfs
In both your cases, the osd caps should read
caps osd = "allow rw tag cephfs data=cephfs_metadata, allow rw pool=cephfs_data"
Currently, I have this and it works:
caps: [osd] allow class-read object_prefix rbd_children, allow rw pool=cephfs_data
I opened a bug on the tracker: https://tracker.ceph.com/issues/43761
This is independent of the replication type of cephfs_data.
Yup, this is what I understood.
Yoann
________________________________________
From: Yoann Moulin <yoann.moulin@xxxxxxx>
Sent: 23 January 2020 10:38:42
To: Frank Schilder; ceph-users
Subject: Re: Re: cephfs : write error: Operation not permitted
Hi Frank,
For some reason, the command "ceph fs authorize" no longer adds the required permissions for an FS with separate data pools; older versions did. Now you need to add these caps by hand. They need to look something like this:
caps osd = "allow rw tag cephfs pool=cephfs_data, allow rw pool=cephfs-data"
An easy way is:
- ceph auth export
- add the caps with an editor
- ceph auth import
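As a concrete sketch of that cycle (the keyring file name is arbitrary, and client.fsadmin stands in for your client):
$ ceph auth export client.fsadmin -o fsadmin.keyring
$ vi fsadmin.keyring   # adjust the caps osd = "..." line
$ ceph auth import -i fsadmin.keyring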
I consider this a bug and thought it was fixed in newer versions already.
Sorry, I had a typo. If you have separate meta- and data pools, the data pool is not added properly. The caps should look like:
caps osd = "allow rw tag cephfs pool=cephfs-meta-pool, allow rw pool=cephfs-data-pool"
If you don't have a separate data pool, it should work out of the box.
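(If you are unsure of the pool names, "ceph fs ls" lists the metadata and data pools per filesystem; with the naming used in this thread the output would look roughly like this:)
$ ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]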
Thanks for the feedback. I also got that information from dirtwash on the IRC channel, which fixed my issue. And yes, for me it's a bug as well.
But I have another cluster, also on Ubuntu 18.04 (4.15.0-74-generic) with Nautilus 14.2.6, installed one week earlier, and there it works. I ran the same commands on both but did not get the same behaviour.
There are a few differences between the 2 clusters: the hardware config is not the same (no SSD on the dslab2020 cluster) and cephfs_data is on an 8+3 EC pool on Artemis (see the end of artemis.txt). In the attachment I put the result of the commands I ran on both clusters, with the differing behaviour at the end.
Best,
Yoann
________________________________________
From: Yoann Moulin <yoann.moulin@xxxxxxx>
Sent: 22 January 2020 08:58:29
To: ceph-users
Subject: cephfs : write error: Operation not permitted
Hello,
On a fresh install (Nautilus 14.2.6) deployed with the ceph-ansible playbook stable-4.0, I have an issue with cephfs. I can create a folder and create empty files, but I cannot write any data, as if I'm not allowed to write to the cephfs_data pool.
$ ceph -s
  cluster:
    id:     fded5bb5-62c5-4a88-b62c-0986d7c7ac09
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum iccluster039,iccluster041,iccluster042 (age 23h)
    mgr: iccluster039(active, since 21h), standbys: iccluster041, iccluster042
    mds: cephfs:3 {0=iccluster043=up:active,1=iccluster041=up:active,2=iccluster042=up:active}
    osd: 24 osds: 24 up (since 22h), 24 in (since 22h)
    rgw: 1 daemon active (iccluster043.rgw0)

  data:
    pools:   9 pools, 568 pgs
    objects: 800 objects, 225 KiB
    usage:   24 GiB used, 87 TiB / 87 TiB avail
    pgs:     568 active+clean
The 2 cephfs pools:
$ ceph osd pool ls detail | grep cephfs
pool 1 'cephfs_data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 256 pgp_num 256 autoscale_mode warn last_change 83 lfor 0/0/81 flags hashpspool stripe_width 0 expected_num_objects 1 application cephfs
pool 2 'cephfs_metadata' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 48 flags hashpspool stripe_width 0 expected_num_objects 1 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs
The status of the cephfs filesystem:
$ ceph fs status
cephfs - 1 clients
======
+------+--------+--------------+---------------+-------+-------+
| Rank | State | MDS | Activity | dns | inos |
+------+--------+--------------+---------------+-------+-------+
| 0 | active | iccluster043 | Reqs: 0 /s | 34 | 18 |
| 1 | active | iccluster041 | Reqs: 0 /s | 12 | 16 |
| 2 | active | iccluster042 | Reqs: 0 /s | 10 | 13 |
+------+--------+--------------+---------------+-------+-------+
+-----------------+----------+-------+-------+
| Pool | type | used | avail |
+-----------------+----------+-------+-------+
| cephfs_metadata | metadata | 4608k | 27.6T |
| cephfs_data | data | 0 | 27.6T |
+-----------------+----------+-------+-------+
+-------------+
| Standby MDS |
+-------------+
+-------------+
MDS version: ceph version 14.2.6 (f0aa067ac7a02ee46ea48aa26c6e298b5ea272e9) nautilus (stable)
# mkdir folder
# echo "foo" > bar
-bash: echo: write error: Operation not permitted
# ls -al
total 4
drwxrwxrwx 1 root root 2 Jan 22 07:30 .
drwxr-xr-x 28 root root 4096 Jan 21 09:25 ..
-rw-r--r-- 1 root root 0 Jan 22 07:30 bar
drwxrwxrwx 1 root root 1 Jan 21 16:49 folder
# df -hT .
Filesystem Type Size Used Avail Use% Mounted on
10.90.38.15,10.90.38.17,10.90.38.18:/dslab2020 ceph 28T 0 28T 0% /cephfs
I tried 2 client configs:
$ ceph --cluster dslab2020 fs authorize cephfs client.fsadmin / rw
[snip]
$ ceph auth get client.fsadmin
exported keyring for client.fsadmin
[client.fsadmin]
        key = [snip]
        caps mds = "allow rw"
        caps mon = "allow r"
        caps osd = "allow rw tag cephfs data=cephfs"
$ ceph --cluster dslab2020 fs authorize cephfs client.cephfsadmin / rw
[snip]
$ ceph auth caps client.cephfsadmin mds "allow rw" mon "allow r" osd "allow rw tag cephfs pool=cephfs_data "
updated caps for client.cephfsadmin
$ ceph auth get client.cephfsadmin
exported keyring for client.cephfsadmin
[client.cephfsadmin]
        key = [snip]
        caps mds = "allow rw"
        caps mon = "allow r"
        caps osd = "allow rw tag cephfs pool=cephfs_data "
I don't know where to look to get more information about this issue. Can anyone help me? Thanks!
Best regards,
--
Yoann Moulin
EPFL IC-IT
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx