I am trying to set up CephFS with a cache tier (for the data pool) on a mini test cluster, but a kernel-mount CephFS client is unable to write. The cache tier on its own seems to work (I verified with `rados put` and `ceph osd map` which OSDs the objects land on), and CephFS without the cache tiering also worked fine on the same cluster with the same client, but combining the two fails. Here is what I have tried:
Ceph version: 14.2.6
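(For reference, the cache-tier verification mentioned above was along these lines; the object and file names here are just placeholders:)
$ rados -p cephfs-data put test-obj ./test-file // write goes through the writeback overlay
$ ceph osd map cephfs-data-cache test-obj // shows the PG and (SSD) OSDs holding the object in the cache pool
$ rados -p cephfs-data-cache ls // the object shows up in the cache pool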
Set up Cache Tier:
$ ceph osd crush rule create-replicated highspeedpool default host ssd
$ ceph osd crush rule create-replicated highcapacitypool default host hdd
$ ceph osd pool create cephfs-data 256 256 highcapacitypool
$ ceph osd pool create cephfs-metadata 128 128 highspeedpool
$ ceph osd pool create cephfs-data-cache 256 256 highspeedpool
$ ceph osd tier add cephfs-data cephfs-data-cache
$ ceph osd tier cache-mode cephfs-data-cache writeback
$ ceph osd tier set-overlay cephfs-data cephfs-data-cache
$ ceph osd pool set cephfs-data-cache hit_set_type bloom
###
All the other cache-tier settings applied (hit_set_count, hit_set_period, target_max_bytes, etc.)
###
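(Roughly the following, matching the values visible in the pool detail output further down; the remaining settings from the "etc." are omitted here:)
$ ceph osd pool set cephfs-data-cache hit_set_count 2
$ ceph osd pool set cephfs-data-cache hit_set_period 120
$ ceph osd pool set cephfs-data-cache target_max_bytes 80000000000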
$ ceph-deploy mds create <mds node>
$ ceph fs new cephfs_test cephfs-metadata cephfs-data
$ ceph fs authorize cephfs_test client.testuser / rw
$ ceph auth ls
client.testuser
key: XXXYYYYZZZZ
caps: [mds] allow rw
caps: [mon] allow r
caps: [osd] allow rw tag cephfs data=cephfs_test
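(Since the OSD cap is tag-based, I can also share which application tags each pool carries if that is relevant, e.g.:)
$ ceph osd pool application get cephfs-data
$ ceph osd pool application get cephfs-data-cache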
### Confirm the pool setting
$ ceph osd pool ls detail
pool 1 'cephfs-data' replicated size 3 min_size 2 crush_rule 2 object_hash rjenkins pg_num 256 pgp_num 256 autoscale_mode warn last_change 63 lfor 53/53/53 flags hashpspool tiers 3 read_tier 3 write_tier 3 stripe_width 0 application cephfs
pool 2 'cephfs-metadata' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode warn last_change 63 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs
pool 3 'cephfs-data-cache' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode warn last_change 63 lfor 53/53/53 flags hashpspool,incomplete_clones tier_of 1 cache_mode writeback target_bytes 80000000000 hit_set bloom{false_positive_probability: 0.05, target_size: 0, seed: 0} 120s x2 decay_rate 0 search_last_n 0 stripe_width 0
#### Set up the client side (kernel mount)
$ sudo vim /etc/ceph/fsclient_secret
$ sudo mkdir /mnt/cephfs
$ sudo mount -t ceph <ceph MDS address>:6789:/ /mnt/cephfs -o name=testuser,secretfile=/etc/ceph/fsclient_secret // no errors at this point
$ sudo vim /mnt/cephfs/file1 // Writing attempt fails
"file1" E514: write error (file system full?)
WARNING: Original file may be lost or damaged
don't quit the editor until the file is successfully written!
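(Right after a failed write I can also check the client's kernel log, which should record the underlying error from the CephFS kernel client; I have not captured that output yet:)
$ dmesg | tail -n 20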
$ ls -l /mnt/cephfs
total 0
-rw-r--r-- 1 root root 0 Jan 21 16:25 file1
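(In case it helps with diagnosis, I can also collect cluster-side state on request, e.g.:)
$ ceph df detail // pool usage, to rule out a genuinely full pool
$ ceph health detail // any OSD or MDS warnings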
So the zero-byte file entry does get created (the metadata write appears to succeed), but writing the actual file data fails. Any help would be appreciated.
Mami Hayashida
Research Computing Associate
Univ. of Kentucky ITS Research Computing Infrastructure