Re: CephFS with cache-tier kernel-mount client unable to write (Nautilus)

Ilya,

Thank you for your suggestions! 

`dmesg` (on the client node) only had `libceph: mon0 10.33.70.222:6789 socket error on write`; no further detail. But using the admin key (client.admin) to mount CephFS solved my problem. I was able to write successfully! :-)

$ sudo mount -t ceph 10.33.70.222:6789:/  /mnt/cephfs -o name=admin,secretfile=/etc/ceph/fsclient_secret     // with the corresponding client.admin key
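
In case it is useful, the secretfile only needs to contain the bare key for the mounting user (not the full keyring); something like the following would populate it (illustrative only):

$ ceph auth get-key client.admin | sudo tee /etc/ceph/fsclient_secret     // example: write the bare client.admin key into the secret file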

$ sudo vim /mnt/cephfs/file4
$ sudo ls -l /mnt/cephfs
total 1
-rw-r--r-- 1 root root  0 Jan 21 16:25 file1
-rw-r--r-- 1 root root  0 Jan 21 16:45 file2
-rw-r--r-- 1 root root  0 Jan 21 18:35 file3
-rw-r--r-- 1 root root 22 Jan 21 18:42 file4

Now, here is the difference between the two keys. client.testuser was generated with `ceph fs authorize cephfs_test client.testuser / rw`, and something in those caps is evidently interfering once a Cache Tier pool sits in front of the data pool. Do I need to edit the `tag` or the `data` part? (Two possible adjustments are sketched after the cap listings below.) I should also mention that the same type of key (like client.testuser) worked just fine when I was testing CephFS without a Cache Tier pool.

client.admin
key: XXXYYYYZZZ
caps: [mds] allow *
caps: [mgr] allow *
caps: [mon] allow *
caps: [osd] allow *

client.testuser
key: XXXYYYYZZZ
caps: [mds] allow rw
caps: [mon] allow r
caps: [osd] allow rw tag cephfs data=cephfs_test
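
In case it helps, here are the two adjustments I am considering next (only a guess on my part, not yet verified): tagging the cache pool with the cephfs application so the tag-based OSD cap matches it, or naming the cache pool explicitly in testuser's OSD caps.

$ ceph osd pool application enable cephfs-data-cache cephfs     // option 1: tag the cache pool with the cephfs application
$ ceph auth caps client.testuser mds 'allow rw' mon 'allow r' osd 'allow rw tag cephfs data=cephfs_test, allow rw pool=cephfs-data-cache'     // option 2: add the cache pool to the OSD caps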


Mami Hayashida
Research Computing Associate
Univ. of Kentucky ITS Research Computing Infrastructure



On Tue, Jan 21, 2020 at 1:26 PM Ilya Dryomov <idryomov@xxxxxxxxx> wrote:
On Tue, Jan 21, 2020 at 6:02 PM Hayashida, Mami <mami.hayashida@xxxxxxx> wrote:
>
> I am trying to set up a CephFS with a Cache Tier (for data) on a mini test cluster, but a kernel-mount CephFS client is unable to write.  Cache tier setup alone seems to be working fine (I tested it with `rados put` and `osd map` commands to verify on which OSDs the objects are placed) and setting up CephFS without the cache-tiering also worked fine on the same cluster with the same client, but combining the two fails.  Here is what I have tried:
>
> Ceph version: 14.2.6
>
> Set up Cache Tier:
> $ ceph osd crush rule create-replicated highspeedpool default host ssd
> $ ceph osd crush rule create-replicated highcapacitypool default host hdd
>
> $ ceph osd pool create cephfs-data 256 256 highcapacitypool
> $ ceph osd pool create cephfs-metadata 128 128 highspeedpool
> $ ceph osd pool create cephfs-data-cache 256 256 highspeedpool
>
> $ ceph osd tier add cephfs-data cephfs-data-cache
> $ ceph osd tier cache-mode cephfs-data-cache writeback
> $ ceph osd tier set-overlay cephfs-data cephfs-data-cache
>
> $ ceph osd pool set cephfs-data-cache hit_set_type bloom
>
> ###
> All the cache tier configs set (hit_set_count, hit_set_period, target_max_bytes, etc.)
> ###
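> ### For example (values matching the `ceph osd pool ls detail` output below):
> $ ceph osd pool set cephfs-data-cache hit_set_count 2
> $ ceph osd pool set cephfs-data-cache hit_set_period 120
> $ ceph osd pool set cephfs-data-cache target_max_bytes 80000000000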
>
> $ ceph-deploy mds create <mds node>
> $ ceph fs new cephfs_test cephfs-metadata cephfs-data
>
> $ ceph fs authorize cephfs_test client.testuser / rw
> $ ceph auth ls
> client.testuser
> key: XXXYYYYZZZZ
> caps: [mds] allow rw
> caps: [mon] allow r
> caps: [osd] allow rw tag cephfs data=cephfs_test
>
> ### Confirm the pool setting
> $ ceph osd pool ls detail
> pool 1 'cephfs-data' replicated size 3 min_size 2 crush_rule 2 object_hash rjenkins pg_num 256 pgp_num 256 autoscale_mode warn last_change 63 lfor 53/53/53 flags hashpspool tiers 3 read_tier 3 write_tier 3 stripe_width 0 application cephfs
> pool 2 'cephfs-metadata' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode warn last_change 63 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs
> pool 3 'cephfs-data-cache' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode warn last_change 63 lfor 53/53/53 flags hashpspool,incomplete_clones tier_of 1 cache_mode writeback target_bytes 80000000000 hit_set bloom{false_positive_probability: 0.05, target_size: 0, seed: 0} 120s x2 decay_rate 0 search_last_n 0 stripe_width 0
>
> #### Set up the client side (kernel mount)
> $ sudo vim /etc/ceph/fsclient_secret
> $ sudo mkdir /mnt/cephfs
> $ sudo mount -t ceph <ceph MDS address>:6789:/  /mnt/cephfs -o name=testuser,secretfile=/etc/ceph/fsclient_secret     // no errors at this point
>
> $ sudo vim /mnt/cephfs/file1   // Writing attempt fails
>
> "file1" E514: write error (file system full?)
> WARNING: Original file may be lost or damaged
> don't quit the editor until the file is successfully written!
>
> $ ls -l /mnt/cephfs
> total 0
> -rw-r--r-- 1 root root 0 Jan 21 16:25 file1
>
> Any help will be appreciated.

Hi Mami,

Is there anything in dmesg?

What happens if you mount without involving testuser (i.e. using
client.admin and the admin key)?

Thanks,

                Ilya
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
