Re: cephfs tag not working

There is (or at least used to be) a bug in the ceph fs commands when adding data pools: if you enable the cephfs application on a pool explicitly before running ceph fs add_data_pool, the fs tag is not applied. Maybe that's what you are hitting? There is an older thread on this topic on the users list that also contains a fix/workaround.
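
If it is indeed that, the workaround from that thread was, if I
remember correctly, to set the application metadata by hand, roughly
like this (using the cephfs_data pool and the fs name cephfs from your
caps; the other data pools would need the same treatment):

  # show the application metadata the "tag cephfs data=<fs>" cap is matched against
  ceph osd pool application get cephfs_data

  # if the cephfs application is enabled but the "data" key is missing, set it
  ceph osd pool application set cephfs_data cephfs data cephfs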

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Eugen Block <eblock@xxxxxx>
Sent: 01 October 2020 15:33:53
To: ceph-users@xxxxxxx
Subject:  Re: cephfs tag not working

Hi,

I have a one-node cluster (also 15.2.4) for testing purposes and just
created a cephfs with the tag; it works for me. But my node is also
its own client, so there's that. And it was installed with 15.2.4, no
upgrade.

> For the 2nd one, the MDS part works, files can be created or removed,
> but client read/write (native kernel client, version 5.7.4) fails with
> an I/O error, so the OSD part does not seem to be working properly.

Do you mean it works if you mount it from a different host (maybe
within the cluster) with the new client's key, but it doesn't work
from the designated clients? I'm not sure the OSD part is the problem,
since you say the other cap syntax works.

Can you share more details about the error? The mount on the clients
works but they can't read/write?
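
Something like the following on one of the affected clients would help
to narrow it down (monitor address, mount point and secret file are
just placeholders here):

  # mount with the new key (kernel client)
  mount -t ceph <mon-host>:/ /mnt/cephfs -o name=f9desktopnew,secretfile=/etc/ceph/f9desktopnew.secret

  # metadata ops go through the MDS, data I/O through the OSDs
  touch /mnt/cephfs/captest                             # exercises the MDS caps
  dd if=/dev/zero of=/mnt/cephfs/captest bs=1M count=1  # an EIO here points at the OSD caps

  # the kernel client may log the rejected OSD request
  dmesg | tail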

Regards,
Eugen


Quoting Andrej Filipcic <andrej.filipcic@xxxxxx>:

> Hi,
>
> on Octopus 15.2.4 I have an issue with the cephfs tag auth. The
> following caps work fine:
>
> client.f9desktop
>         key: ....
>         caps: [mds] allow rw
>         caps: [mon] allow r
>         caps: [osd] allow rw pool=cephfs_data, allow rw pool=ssd_data,
>               allow rw pool=fast_data, allow rw pool=arich_data,
>               allow rw pool=ecfast_data
>
> but this one does not:
>
> client.f9desktopnew
>         key: ....
>         caps: [mds] allow rw
>         caps: [mon] allow r
>         caps: [osd] allow rw tag cephfs data=cephfs
>
> For the 2nd one, the MDS part works, files can be created or removed,
> but client read/write (native kernel client, version 5.7.4) fails with
> an I/O error, so the OSD part does not seem to be working properly.
>
> Any clues as to what could be wrong? The cephfs was created in Jewel...
>
> Another issue: if the OSD caps are updated (e.g. when adding a data
> pool), some clients refresh the caps, but most of them do not, and the
> only way to refresh them is to remount the filesystem. A working tag
> would solve that.
>
> Best regards,
> Andrej
>
> --
> _____________________________________________________________
>    prof. dr. Andrej Filipcic,   E-mail: Andrej.Filipcic@xxxxxx
>    Department of Experimental High Energy Physics - F9
>    Jozef Stefan Institute, Jamova 39, P.o.Box 3000
>    SI-1001 Ljubljana, Slovenia
>    Tel.: +386-1-477-3674    Fax: +386-1-477-3166
> -------------------------------------------------------------


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx